
SAP on Azure documentation


SAP on Azure provides multiple options for running, managing, and monitoring SAP
workloads on Azure.

SAP workloads on Azure VMs

OVERVIEW

How do you run SAP workloads on Azure VMs?

Documentation change log

SAP certifications on Azure

Supported SAP scenarios on Azure VMs

Supported SAP software on Azure VMs

CONCEPT

Planning guidance

Azure storage types for SAP workloads

High availability for SAP components

Azure Availability Zones for SAP workloads

HOW-TO GUIDE

How to deploy SAP on Azure VMs

How to deploy DBMS on Azure VMs

Azure Center for SAP solutions

OVERVIEW

What is Azure Center for SAP solutions?

FAQ about Azure Center for SAP solutions

HOW-TO GUIDE
Prepare for deployment

Deploy SAP infrastructure

Install SAP software

Register an existing SAP system

Start and stop SAP systems

Manage a Virtual Instance for SAP solutions

Get quality checks and insights

View cost analysis for SAP system

Azure Monitor for SAP solutions

OVERVIEW

What is Azure Monitor for SAP solutions?

FAQ about Azure Monitor for SAP solutions

CONCEPT

Providers in Azure Monitor for SAP solutions

QUICKSTART

Deploy Azure Monitor for SAP solutions (Azure portal)

Deploy Azure Monitor for SAP solutions (PowerShell)

SAP on Azure deployment automation framework

OVERVIEW

What is SAP on Azure deployment automation framework?

CONCEPT

Supported platform and features

Get started with the automation framework


Plan your deployment of the automation framework

QUICKSTART

Configure Azure DevOps Services for automation framework

SAP on Azure Large Instances

OVERVIEW

What is SAP HANA on Azure Large Instances?

CONCEPT

Certifications for Large Instances

Supported scenarios for Large Instances

Available SKUs for Large Instances

Architecture for Large Instances

HOW-TO GUIDE

How to install SAP HANA on Azure Large Instances

SAP and Microsoft integrations

OVERVIEW

What scenarios are available and how do I get started?

How do you integrate with SAP RISE?

HOW-TO GUIDE

How to configure M365 Exchange Online for SAP

How to expose SAP Process Orchestration on Azure securely

How to configure SAP printing with Microsoft Universal Print

QUICKSTART
Deploy an ERP extension using SAP's Cloud SDK on Azure in one click

Use free developer accounts for SAP BTP, M365 and Azure

Use SAP ABAP platform and SAP BTP, ABAP environment to integrate with Microsoft
What SAP on Azure offerings are available?
Article • 10/27/2023

There are multiple Microsoft Azure offerings for running and managing your SAP
systems. These offerings range from traditional Azure virtual machine (VM) offerings, to
top-level Azure services, to tools that integrate with other Azure services or external
products.

SAP on Azure VM workloads


You can run SAP workloads on the Azure platform using different Azure Virtual
Machines (Azure VMs) offerings. Azure is certified for multiple SAP products, including
SAP HANA and SAP NetWeaver products.

For more information, see the SAP on Azure VM workloads documentation.

SAP Integration with Microsoft Services


In addition to the capabilities to run SAP IaaS and SaaS workloads on Azure, Microsoft
offers a variety of capabilities, scenarios, best-practice guides, and tutorials to integrate
SAP workloads running anywhere with other Microsoft products and services. Among
them are popular services such as Microsoft Entra ID, Exchange Online, Power Platform
and Power BI, Azure Integration Services, Excel, SAP Business Technology Platform, SAP
Analytics Cloud, SAP Data Warehouse Cloud, and SAP Success Factors to name a few.

For more information, see the SAP Integration with Microsoft Services documentation.

SAP HANA on Azure (Large Instances)


SAP HANA on Azure (Large Instances) is a solution that provides VMs for deploying and
running SAP HANA.

For more information, see the SAP HANA on Azure (Large Instances) documentation.

Note

This offering is no longer accepting new customers. For alternatives, please check
the offers of HANA certified Azure VMs in the HANA Hardware Directory.
Azure Center for SAP solutions
Azure Center for SAP solutions is a service that makes SAP a top-level workload in
Azure. This end-to-end solution allows you to create and run SAP systems as a unified
workload on Azure. You can use this service through the Azure portal, a REST API, and
the Azure CLI.

For more information, see the Azure Center for SAP solutions documentation.
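
Because the service is exposed through Azure Resource Manager, you can also script
against it. The following is a minimal sketch (not official sample code) that lists the
Virtual Instances for SAP solutions in a resource group with Python. The api-version
value is an assumption that you should verify against the current REST reference; the
azure-identity and requests packages are required.

```python
# Hedged sketch: list Virtual Instances for SAP solutions via the ARM REST API.
# The resource type Microsoft.Workloads/sapVirtualInstances is documented for
# Azure Center for SAP solutions; the api-version below is an assumption.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
API_VERSION = "2023-04-01"              # assumption: check the REST reference

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Workloads/sapVirtualInstances"
)
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": API_VERSION},
)
response.raise_for_status()

# Print name and status of each Virtual Instance for SAP solutions.
for vis in response.json().get("value", []):
    print(vis["name"], vis.get("properties", {}).get("status"))
```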

SAP on Azure deployment automation framework

The SAP on Azure deployment automation framework is an open-source orchestration
tool for deploying, installing, and maintaining SAP environments.

For more information, see the SAP on Azure deployment automation framework
documentation.

Azure Monitor for SAP solutions


Azure Monitor for SAP solutions is an Azure-native monitoring product for SAP
landscapes that run on Azure, which uses specific parts of the Azure Monitor
infrastructure.

For more information, see the Azure Monitor for SAP solutions documentation.
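
Because Azure Monitor for SAP solutions stores the telemetry it collects in a Log
Analytics workspace, you can query that data with the standard azure-monitor-query
SDK. The sketch below assumes a workspace ID and a provider table name; table names
vary by the providers you configure, so treat both as placeholders.

```python
# Hedged sketch: query an Azure Monitor for SAP solutions Log Analytics
# workspace with the azure-monitor-query SDK.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder: your AMS workspace

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    WORKSPACE_ID,
    "SapHana_HostConfig_CL | take 10",  # assumption: table name depends on provider
    timespan=timedelta(hours=1),
)

# Print the raw rows returned by the KQL query.
for table in response.tables:
    for row in table.rows:
        print(row)
```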

Next steps
SAP solutions on Azure
Get started with SAP and Azure integration scenarios
Use Azure to host and run SAP workload scenarios
Article • 04/01/2024

When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads
and scenarios on a scalable, compliant, and enterprise-proven platform. You get the
scalability, flexibility, and cost savings of Azure. With the expanded partnership between
Microsoft and SAP, you can run SAP applications across development, test, and
production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP
S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL Server, Oracle, Db2, etc.,
we've got you covered.

Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on
Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership
with SAP resulted in various integration scenarios with the overall Microsoft ecosystem.
Check out the dedicated Integration section to learn more.

Our services Azure Center for SAP solutions and Azure Monitor for SAP solutions 2.0
recently entered the public preview stage. These services let you deploy SAP workloads
on Azure in a highly automated manner, in an optimal architecture and configuration,
and monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments in a
single pane of glass.

For customers and partners who are focused on deploying and operating their assets in
public cloud through Terraform and Ansible, use our SAP on Azure Deployment
Automation Framework to jump-start your SAP deployments into Azure using our public
Terraform and Ansible modules on GitHub.

Hosting SAP workload scenarios in Azure can also create requirements for identity
integration and single sign-on. This situation can occur when you use Microsoft Entra ID
to connect different SAP components and SAP software-as-a-service (SaaS) or platform-
as-a-service (PaaS) offerings. A list of such integration and single sign-on scenarios with
Microsoft Entra ID and SAP entities is described and documented in the section
"Microsoft Entra SAP identity integration and single sign-on."

Changes to the SAP workload section


Changes to documents in the SAP on Azure workload section are listed at the end of
this article. The entries in the change log are kept for around 180 days.
You want to know
If you have specific questions, the following list points you to the relevant documents
or resources:

Is Azure accepting new customers for HANA Large Instances? The HANA Large
Instance service is in sunset mode and no longer accepts new customers.
Providing units for existing HANA Large Instance customers is still possible. For
alternatives, check the offers of HANA certified Azure VMs in the HANA Hardware
Directory.
Can Microsoft Entra accounts be used to run the SAP ABAP stack in a Windows
guest OS? No. Due to shortcomings in the feature set of Microsoft Entra ID, it can't
be used for running the ABAP stack within the Windows guest OS.
Which Azure services, Azure VM types, and Azure storage services are available in
the different Azure regions? Check the site Products available by region, and see
the sketch after this list.
Are third-party HA frameworks, besides Windows and Pacemaker, supported?
Check the bottom part of SAP support note #1928533.
What Azure storage is best for my scenario? Read Azure Storage types for SAP
workload.
Is the Red Hat kernel in Oracle Enterprise Linux supported by SAP? Read SAP
support note #1565179.
Why aren't the Azure Da(s)v4/Ea(s) VM families certified for SAP HANA? The
Azure Das/Eas VM families are based on AMD processor-driven hardware. SAP
HANA doesn't support AMD processors, not even in virtualized scenarios.
Why am I still getting the message: 'The cpu flags for the RDTSCP instruction or the
cpu flags for constant_tsc or nonstop_tsc aren't set or current_clocksource and
available_clocksource aren't correctly configured' with SAP HANA, although I'm
running the most recent Linux kernels? For the answer, check SAP support note
#2791572.
Where can I find architectures for deploying SAP Fiori on Azure? Check out the
blog SAP on Azure: Application Gateway Web Application Firewall (WAF) v2 Setup
for Internet facing SAP Fiori Apps.
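
For the region-availability question above, a small script can complement the Products
available by region page. This is a hedged sketch using the azure-mgmt-compute SDK;
the subscription ID and region are placeholders, and you still need to cross-check the
output against the SAP certification lists.

```python
# Hedged sketch: list VM SKUs offered in a given Azure region, as a scripted
# complement to the "Products available by region" page.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
REGION = "westeurope"                  # example region

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# List compute SKUs in the region; cross-check SAP-certified families
# (for example M-series for HANA) against this output.
for sku in client.resource_skus.list(filter=f"location eq '{REGION}'"):
    if sku.resource_type == "virtualMachines":
        print(sku.name)
```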

Documentation space
In the SAP workload documentation space, you can find the following areas:

Integration with Microsoft Services and References contains links to different
integration scenarios between SAP and other Microsoft services. The list may not
be complete.
SAP on Azure Large Instances: This documentation section covers a bare-metal
service that originally was named HANA Large Instances. Different topics around
this technology are covered in this section.
Plan and Deploy (Azure VMs): When deploying SAP workload into Azure
infrastructure as a service, go through the documents in this section first to learn
more about the principal Azure components used and related guidelines.
Storage (Azure VMs): This section includes documents with recommendations on
how to use the different Azure storage types when deploying SAP workload on
Azure.
DBMS Guides (Azure VMs): The section DBMS Guides covers specifics around
deploying different DBMS that are supported for SAP workload in Azure IaaS.
High Availability (Azure VMs): This section covers many of the high availability
configurations for SAP workload on Azure. It includes detailed documentation
around deploying Windows clustering and Pacemaker cluster configurations for
the different SAP components and different database systems.
Automation Framework (Azure VMs): The Automation Framework documentation
covers a Terraform- and Ansible-based automation framework that allows
automation of Azure infrastructure and SAP software.
Azure Monitor for SAP solutions: Microsoft developed monitoring solutions
specifically for SAP-supported OS and DBMS, as well as S/4HANA and NetWeaver.
This section documents the deployment and usage of the service.

Change Log
May 21, 2024: Update timeouts and added start delay for pacemaker scheduled
events in Set up Pacemaker on RHEL in Azure and Set up Pacemaker on SUSE Linux
Enterprise Server (SLES) in Azure.
April 1, 2024: Reference the considerations section for sizing HANA shared file
system in NFS v4.1 volumes on Azure NetApp Files for SAP HANA, SAP HANA
Azure virtual machine Premium SSD storage configurations, SAP HANA Azure
virtual machine Premium SSD v2 storage configurations, and Azure Files NFS for
SAP
March 18, 2024: Added considerations for sizing the HANA shared file system in
SAP HANA Azure virtual machine storage configurations
February 07, 2024: Clarified disk allocation when using PPGs to bind availability set
in specific Availability Zone in Configuration options for optimal network latency
with SAP applications
February 01, 2024: Added guidance for SAP front-end printing to Universal Print
January 24, 2024: Split SAP RISE integration documentation into multiple segments
for improved legibility, additional overview information added.
January 22, 2024: Changes in all high availability documentation to include
guidelines for setting the “probeThreshold” property to 2 in the load balancer’s
health probe configuration.
January 21, 2024: Change recommendations around LARGEPAGES in Azure Virtual
Machines Oracle DBMS deployment for SAP workload
December 15, 2023: Change recommendations around DIRECTIO and LVM in
Azure Virtual Machines Oracle DBMS deployment for SAP workload
December 11, 2023: Add RHEL requirements to HANA third site for multi-target
replication and integrating into a Pacemaker cluster.
November 20, 2023: Add storage configuration for Mv3 medium memory VMs into
the documents SAP HANA Azure virtual machine Premium SSD storage
configurations, SAP HANA Azure virtual machine Premium SSD v2 storage
configurations, and SAP HANA Azure virtual machine Ultra Disk storage
configurations
November 20, 2023: Add supported storage matrix into the document Azure
Virtual Machines Oracle DBMS deployment for SAP workload
November 09, 2023: Change in SAP HANA infrastructure configurations and
operations on Azure to align multiple vNIC instructions with planning guide and
add /hana/shared on NFS on Azure Files
September 26, 2023: Change in SAP HANA scale-out HSR with Pacemaker on
Azure VMs on RHEL to add instructions for deploying /hana/shared (only) on NFS
on Azure Files
September 12, 2023: Adding support to handle Azure scheduled events for
Pacemaker clusters running on RHEL.
August 24, 2023: Support of priority-fencing-delay cluster property on two-node
pacemaker cluster to address split-brain situation in RHEL is updated on Setting up
Pacemaker on RHEL in Azure, High availability of SAP HANA on Azure VMs on
RHEL, High availability of SAP HANA Scale-up with ANF on RHEL, Azure VMs high
availability for SAP NW on RHEL with NFS on Azure Files, and Azure VMs high
availability for SAP NW on RHEL with Azure NetApp Files documents.
August 03, 2023: Change of recommendation to use a /25 IP range for delegated
subnet for ANF for SAP workload NFS v4.1 volumes on Azure NetApp Files for SAP
HANA
August 03, 2023: Change in support of block storage and NFS on ANF storage for
SAP HANA documented in SAP HANA Azure virtual machine storage
configurations
July 25, 2023: Adding reference to SAP Note #3074643 to Azure Virtual Machines
Oracle DBMS deployment for SAP workload
July 21, 2023: Support of priority-fencing-delay cluster property on two-node
pacemaker cluster to address split-brain situation in SLES is updated on High
availability for SAP HANA on Azure VMs on SLES, High availability of SAP HANA
Scale-up with ANF on SLES, Azure VMs high availability for SAP NetWeaver on
SLES for SAP Applications with simple mount and NFS, Azure VMs high availability
for SAP NW on SLES with NFS on Azure Files, Azure VMs high availability for SAP
NW on SLES with Azure NetApp Files document.
July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and
ANF in table in Azure Storage types for SAP workload
July 13, 2023: Statement that 512-byte and 4,096-byte sector sizes for Premium SSD v2
don't show any performance difference in SAP HANA Azure virtual machine Ultra
Disk storage configurations
July 13, 2023: Replaced links in ANF section of Azure Virtual Machines Oracle
DBMS deployment for SAP workload to new ANF related documentation
July 11, 2023: Add a note about Azure NetApp Files application volume group for
SAP HANA in HA for HANA Scale-up with ANF on SLES, HANA scale-out with
standby node with ANF on SLES, HA for HANA Scale-out HA on SLES, HA for
HANA scale-up with ANF on RHEL, HANA scale-out with standby node on Azure
VMs with ANF on RHEL and HA for HANA scale-out on RHEL
June 29, 2023: Update important considerations and sizing information in HA for
HANA scale-up with ANF on RHEL, HANA scale-out with standby node on Azure
VMs with ANF on RHEL
June 26, 2023: Update important considerations and sizing information in HA for
HANA Scale-up with ANF on SLES and HANA scale-out with standby node with
ANF on SLES.
June 23, 2023: Updated Azure scheduled events for SLES in Pacemaker set up
guide.
June 22, 2023: Statement that 512-byte and 4,096-byte sector sizes for Premium SSD v2
don't show any performance difference in SAP HANA Azure virtual machine
Premium SSD v2 storage configurations
June 1, 2023: Included virtual machine scale set with flexible orchestration
guidelines in SAP workload planning guide.
June 1, 2023: Updated high availability guidelines in HA architecture and scenarios,
and added additional deployment option in configuring optimal network latency
with SAP applications.
June 1, 2023: Release of virtual machine scale set with flexible orchestration
support for SAP workload.
April 25, 2023: Adjust mount options in HA for HANA Scale-up with ANF on SLES,
HANA scale-out with standby node with ANF on SLES, HA for HANA Scale-out HA
on SLES, HA for HANA scale-up with ANF on RHEL, HANA scale-out with standby
node on Azure VMs with ANF on RHEL, HA for HANA scale-out on RHEL, HA for
SAP NW on SLES with ANF, HA for SAP NW on RHEL with ANF, and HA for SAP
NW on SLES with simple mount and NFS
April 6, 2023: Updates for RHEL 9 in Setting up Pacemaker on RHEL in Azure
March 26, 2023: Adding recommended sector size in SAP HANA Azure virtual
machine Premium SSD v2 storage configurations
March 1, 2023: Change in HA for SAP HANA on Azure VMs on RHEL to add
configuration for cluster default properties
February 21, 2023: Correct link to HANA hardware directory in SAP HANA
infrastructure configurations and operations on Azure and fixed a bug in SAP
HANA Azure virtual machine Premium SSD v2 storage configurations
February 17, 2023: Add support and Sentinel sections, few other minor updates in
RISE with SAP integration
February 02, 2023: Add new HA provider susChkSrv for SAP HANA Scale-out HA on
SUSE and change from SAPHanaSR to SAPHanaSrMultiTarget provider, enabling
HANA multi-target replication
January 27, 2023: Mark Microsoft Entra Domain Services as supported AD solution
in SAP workload on Azure virtual machine supported scenarios after successful
testing
December 28, 2022: Update documents Azure Storage types for SAP workload and
NFS v4.1 volumes on Azure NetApp Files for SAP HANA to provide more details on
ANF deployment processes to achieve proximity and low latency. Introduction of
zonal deployment process of NFS shares on ANF
December 28, 2022: Updated the guide SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver across all topics. Also added VM configuration
examples for different sizes of databases
December 27, 2022: Introducing new configuration for SAP ASE on E96(d)s_v5 in
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
December 23, 2022: Updating Considerations for Azure Virtual Machines DBMS
deployment for SAP workload by cutting references to Azure standard HDD and
SSD. Introducing premium storage v2 and updating a few other sections to more
recent functionalities
December 20, 2022: Update article SAP workload on Azure virtual machine
supported scenarios with table around AD and Microsoft Entra ID support.
Deleting a few references to HANA Large Instances.
December 19, 2022: Update article SAP workload configurations with Azure
Availability Zones related to new functionalities like zonal replication of Azure
Premium Files
December 18, 2022: Add short description and link to intent option of PPG
creation in Azure proximity placement groups for optimal network latency with
SAP applications
December 14, 2022: Fixes in recommendations of capacity for a few VM types in
SAP HANA Azure virtual machine Premium SSD v2 storage configurations
November 30, 2022: Added storage recommendations for Premium SSD v2 into
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
November 22, 2022: Release of Disaster Recovery guidelines for SAP workload on
Azure - Disaster Recovery overview and infrastructure guidelines for SAP workload
and Disaster Recovery recommendation for SAP workload.
November 22, 2022: Update of SAP workloads on Azure: planning and deployment
checklist to add latest recommendations
November 18, 2022: Add a recommendation to use Pacemaker simple mount
configuration for new implementations on SLES 15 in Azure VMs HA for SAP NW
on SLES with simple mount and NFS, Azure VMs HA for SAP NW on SLES with NFS
on Azure File, Azure VMs HA for SAP NW on SLES with Azure NetApp Files and
Azure VMs HA for SAP NW on SLES
November 15, 2022: Change in HA for SAP HANA Scale-up with ANF on SLES, SAP
HANA scale-out with standby node on Azure VMs with ANF on SLES, HA for SAP
HANA scale-up with ANF on RHEL and SAP HANA scale-out with standby node on
Azure VMs with ANF on RHEL to add recommendation to use mount option
nconnect for workloads with higher throughput requirements

November 15, 2022: Add a recommendation for minimum required version of


package resource-agents in High availability of IBM Db2 LUW on Azure VMs on
Red Hat Enterprise Linux Server
November 14, 2022: Provided more details about nconnect mount option in NFS
v4.1 volumes on Azure NetApp Files for SAP HANA
November 14, 2022: Change in HA for SAP HANA scale-up with ANF on RHEL and
SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL to update
suggested timeouts for FileSystem Pacemaker cluster resources
November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster
in High availability of SAP HANA on Azure VMs on SLES, High availability of SAP
HANA Scale-up with ANF on SLES
November 07, 2022: Added monitor operation for azure-lb resource in High
availability of SAP HANA on Azure VMs on SLES, SAP HANA scale-out with HSR
and Pacemaker on SLES, Set up IBM Db2 HADR on Azure virtual machines (VMs),
Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with
simple mount and NFS, Azure VMs high availability for SAP NW on SLES with NFS
on Azure File, Azure VMs high availability for SAP NW on SLES with Azure NetApp
Files, Azure VMs high availability for SAP NetWeaver on SLES, High availability for
NFS on Azure VMs on SLES, Azure VMs high availability for SAP NetWeaver on
SLES multi-SID guide
October 31, 2022: Change in HA for NFS on Azure VMs on SLES to fix script
location for DRBD 9.0
October 31, 2022: Change in SAP HANA scale-out with standby node on Azure
VMs with ANF on SLES to update the guideline for sizing /hana/shared
October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to
table in SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
October 20, 2022: Change in HA for NFS on Azure VMs on SLES and HA for SAP
NW on Azure VMs on SLES for SAP applications to indicate that we're de-
emphasizing SAP reference architectures, utilizing NFS clusters
October 18, 2022: Clarify some considerations around using Azure Availability
Zones in SAP workload configurations with Azure Availability Zones
October 17, 2022: Change in HA for SAP HANA on Azure VMs on SLES and HA for
SAP HANA on Azure VMs on RHEL to add guidance for setting up parameter
AUTOMATED_REGISTER

September 29, 2022: Announcing HANA Large Instances being in sunset mode in
SAP workload on Azure virtual machine supported scenarios and What is SAP
HANA on Azure (Large Instances)?. Adding some statements around Azure
VMware and Microsoft Entra ID support status in SAP workload on Azure virtual
machine supported scenarios
September 27, 2022: Minor changes in HA for SAP ASCS/ERS with NFS simple
mount on SLES 15 for SAP Applications to adjust mount instructions
September 14, 2022 Release of updated SAP on Oracle guide with new and
updated content Azure Virtual Machines Oracle DBMS deployment for SAP
workload
September 8, 2022: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on SLES to add instructions for deploying /hana/shared (only) on NFS on
Azure Files
September 6, 2022: Add managed identity for pacemaker fence agent Set up
Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure on SLES and Setting up
Pacemaker on RHEL in Azure RHEL
August 22, 2022: Release of cost optimization scenario Deploy PAS and AAS with
SAP NetWeaver HA cluster on RHEL
August 09, 2022: Release of scenario HA for SAP ASCS/ERS with NFS simple mount
on SLES 15 for SAP Applications
July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in
Azure Virtual Machines Oracle DBMS deployment for SAP workload
June 29, 2022: Add recommendation and links to Pacemaker usage for Db2
versions 11.5.6 and higher in the documents IBM Db2 Azure Virtual Machines
DBMS deployment for SAP workload, High availability of IBM Db2 LUW on Azure
VMs on SUSE Linux Enterprise Server with Pacemaker, and High availability of IBM
Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
June 08, 2022: Change in HA for SAP NW on Azure VMs on SLES with ANF and HA
for SAP NW on Azure VMs on RHEL with ANF to adjust timeouts when using
NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker
configuration
June 02, 2022: Change in the SAP Deployment Guide to add a link to RHEL in-place
upgrade documentation
June 02, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows with
Azure NetApp Files(SMB), HA for SAP NW on Azure VMs on SLES with ANF and HA
for SAP NW on Azure VMs on RHEL with ANF to add sizing considerations
May 11, 2022: Change in Cluster an SAP ASCS/SCS instance on a Windows failover
cluster by using a cluster shared disk in Azure, Prepare the Azure infrastructure for
SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS and
SAP ASCS/SCS instance multi-SID high availability with Windows server failover
clustering and Azure shared disk to update instruction about the usage of Azure
shared disk for SAP deployment with PPG.
May 10, 2022: Change in HA for SAP HANA scale-up with ANF on RHEL, SAP HANA
scale-out HSR with Pacemaker on Azure VMs on RHEL, HA for SAP HANA Scale-up
with Azure NetApp Files on SLES, SAP HANA scale-out with standby node on Azure
VMs with ANF on SLES, SAP HANA scale-out HSR with Pacemaker on Azure VMs
on SLES and SAP HANA scale-out with standby node on Azure VMs with ANF on
RHEL to adjust parameters per SAP note 3024346
April 26, 2022: Changes in Setting up Pacemaker on SUSE Linux Enterprise Server in
Azure to add Azure Identity Python module to installation instructions for Azure
Fence Agent
March 30, 2022: Adding information that Red Hat Gluster Storage is being phased
out GlusterFS on Azure VMs on RHEL
March 30, 2022: Correcting DNN support for older releases of SQL Server in SQL
Server Azure Virtual Machines DBMS deployment for SAP NetWeaver
March 28, 2022: Formatting changes and reorganizing ILB configuration
instructions in: HA for SAP HANA on Azure VMs on SLES, HA for SAP HANA Scale-
up with Azure NetApp Files on SLES, HA for SAP HANA on Azure VMs on RHEL, HA
for SAP HANA scale-up with ANF on RHEL, HA for SAP NW on SLES with NFS on
Azure Files, HA for SAP NW on Azure VMs on SLES with ANF, HA for SAP NW on
Azure VMs on SLES for SAP applications, HA for NFS on Azure VMs on SLES, HA for
SAP NNW on Azure VMs on SLES multi-SID guide, HA for SAP NW on RHEL with
NFS on Azure Files, HA for SAP NW on Azure VMs on RHEL with ANF, HA for SAP
NW on Azure VMs on RHEL for SAP applications and HA for SAP NW on Azure
VMs on RHEL multi-SID guide
March 15, 2022: Corrected rsize and wsize mount option settings for ANF in IBM
Db2 Azure Virtual Machines DBMS deployment for SAP workload
March 1, 2022: Corrected note about database snapshots with multiple database
containers in SAP HANA Large Instances high availability and disaster recovery on
Azure
February 28, 2022: Added E(d)sv5 VM storage configurations to SAP HANA Azure
virtual machine storage configurations
February 13, 2022: Corrected broken links to HANA hardware directory in the
following documents: SAP Business One on Azure Virtual Machines, Available SKUs
for HANA Large Instances, Certification of SAP HANA on Azure (Large Instances),
Installation of SAP HANA on Azure virtual machines, SAP workload planning and
deployment checklist, SAP HANA infrastructure configurations and operations on
Azure, SAP HANA on Azure Large Instance migration to Azure Virtual Machines,
Install and configure SAP HANA (Large Instances) on Azure, High availability of
SAP HANA scale-out system on Red Hat Enterprise Linux, High availability for SAP
HANA scale-out system with HSR on SUSE Linux Enterprise Server, High availability
of SAP HANA on Azure VMs on SUSE Linux Enterprise Server, Deploy a SAP HANA
scale-out system with standby node on Azure VMs by using Azure NetApp Files on
SUSE Linux Enterprise Server, SAP workload on Azure virtual machine supported
scenarios, What SAP software is supported for Azure deployments
February 13, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows
with Azure NetApp Files(SMB) to add instructions about adding the SAP
installation user as Administrators Privilege user to avoid SWPM permission
errors
February 09, 2022: Add more information around 4K sectors usage of Db2 11.5 in
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
February 08, 2022: Style changes in SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver
February 07, 2022: Adding new functionality ANF application volume groups for
HANA in documents NFS v4.1 volumes on Azure NetApp Files for SAP HANA and
Azure proximity placement groups for optimal network latency with SAP
applications
January 30, 2022: Adding context about SQL Server proportional fill and
expectations that SQL Server data files should be the same size and should have
the same free space in SQL Server Azure Virtual Machines DBMS deployment for
SAP NetWeaver
January 24, 2022: Change in HA for SAP NW on SLES with NFS on Azure Files, HA
for SAP NW on Azure VMs on SLES with ANF, HA for SAP NW on Azure VMs on
SLES for SAP applications, HA for NFS on Azure VMs on SLES, HA for SAP NNW on
Azure VMs on SLES multi-SID guide, HA for SAP NW on RHEL with NFS on Azure
Files, HA for SAP NW on Azure VMs on RHEL for SAP applications and HA for SAP
NW on Azure VMs on RHEL with ANF and HA for SAP NW on Azure VMs on RHEL
multi-SID guide to remove cidr_netmask from Pacemaker configuration to allow
the resource agent to determine the value automatically.
January 12, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows
with Azure NetApp Files(SMB) to remove obsolete information for the SAP kernel
that supports the scenario.
December 08, 2021: Change in SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver to clarify Azure Load Balancer settings.
December 08, 2021: Release of scenario HA of SAP HANA Scale-up with Azure
NetApp Files on SLES
December 07, 2021: Change in Setting up Pacemaker on RHEL in Azure to clarify
that the instructions are applicable for both RHEL 7 and RHEL 8
December 07, 2021: Change in HA for SAP NW on SLES with NFS on Azure Files,
HA for SAP NW on Azure VMs on SLES with ANF and HA for SAP NW on Azure
VMs on SLES for SAP applications to adjust the instructions for configuring SWAP
file.
December 02, 2021: Introduction of new fencing method in Setting up Pacemaker
on SUSE Linux Enterprise Server in Azure using Azure shared disk SBD device
December 01, 2021: Change in SAP ASCS/SCS instance with WSFC and file share,
HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)
and HA for SAP NetWeaver on Azure VMs on Windows with Azure Files(SMB) to
update the SAP kernel version, required to support clustering SAP on Windows
with file share
November 30, 2021: Added Using Windows DFS-N to support flexible SAPMNT
share creation for SMB-based file share
November 22, 2021: Change in HA for SAP NW on SLES with NFS on Azure Files
and HA for SAP NW on RHEL with NFS on Azure Files to clarify the guidelines for
J2EE SAP systems and share consolidations per storage account.
November 16, 2021: Release of high availability guides for SAP ASCS/ERS with NFS
on Azure files HA for SAP NW on SLES with NFS on Azure Files and HA for SAP NW
on RHEL with NFS on Azure Files
November 15, 2021: Introduction of new proximity placement architecture for
zonal deployments in Azure proximity placement groups for optimal network
latency with SAP applications
November 02, 2021: Changed Azure Storage types for SAP workload and SAP ASE
Azure Virtual Machines DBMS deployment for SAP workload to declare SAP ASE
support for NFS on Azure NetApp Files.
November 02, 2021: Changed SAP workload configurations with Azure Availability
Zones to move Singapore SouthEast to regions for active/active configurations
November 02, 2021: Change in High availability of SAP HANA on Azure VMs on
Red Hat Enterprise Linux to update instructions for HANA scale-up Active/Active
(Read Enabled) configuration.
October 26, 2021: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on RHEL to update resource names in HANA scale-out Active/Active (Read
Enabled) configuration
October 19, 2021: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on RHEL to add instructions for HANA scale-out Active/Active (Read Enabled)
configuration
October 11, 2021: Change in Cluster an SAP ASCS/SCS instance on a Windows
failover cluster by using a cluster shared disk in Azure, Prepare the Azure
infrastructure for SAP HA by using a Windows failover cluster and shared disk for
SAP ASCS/SCS and SAP ASCS/SCS instance multi-SID high availability with
Windows server failover clustering and Azure shared disk to add instructions about
zone redundant storage (ZRS) for Azure shared disk support
SAP certifications and configurations running on Microsoft Azure
Article • 02/10/2023

SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP
to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline Azure-supported configurations and the growing list of SAP certifications. This overview might deviate
here and there from the official SAP lists. How to get to the detailed data is documented in the article What
SAP software is supported for Azure deployments.

SAP HANA certifications


References:

SAP HANA certified IaaS platforms for SAP HANA support for native Azure VMs and HANA Large
Instances.

| SAP Product | Supported OS | Azure Offerings |
| --- | --- | --- |
| Business One on HANA | SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms |
| SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms |
| Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms |
| HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms |
| SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms |

SAP NetWeaver certifications


Microsoft Azure is certified for the following SAP products, with full support from Microsoft and SAP.
References:

1928533 - SAP Applications on Azure: Supported Products and Azure VM types for all SAP NetWeaver-
based applications, including SAP TREX, SAP LiveCache, and SAP Content Server, and all databases,
excluding SAP HANA.

| SAP Product | Guest OS | RDBMS | Virtual Machine Types |
| --- | --- | --- | --- |
| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types |
| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types |
| SAP BusinessObjects BI | Windows | N/A | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types |
| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types |

Other SAP Workload supported on Azure

| SAP Product | Guest OS | RDBMS | Virtual Machine Types |
| --- | --- | --- | --- |
| SAP Business One on SQL Server | Windows | SQL Server | All NetWeaver certified VM types; SAP Note #928839 |
| SAP BPC 10.01 MS SP08 | Windows and Linux | | All NetWeaver Certified VM types; SAP Note #2451795 |
| SAP Business Objects BI platform | Windows and Linux | | SAP Note #2145537 |
| SAP Data Services 4.2 | | | SAP Note #2288344 |
| SAP Hybris Commerce Platform | Windows | SQL Server, Oracle | All NetWeaver certified VM types; Hybris Documentation |
| SAP Hybris Commerce Platform | SLES 12 or more recent | SAP HANA | All NetWeaver certified VM types; Hybris Documentation |
| SAP Hybris Commerce Platform | RHEL 7 or more recent | SAP HANA | All NetWeaver certified VM types; Hybris Documentation (https://help.sap.com/viewer/a74589c3a81a4a95bf51d87258c0ab15/6.7.0.0/en-US/8c71300f866910149b40c88dfc0de431.html) |
| SAP (Hybris) Commerce Platform 1811 and later | Windows, SLES, or RHEL | SQL Azure DB | All NetWeaver certified VM types; Hybris Documentation |
Get started with SAP and Microsoft integration scenarios
Article • 04/10/2024

According to SAP, over 87% of total global commerce is generated by SAP customers,
and more SAP systems are running in the cloud each year. The SAP platform provides a
foundation for innovation for many companies and can handle various workloads
natively. Explore our integration section further to learn how you can combine the
Microsoft Azure ecosystem with your SAP workload to accelerate your business
outcomes. Among the scenarios are extensions with Power Platform ("keep the ABAP
core clean"), secured APIs with Azure API Management, automated business processes
with Logic Apps, enriched experiences with SAP Business Technology Platform, native
Microsoft integrations using ABAP Cloud, uniform data blending dashboards with the
Azure Data Platform and more.

For the latest news from the SAP and Azure world, follow the SAP on Microsoft
TechCommunity section and the relevant Azure tags on the SAP Community .

To learn more about the opportunities of extending SAP applications with Azure
services, see this Azure Friday episode:
https://www.youtube-nocookie.com/embed/72kbjv0GJAY

SAP and Microsoft have over thirty years of partnership, which is a foundation to
support common goals long-term, including a joint commitment by SAP and Microsoft
to simplify and streamline customers' journeys to the cloud. For more
information, see:

SAP Partners with Microsoft for First-in-Market Cloud Migration Offerings


SAP and Microsoft Expand Partnership and Integrate Microsoft Teams Across
Solutions
Come Explore the Future, showing how Microsoft and SAP are partnering to
meet the needs of every business.
Collaborating for Success: How SAP and Microsoft are working together to
accelerate customer innovation and transformation

Integration resources
Select an area for resources about how to integrate SAP and Azure in that space.

| Area | Description |
| --- | --- |
| Azure OpenAI service | Learn how to integrate your SAP workloads with Azure OpenAI service. |
| Microsoft Copilot | Learn how to integrate your SAP workloads with Microsoft Copilots. |
| SAP RISE managed workloads | Learn how to integrate your SAP RISE managed workloads with Azure services. |
| Microsoft Office | Learn about Office Add-ins in Excel, doing SAP Principal Propagation with Office 365, SAP Analytics Cloud and Data Warehouse Cloud integration, and more. |
| Microsoft Teams | Discover collaboration scenarios boosting your daily productivity by interacting with your SAP applications directly from Microsoft Teams. |
| Microsoft Power Platform | Learn about the available out-of-the-box SAP applications enabling your business users to achieve more with less. |
| SAP Fiori | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. |
| Microsoft Entra ID (formerly Azure Active Directory) | Ensure end-to-end SAP user authentication and authorization with Microsoft Entra ID. Single sign-on (SSO) and multifactor authentication (MFA) are the foundation for a secure and seamless user experience. |
| Azure Integration Services | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services, and more. |
| App Development in any language including ABAP and DevOps | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. |
| Azure Data Services | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud; which connector to choose; how to tune performance; how to efficiently troubleshoot; and more. |
| Threat Monitoring and Response Automation with Microsoft Security Services for SAP | Learn how to best secure your SAP workload with Microsoft Defender for Cloud, the SAP certified Microsoft Sentinel solution, and immutable vault for Azure Backup. Prevent incidents from happening, detect and respond to threats in real time. |
| SAP Business Technology Platform (BTP) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. |
Azure OpenAI service
For more information about integration with Azure OpenAI service, see the following
Azure documentation:

Microsoft AI SDK for SAP


ABAP SDK for Azure

Also see these SAP resources:

Empower SAP RISE enterprise users with Azure OpenAI in multicloud environment
Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core
SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News

Microsoft Copilot
For more information about integration with Microsoft 365 Copilot , see the following
Microsoft resources:

The synergy of market leaders: Exploring Microsoft and SAP’s game-changing


collaboration | blog

Also see these SAP resources:

The future of work is now: An update on generative AI at SAP SuccessFactors


SAP and Microsoft Collaborate on Joint Generative AI Offerings to Help Customers
Address the Talent Gap | SAP News

Microsoft Office
For more information about integration with Microsoft Office, see the following Azure
documentation:

Outbound E-Mail from SAP to Exchange Online


Enable SAP Principal Propagation for live OData feeds with Excel

Also see these SAP resources:

SAP Analysis for Microsoft Office Excel and PowerPoint


SAP Analytics Cloud, add-in for Microsoft Office
Access SAP Data Warehouse Cloud with Microsoft Excel

Microsoft Teams
For more information about integration with Microsoft Teams, see Native SAP apps on
the Teams marketplace . Also see the following SAP resources.

SAP SuccessFactors Learning


SAP Build Work Zone, advanced edition
Embedding SAP Cloud Portal and SAP Build Work Zone into Microsoft Teams
Embed self-hosted SAP Fiori Launchpad into Microsoft Teams
Simplify Supplier forecasting with SAP Integrated Business Planning, Ariba and
Microsoft Teams

Microsoft Power Platform


For more information about integration with Microsoft Power Platform, see the
following Power Automate resources:

Overview of SAP integration


Understand the prebuilt solutions available for integrating SAP with Power Platform
Finance and operations templates for SAP process mining with Power Automate
Process Advisor
Hyperautomation special video series for SAP based integration and automation
with Power Automate
RPA Playbook for SAP GUI Automation with Power Automate

Also see the following SAP resources:

Snoozing SAP systems with Power Apps


Use SAP Business Rules Service (part of SAP Workflow) to expose SAP business
logic to Power Apps

SAP Fiori
For more information about integration with SAP Fiori, see the following resources:

Monitor SAP Fiori performance with Azure Application Insights


Introduction to the Application Gateway WAF Triage Workbook .

Also see the following SAP resources:

Web Application Firewall Setup for Internet facing SAP Fiori Apps

Microsoft Entra ID (formerly Azure AD)


For more information about integrations with Microsoft Entra ID and Microsoft Entra ID
Governance, see the following Microsoft Entra documentation:

Manage access to your SAP applications


Secure access with SAP Cloud Identity Services and Microsoft Entra ID
SAP workload security - Microsoft Azure Well-Architected Framework
Provision users from SAP SuccessFactors to Active Directory
Provision users from SAP SuccessFactors to Microsoft Entra ID
Write-back users from Microsoft Entra ID to SAP SuccessFactors
Provision users to SAP Cloud Identity Services - Identity Authentication

For how to configure single sign-on, see the following Microsoft Entra documentation
and tutorials:

SAP Cloud Identity Services - Identity Authentication


SAP SuccessFactors
SAP Analytics Cloud
SAP Fiori
SAP Qualtrics
SAP Ariba
SAP Concur Travel and Expense
SAP Business Technology Platform
SAP Business ByDesign
SAP HANA
SAP Cloud for Customer

Also see the following SAP resources:

Azure Application Gateway Setup for Public and Internal SAP URLs
SAPGUI using Kerberos and Microsoft Entra Domain Services
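
Several of the scenarios above, such as SAP Principal Propagation, build on the OAuth2
on-behalf-of flow in Microsoft Entra ID. The following is a simplified sketch of only the
token-exchange step using the msal package; all IDs, the secret, and the scope are
placeholders, and the SAP-side trust configuration is out of scope here.

```python
# Hedged sketch: OAuth2 on-behalf-of token exchange with MSAL, the Microsoft
# Entra ID side of an SAP Principal Propagation setup.
import msal

TENANT_ID = "<tenant-id>"              # placeholders
CLIENT_ID = "<api-app-client-id>"
CLIENT_SECRET = "<api-app-secret>"
SAP_SCOPE = ["api://<sap-facing-app-id>/.default"]  # assumption: SAP-facing app registration

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

def exchange_token(incoming_user_token: str) -> str:
    # Trade the incoming user's token for one targeting the SAP-facing app.
    result = app.acquire_token_on_behalf_of(incoming_user_token, scopes=SAP_SCOPE)
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token exchange failed"))
    return result["access_token"]
```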

Azure Integration Services


For more information about using SAP with Azure Integration services, see the following
Microsoft resources and Azure documentation:

New SAP events on Azure Event Grid with SAP Event Mesh
Expose SAP Process Orchestration on Azure securely
Connect to SAP from workflows in Azure Logic Apps
Import SAP OData metadata as an API into Azure API Management
Apply SAP Principal Propagation to your Azure hosted APIs
Using Logic Apps (Standard) to connect with SAP BAPIs and RFC
Also see the following SAP resources:

Event-driven architectures for SAP ERP with Azure


Achieve high availability for SAP Cloud Integration (part of SAP Integration Suite)
on Azure
Automate SAP invoice processing using Azure Logic Apps and Azure AI services
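
To illustrate the event interoperability mentioned above, here's a hedged sketch of
publishing a CloudEvent to a custom Azure Event Grid topic with the azure-eventgrid
SDK, roughly the Azure-side counterpart of an SAP Event Mesh exchange. The endpoint,
key, and event fields are placeholders, and the event type is only modeled on SAP's
naming convention.

```python
# Hedged sketch: publish a CloudEvent to an Azure Event Grid custom topic.
from azure.core.credentials import AzureKeyCredential
from azure.core.messaging import CloudEvent
from azure.eventgrid import EventGridPublisherClient

TOPIC_ENDPOINT = "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"  # placeholder
TOPIC_KEY = "<topic-access-key>"                                                   # placeholder

client = EventGridPublisherClient(TOPIC_ENDPOINT, AzureKeyCredential(TOPIC_KEY))

# Example business event, loosely modeled on an SAP S/4HANA event type.
event = CloudEvent(
    source="/sap/s4hana/<system-id>",  # placeholder source
    type="sap.s4.beh.businesspartner.v1.BusinessPartner.Created.v1",  # example naming
    data={"BusinessPartner": "1000042"},
)
client.send(event)
```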

App development in any language including ABAP and DevOps

For more information about integrating SAP with Microsoft services natively, see the
following resources:

the ABAP SDK for Azure


Use SAP's Cloud SDK with Azure app development services
Use community-driven OData SDKs with Azure Functions

Also see the following SAP resources:

SAP BTP ABAP Environment (also known as Steampunk) integration with Microsoft
services
SAP S/4HANA Cloud, private edition – ABAP Environment (also known as
Embedded Steampunk) integration with Microsoft services
dotNET speaks OData too, how to implement Azure App Service with SAP Gateway
Apply cloud native deployment practice blue-green to SAP BTP apps with Azure
DevOps

Azure Data Services


Learn how to choose the best SAP connector for data integration and how to tune
performance, including troubleshooting tips, in our Cloud Adoption Framework for SAP.
Get started by identifying your SAP data sources here.
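
As a reference point for the connector guidance above, the OData protocol used by the
Synapse and Data Factory OData connectors can be exercised with plain Python. This is
a minimal sketch; the gateway host, service name, entity set, and credentials are
hypothetical placeholders.

```python
# Hedged sketch: read SAP data over an OData v2 service exposed by SAP Gateway.
import requests

BASE_URL = "https://<sap-gateway-host>:<port>/sap/opu/odata/sap/<SERVICE_NAME>"  # placeholder
ENTITY_SET = "SalesOrderSet"  # hypothetical entity set

response = requests.get(
    f"{BASE_URL}/{ENTITY_SET}",
    params={"$format": "json", "$top": "100"},  # standard OData query options
    auth=("<user>", "<password>"),              # placeholder basic auth
)
response.raise_for_status()

# OData v2 responses wrap records in a "d"/"results" envelope.
for record in response.json()["d"]["results"]:
    print(record)
```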

Integrate with Azure OpenAI Service from SAP ABAP via the Microsoft SDK for AI .

For more information about integration with Azure Data Services, see the following
Microsoft and Azure resources:

SAP knowledge center for Azure Data Factory and Synapse


Track end-to-end lineage of your SAP data with Microsoft Purview
Replicating SAP data using the CDC connector
SAP CDC Connector and SLT - Blog series - Part 1
Replicating SAP data using the OData connector with Synapse Pipelines
Use SAP HANA in Power BI Desktop
DirectQuery and SAP HANA
Use the SAP BW Connector in Power BI Desktop
Enable SAP Principal Propagation for live OData feeds with Power Query
SAP HANA Connector for Power Query

Also see the following SAP resources:

Integrate SAP Data Warehouse Cloud with Power BI and Azure Synapse Analytics
Extend SAP Integrated Business Planning forecasting algorithms with Azure
Machine Learning

Microsoft Security for SAP


Protect your data, apps, and infrastructure against rapidly evolving cyber threats with
cloud security services from Microsoft. Capabilities backed by artificial intelligence (AI)
and machine learning (ML) are required to keep up with the pace.

Use Microsoft Defender for Cloud to secure your cloud-infrastructure surrounding the
SAP system including automated responses.

Complementing that, use the SAP certified solution Microsoft Sentinel to protect your
SAP system and SAP Business Technology Platform (BTP) instance from within using
signals from the SAP Audit Log among others.

Learn more about identity-focused integration capabilities that power the analysis in
Defender and Sentinel via the Microsoft Entra ID section.

Leverage the immutable vault for Azure Backup to protect your SAP data from
ransomware attacks.

See the Microsoft Security Copilot working with an SAP Incident in action here .

Microsoft Sentinel for SAP


For more information about SAP certified threat monitoring with Microsoft Sentinel
for SAP, see the following Microsoft resources:

Microsoft Sentinel incident response playbooks for SAP


SAP security content reference
Deploy the Microsoft Sentinel solution for SAP
Deploy Microsoft Sentinel Solution for SAP BTP
Microsoft Sentinel SAP solution data reference
Deploying Microsoft Sentinel SAP agent into an AKS/Kubernetes cluster

Also see the following SAP resources:

How to use Microsoft Sentinel's SOAR capabilities with SAP


Deploy SAP user blocking based on suspicious activity on the SAP backend
Automatically trigger re-activation of the SAP audit log on malicious deactivation
Automatically remediate Sentinel SAP Collector Agent attack

See the video below to experience the SAP security orchestration, automation, and
response workflow with Sentinel in action:
https://www.youtube-nocookie.com/embed/b-AZnR-nQpg

Microsoft Defender for Cloud

The Defender product family consists of multiple products tailored to provide "cloud
security posture management" (CSPM) and "cloud workload protection" (CWPP) for the
various workload types. The excerpt below serves as an entry point to start securing
your SAP system.

Defender for Servers (SAP hosts)


Protect your SAP hosts with Defender including OS specific Endpoint protection
with Microsoft Defender for Endpoint (MDE)
Microsoft Defender for Endpoint on Linux
Microsoft Defender for Endpoint on Windows
Enable Defender for Servers
Defender for Storage (SAP SMB file shares on Azure)
Protect your SAP SMB file shares with Defender
Enable Defender for Storage
Defender for APIs (SAP Gateway, SAP Business Technology Platform, SAP SaaS)
Protect your OpenAPI APIs with Defender for APIs
Enable the Defender for APIs

See SAP's recommendation to use antivirus software for SAP hosts and systems on both
Linux and Windows based platforms here. Be aware that the threat landscape has
evolved from file-based attacks to file-less attacks. Therefore, the protection approach
has to evolve beyond pure antivirus capabilities too.

For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft
Defender for Server for SAP applications regarding Next-generation protection
(AntiVirus) and Endpoint Detection and Response (EDR) see the following Microsoft
resources:
SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity
SAP Applications and Microsoft Defender for Windows Server | Microsoft
TechCommunity
Enable the Microsoft Defender for Endpoint integration
Common mistakes to avoid when defining exclusions

Also see the following SAP resources:

3356389 - Antivirus or other security software affecting SAP operations


2808515 - Installing security software on SAP servers running on Linux
1730997 - Unrecommended versions of antivirus software

Note

It is not recommended to exclude files, paths, or processes from EDR because doing so
creates blind spots for Defender. If exclusions are required nevertheless, open a
support case with Microsoft Support via the Defender365 Portal, specifying the
executables and/or paths to exclude. Follow the same process for tuning of real-time
scans.

Note

Certification for the SAP Virus Scan Interface (NW-VSI) doesn't apply to MDE,
because it operates outside of the SAP system. It complements Microsoft Sentinel
for SAP, which interacts with the SAP system directly. See more details and the SAP
certification note for Sentinel below.

Tip

MDE was formerly called Microsoft Defender Advanced Threat Protection (ATP).
Older articles or SAP notes still refer to that name.

Tip

Microsoft Defender for Server includes Endpoint detection and response (EDR)
features that are provided by Microsoft Defender for Endpoint Plan 2.

Immutable vault for Azure Backup for SAP


For more information about immutable vault for Azure Backup, see the following Azure
documentation:

Backup and restore plan to protect against ransomware


Back up SAP HANA System Replication databases on Azure VMs

SAP BTP
For more information about Azure integration with SAP Business Technology Platform
(BTP), see the following SAP resources:

SAP Discovery Center for Azure Services and Missions


Getting Started with SAP Private Link Service for Azure
SAP Private Link service use cases for SAP Cloud Integration and SAP Launchpad
Service
Automate SAP Cloud Integration flow recovery
Monitor multiple SAP Cloud Integration tenants with Azure Monitor
Route Multi-Region Traffic to SAP BTP Services Intelligently with Azure Traffic
Manager
Distributed Resiliency of SAP CAP applications using SAP HANA Cloud with Azure
Traffic Manager
Federate your data from Azure Data Explorer to SAP Data Warehouse Cloud
Integrate globally available SAP BTP apps with Azure Cosmos DB via OData
Explore your Azure data sources with SAP Data Warehouse Cloud
Building Applications on SAP BTP with Microsoft Services | OpenSAP course

Customer resources
These resources include Customer Engagement Initiatives (CEI), public BETAs, and
Customer Influence programs:

SAP S/4HANA Cloud - MS Teams Integration - Jul 2024 | SAP Customer Influence
SAP Event Mesh integration with Microsoft Azure Event Grid - Aug 2022 | SAP
Customer Influence
SAP Private Link Service GA announcement after public Beta - Jun 2022 | SAP Blogs
SAP Private Link service CEI - Jul 2022 | SAP Customer Influence

Free developer accounts


You can use the following free developer accounts to explore integration scenarios for
Azure and SAP.
Free trial of Azure
Free trial of Azure for students
Free account on SAP BTP trial. Select Singapore for Azure.
GitHub account, which you can use to host your projects.
Microsoft 365 developer program account

Next steps
Discover native SAP applications available on the Microsoft Teams marketplace
Browse the out-of-the-box SAP applications available on Microsoft Power Platform
Understand SAP data integration with Azure - Cloud Adoption Framework
Identify your SAP data sources - Cloud Adoption Framework
Explore joint reference architectures on the SAP Discovery Center
Secure your SAP NetWeaver email needs with Exchange Online
Migrate your legacy SAP middleware to Azure
SAP workload on Azure virtual machine supported scenarios
Article • 02/10/2023

Designing SAP NetWeaver, Business One, Hybris, or S/4HANA system architectures in
Azure opens many different opportunities for various architectures and tools to use to
get to a scalable, efficient, and highly available deployment. Depending on the
operating system or DBMS used, there are restrictions, though. Also, not all scenarios
that are supported on-premises are supported in the same way in Azure. This document
leads you through the supported non-high-availability configurations and high-
availability configurations and architectures that use Azure VMs exclusively.

Note

HANA Large Instance service is in sunset mode and doesn't accept new customers
anymore. Providing units for existing HANA Large Instance customers is still
possible. For alternatives, check the offers of HANA certified Azure VMs in the
HANA Hardware Directory. For scenarios that were and still are supported for
existing HANA Large Instance customers with HANA Large Instances, check the
article Supported scenarios for HANA Large Instances.

General platform restrictions

Azure has various platforms besides the so-called native Azure VMs that are offered as
first-party services. HANA Large Instances, which is in sunset mode, is one of those
platforms. Azure VMware Services is another of these first-party services. Azure VMware
Services in general isn't supported by SAP for hosting SAP workload. Refer to SAP
support note #2138865 - SAP Applications on VMware Cloud: Supported Products and VM
configurations for more details of VMware support on different platforms.

Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with Azure Active Directory Domain Services (traditional AD managed by Microsoft), and Azure Active Directory. SAP components hosted on Windows OS often rely on the usage of Windows Active Directory. In this case, you can use the traditional Active Directory hosted on-premises by you, or Azure Active Directory Domain Services (still in testing). But these SAP components can't function with the native Azure Active Directory. The reason is that there are still larger gaps in functionality between Active Directory in its on-premises form or its SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. This dependency is why Azure Active Directory accounts aren't supported for applications based on SAP NetWeaver and S/4HANA on Windows OS. Traditional Active Directory accounts need to be used in such scenarios.

AD service                               Supported for applications based on SAP NetWeaver and S/4HANA on Windows OS

On-premises Windows Active Directory     Supported
Azure Active Directory Domain Services   Supported
Azure Active Directory                   Not supported

The above doesn't affect the usage of Azure Active Directory accounts for single sign-on (SSO) scenarios with SAP applications.

2-Tier configuration
An SAP 2-Tier configuration is considered to be built out of a combined layer of the SAP DBMS and application layer that runs on the same server or VM unit. The second tier is considered to be the user interface layer. In a 2-Tier configuration, the DBMS and the SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful not to oversubscribe the resources of the VM. Such a configuration doesn't provide any high availability beyond the Azure service-level agreements of the different Azure components involved.

A graphical representation of such a configuration can look like:


Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, maxDB, and SAP ASE for production and non-production cases. For SAP HANA as DBMS, SAP supports such a scenario as stated in SAP note #1953429 . So far, none of the Linux distros provided sufficient HA documentation to set up and operate a Pacemaker cluster in such a configuration. As a result, this type of configuration is supported on Azure only for non-production cases that don't require a high availability failover cluster.

This type of configuration is supported for all OS/DBMS combinations supported on Azure. However, it's mandatory that you configure the DBMS and the SAP components in a way that they don't compete for memory and CPU resources and thereby exceed the physically available resources. You do so by restricting the memory the DBMS is allowed to allocate. You also need to limit the SAP Extended Memory on application instances. And you need to monitor the overall CPU consumption of the VM to make sure that the components aren't maxing out the CPU resources.
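As one illustration of such memory capping, the following is a minimal sketch for SQL Server, assuming a hypothetical 2-tier VM with 128 GiB of RAM; the 64-GiB cap is purely illustrative and must be replaced by values from your own sizing.

```sql
-- Hedged sketch, not a sizing recommendation: on a hypothetical 128-GiB
-- 2-tier VM, cap SQL Server at 64 GiB so that the SAP application layer
-- and the OS keep the remainder of the memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;
```

Comparable limits exist for the other supported DBMS, for example the global_allocation_limit parameter for SAP HANA.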

7 Note

For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations, as described later in this document.

3-Tier configuration
In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems, and to be more flexible on the resources of the SAP application layer. In the simplest setup, there's no high availability beyond the Azure service-level agreements of the different Azure components involved.

The graphical representation looks like:


This type of configuration is supported on Windows, Red Hat, SUSE, and Oracle Linux for
the DBMS systems of SQL Server, Oracle, Db2, SAP HANA, maxDB, and SAP ASE for
production and non-production cases. For simplification, we didn't distinguish between
SAP Central Services and SAP dialog instances in the SAP application layer. In this simple
3-Tier configuration, there would be no high availability protection for SAP Central
Services.

7 Note

For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations, as described later in this document.

Multiple DBMS instances per VM

In this configuration type, you host multiple DBMS instances per Azure VM. The motivation can be to have fewer operating systems to maintain, and thereby reduced costs. Other motivations are more flexibility and more efficiency by sharing the resources of a larger VM or HANA Large Instance unit among multiple DBMS instances. So far, these configurations showed up mostly for non-production systems.

A configuration like that could look like:


This type of DBMS deployment is supported for:

SQL Server on Windows


IBM Db2. Find details in the article Multiple instances (Linux, UNIX)
Oracle. For details, see SAP support note #1778431 and related SAP notes
SAP HANA. Multiple instances on one VM, a deployment method SAP calls MCOS, are supported. For details, see the SAP article Multiple SAP HANA Systems on One Host (MCOS)

When running multiple database instances on one host, you need to make sure that the different instances aren't competing for resources and thereby exceeding the physical resource limits of the VM. This is especially true for memory, where you need to cap the memory any one of the instances sharing the VM can allocate. That also might be true for the CPU resources the different database instances can consume. All the database systems mentioned have configurations that allow limiting memory allocation and CPU resources on an instance level. For such a configuration to be supported on Azure VMs, it's expected that the disks or volumes used for the data and log/redo log files of the databases managed by the different instances are separate. In other words, data or log/redo log files of databases managed by different DBMS instances aren't supposed to share the same disks or volumes.
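As an illustration for the SAP HANA MCOS case, the following is a minimal sketch of per-instance memory caps; the SIDs QS1 and QS2, the 512-GiB VM size, and the limit values are hypothetical.

```ini
# Hedged sketch for a hypothetical MCOS setup: two HANA instances (SIDs
# QS1 and QS2) share one 512-GiB VM. Cap each instance in its own
# global.ini so the combined allocation stays below physical RAM.
# Values are in MB and purely illustrative.

# /hana/shared/QS1/global/hdb/custom/config/global.ini -- ~240 GiB
[memorymanager]
global_allocation_limit = 245760

# /hana/shared/QS2/global/hdb/custom/config/global.ini -- ~200 GiB
[memorymanager]
global_allocation_limit = 204800
```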

7 Note

For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations, as described later in this document. VMs with multiple DBMS instances aren't supported with the high availability configurations described later in this document.

Multiple SAP Dialog instances in one VM

In many cases, multiple dialog instances got deployed on bare metal servers or even in VMs running in private clouds. The reason for such configurations was to tailor certain SAP dialog instances to certain workloads, business functionality, or workload types. The reason for not isolating those instances into separate VMs was the effort of operating system maintenance and operations. Or, in numerous cases, the costs, where the hoster or operator of the VM asks for a monthly fee per VM operated and administrated. In Azure, a scenario of hosting multiple SAP dialog instances within a single VM is supported for production and non-production purposes on the operating systems of Windows, Red Hat, SUSE, and Oracle Linux. The SAP kernel parameter PHYS_MEMSIZE, available on Windows and modern Linux kernels, should be set if multiple SAP application server instances are running on a single VM. It's also advisable to limit the expansion of SAP Extended Memory on operating systems, like Windows, where automatic growth of the SAP Extended Memory is implemented. This can be done with the SAP profile parameter em/max_size_MB.
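A minimal sketch of such an instance profile excerpt follows; the VM size and both parameter values are hypothetical and must be derived from your own sizing.

```ini
# Hedged sketch of an instance profile excerpt for one of several dialog
# instances sharing a hypothetical 256-GiB VM; values are illustrative.
# Memory (in MB) this instance should treat as physically available:
PHYS_MEMSIZE = 98304
# Cap (in MB) for the automatic growth of SAP Extended Memory:
em/max_size_MB = 65536
```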

A 3-Tier configuration where multiple SAP dialog instances run within Azure VMs can look like:
For simplification, we didn't distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services. For production systems, it's not recommended to leave SAP Central Services unprotected. For specifics on so-called multi-SID configurations around SAP Central Services instances and high availability of such multi-SID configurations, see later sections of this document.

High availability protection for the SAP DBMS layer

As you look to deploy SAP production systems, you need to consider hot-standby types of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before you get the full performance and scalability back, Azure service healing isn't an ideal measure for high availability.

In general, Microsoft supports only high availability configurations and software packages that are described in the SAP workload scenarios. You can read the same statement in SAP note #1928533 . Microsoft won't provide support for other third-party high availability software frameworks that aren't documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration, and needs to be engaged by you as a customer in the support process. Exceptions are mentioned in this article.

In general, Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instance units.

For Azure VMs, the following high availability configurations are supported on the DBMS level:
SAP HANA System Replication based on Linux Pacemaker on SUSE and Red Hat.
See the detailed articles:
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SAP HANA scale-out n+m configurations using Azure NetApp Files on SUSE and
Red Hat. Details are listed in these articles:
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on SUSE Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on Red Hat Enterprise Linux
SQL Server failover cluster based on Windows Scale-Out File Services. However, the recommendation for production systems is to use SQL Server Always On instead of clustering. SQL Server Always On provides better availability using separate storage. Details are described in this article:
Configure a SQL Server failover cluster instance on Azure virtual machines
SQL Server Always On is supported with the Windows operating system for SQL
Server on Azure. This configuration is the default recommendation for production
SQL Server instances on Azure. Details are described in these articles:
Introducing SQL Server Always On availability groups on Azure virtual machines.
Configure an Always On availability group on Azure virtual machines in different
regions.
Configure a load balancer for an Always On availability group in Azure.
Oracle Data Guard for Windows and Oracle Linux. Details for Oracle Linux can be
found in this article:
Implement Oracle Data Guard on an Azure Linux virtual machine
IBM Db2 HADR on SUSE and RHEL. Detailed documentation for SUSE and RHEL using Pacemaker is provided here:
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
with Pacemaker
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux
Server
SAP ASE and SAP maxDB configuration as detailed in these documents:
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache, and Content Server deployment on Azure VMs
HANA Large Instances high availability scenarios are detailed in:
Supported scenarios for HANA Large Instances- HSR with fencing for high
availability
Supported scenarios for HANA Large Instances - Host auto failover (1+1)

) Important
For none of the scenarios described above do we support configurations of multiple DBMS instances in one VM. This means that in each case, only one database instance can be deployed per VM and protected with the described high availability methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is NOT supported at this point in time. Also, Oracle Data Guard is supported for single-instance-per-VM deployment cases only.

Various database systems allow hosting multiple databases under one DBMS instance. As in the case of SAP HANA, multiple databases can be hosted in multiple database containers (MDC). For cases where these multi-database configurations are working within one failover cluster resource, these configurations are supported. Configurations that aren't supported are cases where multiple cluster resources would be required. An example would be configurations where you define multiple SQL Server availability groups under one SQL Server instance.

Dependent on the DBMS and/or operating system, components like Azure Load Balancer might or might not be required as part of the solution architecture.

Specifically for maxDB, the storage configuration needs to be different. With maxDB, the data and log files need to be located on shared storage for high availability configurations. Only for maxDB is shared storage supported for high availability. For all other DBMS, separate storage stacks per node are the only supported disk configurations.
Other high availability frameworks are known to exist and to run on Microsoft Azure as well. However, Microsoft didn't test those frameworks. If you want to build your high availability configuration with those frameworks, you need to work with the provider of that software to:

Develop a deployment architecture
Deploy the architecture
Support the architecture

) Important

Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These soft appliances can be used to create NFS shares as well that theoretically could be used in SAP HANA scale-out deployments where a standby node is required. Due to various reasons, none of these storage soft appliances is supported for any of the DBMS deployments by Microsoft and SAP on Azure. Deployments of DBMS on SMB shares aren't supported at all at this point in time. Deployments of DBMS on NFS shares are limited to NFS 4.1 shares on Azure NetApp Files .

High availability for SAP Central Services

SAP Central Services is a second single point of failure of your SAP configuration. As a result, you need to protect these Central Services processes as well. The offerings supported and documented for SAP workload are:

Windows Failover Cluster Server using Windows Scale-out File Services for sapmnt
and global transport directory. Details are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file
share in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows
failover cluster and file share for SAP ASCS/SCS instances
Windows Failover Cluster Server using SMB share based on Azure NetApp Files
for sapmnt and global transport directory. Details are listed in the article:
High availability for SAP NetWeaver on Azure VMs on Windows with Azure
NetApp Files(SMB) for SAP applications
Windows Failover Cluster Server based on SIOS Datakeeper . Though documented by Microsoft, you need a support relationship with SIOS so that you can engage with SIOS support when using this solution. Details are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a
cluster shared disk in Azure
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for SAP ASCS/SCS
Pacemaker on the SUSE operating system, creating a highly available NFS share using two SUSE VMs and DRBD for file replication. Details are documented in the article
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server for SAP applications
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
Pacemaker on the SUSE operating system using NFS shares provided by Azure NetApp Files . Details are documented in
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP applications
Pacemaker on the Red Hat operating system with an NFS share hosted on a GlusterFS cluster. Details can be found in the articles
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
Pacemaker on Red Hat operating system with NFS share hosted on Azure NetApp
Files . Details are described in the article
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications

Of the listed solutions, you need a support relationship with SIOS to support the
Datakeeper product and to engage with SIOS directly if problems are encountered.

Depending on the way you licensed Windows, Red Hat, and/or SUSE OS, you could also be required to have a support contract with your OS provider to get full support of the listed high availability configurations.

The configuration can also be displayed like:

On the right-hand side of the graphic, the highly available SAP Central Services is shown. Besides having the SAP Central Services protected with a failover cluster framework that can fail over in failure scenarios, there's a necessity for a highly available NFS or SMB share, or a Windows shared disk, to make sure the sapmnt and global transport directory are available independent of the existence of a single VM. Additionally, some of the solutions, like Windows Failover Cluster Server and Pacemaker, are going to require an Azure load balancer to direct or redirect traffic to a healthy node.

In the list shown, there's no mention of the Oracle Linux operating system. Oracle Linux doesn't support Pacemaker as a cluster framework. If you want to deploy your SAP system on Oracle Linux and you need a high availability framework for Oracle Linux, you need to work with third-party suppliers. One of the suppliers is SIOS, with their Protection Suite for Linux that is supported by SAP on Azure. For more information, read SAP note #1662610 - Support details for SIOS Protection Suite for Linux.

Supported storage with the SAP Central Services scenarios listed above

Only a subset of Azure storage types provides highly available NFS or SMB shares that qualify for usage in our SAP Central Services cluster scenarios. The list of supported storage types:

Windows Failover Cluster Server with Windows Scale-Out File Server can be deployed on all native Azure storage types, except Azure NetApp Files. However, the recommendation is to use Premium Storage due to superior service-level agreements in throughput and IOPS.
Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on Azure NetApp Files. SMB shares hosted on Azure Premium Files are supported for this scenario as well. Azure Standard Files isn't supported.
Windows Failover Cluster Server with a Windows shared disk based on SIOS Datakeeper can be deployed on all native Azure storage types, except Azure NetApp Files. However, the recommendation is to use Premium Storage due to superior service-level agreements in throughput and IOPS.
SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported.
SUSE or Red Hat Pacemaker using NFS shares on Azure Premium Files, using LRS or ZRS, is supported. Azure Standard Files isn't supported.
SUSE Pacemaker using a DRBD configuration between two VMs is supported using native Azure storage types, except Azure NetApp Files. However, we recommend using one of the first-party services with Azure Premium Files or Azure NetApp Files.
Red Hat Pacemaker using GlusterFS for providing the NFS share is supported using native Azure storage types, except Azure NetApp Files. However, we recommend using one of the first-party services with Azure Premium Files or Azure NetApp Files.

) Important

Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These storage soft appliances can be used to create NFS or SMB shares that theoretically could be used in the failover clustered SAP Central Services as well. These solutions aren't directly supported for SAP workload by Microsoft. If you decide to use such a solution to create your NFS or SMB share, support for the SAP Central Services configuration needs to be provided by the third party owning the software in the storage soft appliance.

Multi-SID SAP Central Services failover clusters

To reduce the number of VMs that are needed in large SAP landscapes, SAP allows running SAP Central Services instances of multiple different SAP systems in one failover cluster configuration. Imagine cases where you have 30 or more NetWeaver or S/4HANA production systems. Without multi-SID clustering, these configurations would require 60 or more VMs in 30 or more Windows or Pacemaker failover cluster configurations. Deploying multiple SAP Central Services instances across two nodes in a failover cluster configuration can reduce the number of VMs significantly. However, deploying multiple SAP Central Services instances on a single two-node cluster configuration also has some disadvantages. Issues around a single VM in the cluster configuration apply to multiple SAP systems. Maintenance on the guest OS running in the cluster configuration requires more coordination, since multiple production SAP systems are affected. Tools like SAP LaMa don't support multi-SID clustering in their system cloning process.

On Azure, a multi-SID cluster configuration is supported for the Windows operating system with ENSA1 and ENSA2. We recommend not combining the older Enqueue Replication Server architecture (ENSA1) with the newer architecture (ENSA2) on one multi-SID cluster. Details about such an architecture are documented in the articles

SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover
Clustering and shared disk on Azure
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover
Clustering and file share on Azure

For SUSE, a multi-SID cluster based on Pacemaker is supported as well. So far, the configuration is supported for:

A maximum of five SAP ASCS/SCS instances
The old Enqueue Replication Server architecture (ENSA1)
Two-node Pacemaker cluster configurations

The configuration is documented in High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications multi-SID guide

A multi-SID cluster with Enqueue Replication Server schematically looks like:


SAP HANA scale-out scenarios
SAP HANA scale-out scenarios are supported for a subset of the HANA certified Azure
VMs as listed in the SAP HANA hardware directory . All the VMs marked with 'Yes' in
the column 'Clustering' can be used for either OLAP or S/4HANA scale-out.
Configurations without standby are supported with the Azure Storage types of:

Azure Premium Storage v1, including Azure Write accelerator for the /hana/log
volume
Azure Premium Storage v2
Ultra disk
Azure NetApp Files

SAP HANA scale-out configurations for OLAP or S/4HANA with standby node(s) are exclusively supported with NFS shares hosted on Azure NetApp Files.

For further information on exact storage configurations with or without standby node,
check the articles:

SAP HANA Azure virtual machine storage configurations


Deploy a SAP HANA scale-out system with standby node on Azure VMs by using
Azure NetApp Files on SUSE Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using
Azure NetApp Files on Red Hat Enterprise Linux
SAP support note #2080991
Disaster Recovery Scenario
There's a variety of disaster recovery scenarios that are supported. We define disaster recovery architectures as architectures that compensate for a complete Azure region going off the grid. This means we need the disaster recovery target to be a different Azure region in which to run your SAP landscape. We separate methods and configurations into the DBMS layer and the non-DBMS layer.

DBMS layer
For the DBMS layer, configurations using the DBMS native replication mechanisms, like Always On, Oracle Data Guard, Db2 HADR, SAP ASE Always-On, or HANA System Replication, are supported. It's mandatory that the replication stream in such cases is asynchronous, instead of synchronous as in typical high availability scenarios that are deployed within a single Azure region. A typical example of such a supported DBMS disaster recovery configuration is described in the article SAP HANA availability across Azure regions. The second graphic in that section describes a scenario with HANA as an example. The main databases supported for SAP applications can all be deployed in such a scenario.
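As one example of setting up such an asynchronous replication stream, the following is a hedged sketch for HANA System Replication; the host name, instance number, and site name are placeholders.

```bash
# Hedged sketch: register a HANA instance in the DR region as an
# asynchronous system-replication secondary. Run as <sid>adm on the
# (stopped) secondary; all names and the instance number are placeholders.
hdbnsutil -sr_register \
  --remoteHost=hana-prod-vm \
  --remoteInstance=00 \
  --replicationMode=async \
  --operationMode=logreplay \
  --name=SITE_DR
```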

It's supported to use a smaller VM as the target instance in the disaster recovery region, since that VM doesn't experience the full workload traffic. Doing so, you need to keep the following considerations in mind:

Smaller VM types don't allow as many attached disks as larger VMs
Smaller VMs have less network and storage throughput
Resizing across VM families can be a problem when the different VMs are collected in one Azure availability set, or when the resizing should happen between the M-Series family and Mv2 family of VMs
The database instance needs enough CPU and memory resources to receive the stream of changes with minimal delay, and to apply these changes to the data with minimal delay

More details on limitations of different VM sizes can be found on the VM sizes page
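If you deliberately undersize the DR target, one mitigation at failover time is to resize the VM up to the production SKU before it takes over the full workload. A hedged Azure CLI sketch follows; resource group, VM name, and target size are placeholders.

```bash
# Hedged sketch: grow an intentionally smaller DR database VM to the
# production size during failover (names and size are placeholders;
# the target size must be available in the DR region).
az vm resize --resource-group rg-sap-dr --name vm-hana-dr --size Standard_M64s
```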

Another supported method of deploying a DR target is to have a second DBMS instance installed on a VM that runs a non-production DBMS instance of a non-production SAP instance. This can be a bit more challenging, since you need to figure out how much memory, CPU resources, network bandwidth, and storage bandwidth are needed for the particular target instances that should function as main instances in the DR scenario. Especially with HANA, it's highly recommended that you configure the instance that functions as the DR target on a shared host so that the data isn't preloaded into the DR target instance.

7 Note

Usage of Azure Site Recovery hasn't been tested for DBMS deployments under SAP workload. As a result, it's not supported for the DBMS layer of SAP systems at this point in time. Other methods of replication by Microsoft and SAP that aren't listed aren't supported. Using third-party software for replicating the DBMS layer of SAP systems between different Azure regions needs to be supported by the vendor of the software, and won't be supported through Microsoft and SAP support channels.

Non-DBMS layer
For the SAP application layer and eventual shares or storage locations that are needed, the two major scenarios used by customers are:

The disaster recovery targets in the second Azure region aren't being used for any production or non-production purposes. In this scenario, the VMs that function as disaster recovery targets are ideally not deployed, and the image and changes to the images of the production SAP application layer are replicated to the disaster recovery region. A functionality that can perform such a task is Azure Site Recovery. Azure Site Recovery supports an Azure-to-Azure replication scenario like this.
The disaster recovery targets are VMs that are actually in use by non-production systems. The whole SAP landscape is spread across two different Azure regions, with production systems usually in one region and non-production systems in another region. In many customer deployments, the customer has a non-production system that is equivalent to a production system. The customer has production application instances pre-installed on the application layer of the non-production systems. In a failover event, the non-production instances would be shut down, the virtual names of the production VMs moved to the non-production VMs (after assigning new IP addresses in DNS), and the pre-installed production instances are started.

SAP Central Services clusters

SAP Central Services clusters that are using shared disks (Windows), SMB shares (Windows), or NFS shares are a bit harder to replicate. On the Windows side, Windows Storage Replica is a possible solution. On Linux, rsync is a viable solution, as sketched below. Also, cross-region replication of Azure NetApp Files is a viable solution.
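A hedged sketch of the rsync option mentioned above; the host name, SID, and path are placeholders, and in practice you'd run this on a schedule, for example via cron.

```bash
# Hedged sketch: one-way replication of the sapmnt content to a
# DR-region NFS host (placeholders throughout).
rsync -az --delete /sapmnt/SID/ azureuser@dr-nfs-host:/sapmnt/SID/
```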

Non-supported scenarios
There's a list of scenarios that aren't supported for SAP workload on Azure architectures. Not supported means SAP and Microsoft are unable to deliver support for these configurations and need to defer to an eventually involved third party that provided software to establish such architectures. Two of the categories are:

Storage soft appliances: There are various storage soft appliances in the market. Some of the vendors offer their own documentation on how to use their storage soft appliances on Azure related to SAP software. Support of configurations or deployments involving such storage soft appliances needs to be provided by the vendor of the storage soft appliance. This fact is also manifested in SAP support note #2015553
High availability frameworks: Only Pacemaker and Windows Server Failover Cluster are supported high availability frameworks for SAP workload on Azure. As mentioned earlier, the solution of SIOS Datakeeper is described and documented by Microsoft. Nevertheless, the components of SIOS Datakeeper need to be supported through SIOS as the vendor providing those components. SAP also listed other certified high availability frameworks in various SAP notes. Some of them were certified by the third-party vendor for Azure as well. Nevertheless, support for configurations using those products needs to be provided by the product vendor. Different vendors have different integration into the SAP support processes. You should clarify which support process works best for the particular vendor before deciding to use the product with SAP configurations deployed on Azure.
Shared disk clusters where database files reside on the shared disks aren't supported, except for maxDB. For all other databases, the supported solution is to have separate storage locations, instead of an SMB or NFS share or shared disk, to configure high-availability scenarios

Other scenarios that aren't supported include:

Deployment scenarios that introduce a larger network latency between the SAP application tier and the SAP DBMS tier, as in NetWeaver, S/4HANA, and, for example, Hybris. This includes:
Deploying one of the tiers on-premises whereas the other tier is deployed in Azure
Deploying the SAP application tier of a system in a different Azure region than the DBMS tier
Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where such an architecture pattern is provided by an Azure native service
Deploying network virtual appliances between the SAP application tier and the DBMS layer
Using storage that is hosted in datacenters co-located to Azure datacenters for the SAP DBMS tier or SAP global transport directory
Deploying the two layers with two different cloud vendors. For example, deploying the DBMS tier in Oracle Cloud Infrastructure and the application tier in Azure
Multi-instance HANA Pacemaker cluster configurations
Windows cluster configurations with shared disks through SOFS or SMB on ANF for SAP databases supported on Windows. Instead, we recommend the usage of native high availability replication of the particular databases and separate storage stacks
Deployment of SAP databases supported on Linux with database files located on NFS shares on top of ANF, except for SAP HANA, Oracle on Oracle Linux, and Db2 on SUSE and Red Hat
Deployment of Oracle DBMS on any other guest OS than Windows and Oracle Linux. See also SAP support note #2039619

Scenarios that we didn't test, and therefore have no experience with, include:

Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend using the database native asynchronous replication functionality for potential disaster recovery configurations

Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP
NetWeaver
What SAP software is supported for
Azure deployments
Article • 02/10/2023

This article describes how you can find out what SAP software is supported for Azure
deployments and what the necessary operating system releases or DBMS releases are.

To evaluate whether your current SAP software is supported, and which OS and DBMS releases are supported with your SAP software in Azure, you need access to:

SAP support notes


SAP Product Availability Matrix

General restrictions for SAP workload

Azure IaaS services that can be used for SAP workload are limited to x86-64 or x64 hardware. There are no SPARC or Power CPU-based offers that apply to SAP workload. Customers who run their applications on operating systems proprietary to hardware architectures like IBM mainframe or AS/400, or where the operating systems HP-UX, Solaris, or AIX are in use, need to change their SAP applications, including the DBMS, to one of the following operating systems:

Windows Server 64-bit for the x86-64 platform
SUSE Linux 64-bit for the x86-64 platform
Red Hat Linux 64-bit for the x86-64 platform
Oracle Linux 64-bit for the x86-64 platform

In combination with SAP software, no other OS releases or Linux distributions are supported. Exact details on specific versions and cases are documented later in the document.

You start here

The starting point for you is SAP support note #1928533 . As you go through this SAP note from top to bottom, several areas of supported software and VMs are shown.

The first section lists the minimum requirements for operating system releases that are supported with SAP software in Azure VMs in general. If you don't meet those minimum requirements and run older releases of these operating systems, you need to upgrade your OS release to such a minimum release or even more recent releases. It's correct that Azure in general would support older releases of some of those operating systems. But the restrictions or minimum releases as listed are based on tests and qualifications executed, and aren't going to be extended further back.

7 Note

There are some specific VM types, HANA Large Instances, or SAP workloads that are going to require more recent OS releases. Such cases are mentioned throughout the document and are clearly documented either in SAP notes or other SAP publications.

The following section lists the general SAP platforms that are supported, with the releases that are supported and, more importantly, the SAP kernels that are supported. It lists the NetWeaver/ABAP or Java stacks that are supported AND which minimum kernel releases they need. More recent ABAP stacks are supported on Azure, but don't need minimum kernel releases, since changes for Azure were implemented from the start of the development of the more recent stacks.

You need to check:

Whether the SAP applications you are running are covered by the minimum releases stated. If not, you need to define a new target release and check in the SAP Product Availability Matrix what operating system builds and DBMS combinations are supported with the new target release, so that you can choose the right operating system release and DBMS release
Whether you need to update your SAP kernels in a move to Azure
Whether you need to update SAP Support Packages. Especially Basis Support Packages, which can be required for cases where you're required to move to a more recent DBMS release

The next section goes into more details on other SAP products and DBMS releases that
are supported by SAP on Azure for Windows and Linux.

7 Note

The minimum releases of the different DBMS are carefully chosen and might not always reflect the whole spectrum of DBMS releases the different DBMS vendors support on Azure in general. Many SAP workload related considerations were taken into account to define those minimum releases. There's no effort to test and qualify older DBMS releases.
7 Note

The minimum releases listed represent older versions of operating systems and database releases. We highly encourage you to use the most recent operating system releases and database releases. In a lot of cases, more recent operating system and database releases took the use case of running in public cloud into consideration and adapted code to optimize for running in public cloud, or more specifically, Azure.

Oracle DBMS support

Operating systems, Oracle DBMS releases, and Oracle functionality supported on Azure are specifically listed in SAP support note #2039619 . The essence of that note can be summarized as follows:

The minimum Oracle release supported on Azure VMs that are certified for NetWeaver is Oracle 11g Release 2 Patchset 3 (11.2.0.4)
As guest operating systems, only Windows and Oracle Linux qualify. Exact releases of the OS and related minimum DBMS releases are listed in the note
The support of Oracle Linux extends to the Oracle DBMS client as well. This means that all SAP components, like dialog instances of the ABAP or Java stack, need to run on Oracle Linux as well. Only SAP components within such an SAP system that wouldn't connect to the Oracle DBMS would be allowed to run a different Linux operating system
Oracle RAC isn't supported
Oracle ASM is supported for some of the cases. Details are listed in the note
Non-Unicode SAP systems are only supported with application servers running the Windows guest OS. The guest operating system of the DBMS can be Oracle Linux or Windows. The reason for this restriction is apparent when checking the SAP Product Availability Matrix (PAM): for Oracle Linux, SAP never released non-Unicode SAP kernels

Knowing the DBMS releases that are supported with the targeted Azure infrastructure, you need to check in the SAP Product Availability Matrix whether the OS releases and DBMS releases required are supported with the SAP product releases you intend to run.

Oracle Linux
The most prominent question asked around Oracle Linux is whether SAP also supports the Red Hat kernel that is an integral part of Oracle Linux. For details, read SAP support note #1565179 .

Other databases than SAP HANA

Support of non-HANA databases for SAP workload is documented in SAP support note #1928533 .

SAP HANA support

In Azure, there are two services that can be used to run the HANA database:

Azure Virtual Machines
HANA Large Instances

For running SAP HANA, SAP has more and stricter conditions the infrastructure needs to meet than for running NetWeaver or other SAP applications and DBMS. As a result, a smaller number of Azure VMs qualify for running the SAP HANA DBMS. The list of Azure infrastructure supported for SAP HANA can be found in the so-called SAP HANA hardware directory .

7 Note

The units starting with the letter 'S' are HANA Large Instances units.

7 Note

SAP has no specific certification dependent on the SAP HANA major releases. Contrary to common opinion, the column 'Certification scenario' in the HANA certified IaaS platforms makes no statement about the HANA major or minor release certified. You need to assume that all the units listed can be used for HANA 1.0 and HANA 2.0, as long as the certified operating system releases for the specific units are supported by HANA 1.0 releases as well.

For the usage of SAP HANA, different minimum OS releases may apply than for the general NetWeaver cases. You need to check the supported operating systems for each unit individually, since those might vary. You do so by clicking on each unit. More details appear, and one of the details listed is the different operating systems supported for this specific unit.
7 Note

Azure HANA Large Instance units are more restrictive with supported operating systems compared to Azure VMs. On the other hand, Azure VMs may enforce more recent operating system releases as minimum releases. This is especially true for some of the larger VM units that required changes to Linux kernels.

Knowing the supported OS for the Azure infrastructure, you need to check SAP support note #2235581 for the exact SAP HANA releases and patch levels that are supported with the Azure units you're targeting.

) Important

The step of checking the exact SAP HANA releases and patch levels supported is
very important. In a lot of cases, support of a certain OS release is dependent on a
specific patch level of the SAP HANA executables.

Once you know the specific HANA releases you can run on the targeted Azure infrastructure, you need to check in the SAP Product Availability Matrix whether there are restrictions with the SAP product releases that support the HANA releases you filtered out.

Certified Azure VMs and HANA Large Instance units and business transaction throughput
Besides evaluating supported operating system releases, DBMS releases, and dependent supported SAP software releases for Azure infrastructure units, you need to qualify these units by business transaction throughput, which SAP expresses in the unit 'SAPS'. All SAP sizing depends on SAPS calculations. When evaluating existing SAP systems, you usually can, with the help of your infrastructure provider, calculate the SAPS of the units, for the DBMS layer as well as for the application layer. In other cases, where new functionality is created, a sizing exercise with SAP can reveal the required SAPS numbers for the application layer and the DBMS layer. As an infrastructure provider, Microsoft is obliged to provide the SAP throughput characterization of the different units that are either NetWeaver- and/or HANA-certified.

For Azure VMs, these SAPS throughput numbers are documented in SAP support note #1928533 . For Azure HANA Large Instance units, the SAPS throughput numbers are documented in SAP support note #2316233 .

Looking into SAP support note #1928533 , the following remarks apply:

For M-Series Azure VMs and Mv2-Series Azure VMs, different minimum OS releases apply than for other Azure VM types. The requirement for more recent OS releases is based on changes the different operating system vendors had to provide in their operating system releases, to either enable their operating systems to run on the specific Azure VM types or to optimize performance and throughput of SAP workload on those VM types
There are two tables that specify different VM types. The second table specifies SAPS throughput for Azure VM types that support Azure Standard Storage only. DBMS deployment on the units specified in the second table of the note isn't supported

Other SAP products supported on Azure

In general, the assumption is that with the state of hyperscale clouds like Azure, most SAP software should run without functional problems in Azure. Nevertheless, and opposite to private cloud virtualization, SAP still expresses support for the different SAP products explicitly for the different hyperscale cloud providers. As a result, there are different SAP support notes indicating support for Azure for different SAP products.

For the Business Objects BI platform, SAP support note #2145537 gives a list of SAP Business Objects products supported on Azure. If there are questions around components or combinations of software releases and OS releases that seem not to be listed or supported, and which are more recent than the minimum releases listed, you need to open an SAP support request against the component you inquire support for.

For Business Objects Data Services, SAP support note #22288344 explains the minimum support of SAP Data Services running on Azure.

7 Note

As indicated in the SAP support note, you need to check in the SAP PAM to identify the correct support package level to be supported on Azure.

SAP Datahub/Vora support in Azure Kubernetes Service (AKS) is detailed in SAP support note #2464722 .

Support for SAP BPC 10.1 SP08 is described in SAP support note #2451795

Support for SAP Hybris Commerce Platform on Azure is detailed in the Hybris Documentation . The supported DBMS for SAP Hybris Commerce Platform are:

SQL Server and Oracle on the Windows operating system platform. The same minimum releases apply as for SAP NetWeaver. See SAP support note #1928533 for details
SAP HANA on Red Hat and SUSE Linux. SAP HANA certified VM types are required, as documented earlier in this document. SAP (Hybris) Commerce Platform is considered OLTP workload
Azure SQL Database as of SAP (Hybris) Commerce Platform version 1811

Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP
NetWeaver
SAP workloads on Azure: planning and
deployment checklist
Article • 06/14/2023

This checklist is designed for customers moving SAP applications to Azure infrastructure
as a service. SAP applications in this document represent SAP products running the SAP
kernel, including SAP NetWeaver, S/4HANA, BW and BW/4 and others. Throughout the
duration of the project, a customer and/or SAP partner should review the checklist. It's
important to note that many of the checks are completed at the beginning of the
project and during the planning phase. After the deployment is done, straightforward
changes on deployed Azure infrastructure or SAP software releases can become
complex.

Review the checklist at key milestones during your project. Doing so will enable you to
detect small problems before they become large problems. You'll also have enough time
to re-engineer and test any necessary changes. Don't consider this checklist complete.
Depending on your situation, you might need to perform additional checks.

The checklist doesn't include tasks that are independent of Azure. For example, SAP
application interfaces change during a move to the Azure platform or to a hosting
provider. SAP documentation and support notes will also contain further tasks, which
are not Azure specific but need to be part of your overall planning checklist.

This checklist can also be used for systems that are already deployed. New features or
changed recommendations might apply to your environment. It's useful to review the
checklist periodically to ensure you're aware of new features in the Azure platform.

Main content in this document is organized in tabs, in a typical project's chronological order. See the content of each tab, and consider each next tab to build on top of the actions done and learnings obtained in the previous phase. For production migration, the content of all tabs should be considered, not just the production tab. To help you map typical project phases with the phase definitions used in this article, consult the table below.

Deployment checklist phase       Example project phases or milestones

Preparation and planning phase   Project kick-off / design and definition phase
Pilot phase                      Early validation / proof of concept / pilot
Non-production phase             Completion of the detailed design / non-production environment builds / testing phase
Production preparation phase     Dress rehearsal / user acceptance testing / mock cut-over / go-live checks
Go-live phase                    Production cut-over and go-live
Post-production phase            Hypercare / transition to business as usual

Planning phase

Project preparation and planning phase


During this phase, you plan the migration of your SAP workload to the Azure platform. Documents such as the planning guide for SAP on Azure and the Cloud Adoption Framework for SAP cover many topics and help as information in your preparation. At a minimum, during this phase you need to create the following documents and define and discuss the following elements of the migration:

High-level design document

This document should contain:

The current inventory of SAP components and applications, and a target


application inventory for Azure.
A responsibility assignment matrix (RACI) that defines the responsibilities and
assignments of the parties involved. Start at a high level, and work to more
granular levels throughout planning and the first deployments.
A high-level solution architecture. Best practices and example architectures
from Azure Architecture Center should be consulted.
A decision about which Azure regions to deploy to. See the list of Azure
regions , and list of regions with availability zone support. To learn which
services are available in each region, see products available by region .
A networking architecture to connect from on-premises to Azure. Start to
familiarize yourself with the Azure enterprise scale landing zone concept.
Security principles for running high-impact business data in Azure. To learn
about data security, start with the Azure security documentation.
Storage strategy to cover block devices (Managed Disk) and shared
filesystems (such as Azure Files or Azure NetApp Files) that should be further
refined to file-system sizes and layouts in the technical design document.

Technical design document


This document should contain:

A block diagram for the solution showing the SAP and non-SAP applications
and services
An SAP Quicksizer project based on business document volumes. The output of the Quicksizer is then mapped to compute, storage, and networking components in Azure. As an alternative to the SAP Quicksizer, perform diligent sizing based on the current workload of the source SAP systems, taking into account the available information, such as DBMS workload reports, SAP EarlyWatch reports, and compute and storage performance indicators.
Business continuity and disaster recovery architecture.
Detailed information about OS, DB, kernel, and SAP support pack versions. It's
not necessarily true that every OS release supported by SAP NetWeaver or
S/4HANA is supported on Azure VMs. The same is true for DBMS releases.
Check the following sources to align and if necessary, upgrade SAP releases,
DBMS releases, and OS releases to ensure SAP and Azure support. You need
to have release combinations supported by SAP and Azure to get full support
from SAP and Microsoft. If necessary, you need to plan for upgrading some
software components. More details on supported SAP, OS, and DBMS
software are documented here:
What SAP software is supported for Azure deployments
SAP note 1928533 - SAP Applications on Microsoft Azure: Supported
Products and Azure VM types . This note defines the minimum OS and
DBMS releases supported on Azure VMs. Note also provides the SAP sizing
for SAP-supported Azure VMs.
SAP note 2015553 - SAP on Microsoft Azure: Support prerequisites . This note defines prerequisites around Azure storage, networking, monitoring, and the support relationship needed with Microsoft.
SAP note 2039619 . This note defines the Oracle support matrix for Azure.
Oracle supports only Windows and Oracle Linux as guest operating systems
on Azure for SAP workloads. This support statement also applies for the
SAP application layer that runs SAP instances, as long they contain Oracle
Client.
SAP HANA-supported Azure VMs are listed on the SAP website . Details
for each entry contain specifics and requirements, including supported OS
version. This might not match latest OS version as per SAP note 2235581 .
SAP Product Availability Matrix .

Further included in the same technical document(s) should be:

Storage Architecture high level decisions based on Azure storage types for
SAP workload
Managed Disks attached to each VM
Filesystem layouts and sizing
SMB and/or NFS volume layout and sizes, mount points where applicable
High availability, backup and disaster recovery architecture
Based on RTO and RPO, define what the high availability and disaster
recovery architecture needs to look like.
Understand the use of different deployment types for optimal protection.
Considerations for Azure Virtual Machines DBMS deployment for SAP
workloads and related documents. In Azure, using a shared disk
configuration for the DBMS layer as, for example, described for SQL Server,
isn't supported. Instead, use solutions like:
SQL Server Always On
HANA System Replication
Oracle Data Guard
IBM Db2 HADR
For disaster recovery across Azure regions, review the solutions offered by
different DBMS vendors. Most of them support asynchronous replication or
log shipping.
For the SAP application layer, determine whether you'll run your business
regression test systems, which ideally are replicas of your production
deployments, in the same Azure region or in your DR region. In the second
case, you can target that business regression system as the DR target for
your production deployments.
Look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see how to set up disaster recovery for a multi-tier SAP NetWeaver app deployment.
For projects required to remain in a single region for compliance reasons,
consider a combined HADR configuration by using Azure Availability Zones.
An inventory of all SAP interfaces and the connected systems (SAP and non-
SAP).
Design of foundation services. This design should include the following items,
many of which are covered by the landing zone accelerator for SAP:
Network topology within Azure and assignment of the different SAP environments
Active Directory and DNS design.
Identity management solution for both end users and administration
Azure role-based access control (Azure RBAC) structure for teams that
manage infrastructure and SAP applications in Azure.
Azure resource naming strategy
Security operations for Azure resources and workloads
Security concept for protecting your SAP workload. This should include all
aspects – networking and perimeter monitoring, application and database
security, operating systems securing, and any infrastructure measures
required, such as encryption. Identify the requirements with your compliance
and security teams.
Microsoft recommends either a Professional Direct, Premier, or Unified Support contract. Identify your escalation paths and contacts for support with Microsoft. For SAP support requirements, see SAP note 2015553 .
The number of Azure subscriptions and core quota for the subscriptions. Open
support requests to increase quotas of Azure subscriptions as needed.
Data reduction and data migration plan for migrating SAP data into Azure. For
SAP NetWeaver systems, SAP has guidelines on how to limit the volume of
large amounts of data. See this SAP guide about data management in SAP
ERP systems. Some of the content also applies to NetWeaver and S/4HANA
systems in general.
An automated deployment approach. Many customers start with scripts, using a combination of PowerShell, CLI, Ansible, and Terraform. Microsoft-developed solutions for SAP deployment automation are:
Azure Center for SAP solutions – Azure service to deploy and operate a SAP
system’s infrastructure
SAP on Azure Deployment Automation, an open-source orchestration tool
for deploying and maintaining SAP environments

7 Note

Define a regular design and deployment review cadence between you as the
customer, the system integrator, Microsoft, and other involved parties.

Automated checks and insights in the SAP landscape
Several of the checks above are performed in an automated way with the SAP on Azure Quality Check Tool . These checks can be executed automatically with the provided open-source project. While no automatic remediation of issues found is performed, the tool warns about configurations that go against Microsoft recommendations.

Tip

The same quality checks and additional insights are executed regularly when SAP systems are deployed or registered with Azure Center for SAP solutions, and are part of the service.

Further tools that allow easier deployment checks, help document findings, plan the next remediation steps, and generally optimize your SAP on Azure landscape are:

Azure Well-Architected Framework review: An assessment of your workload focusing on the five main pillars of reliability, security, cost optimization, operational excellence, and performance efficiency. It supports SAP workloads, and running a review at the start and after every project phase is recommended.
Azure Inventory Checks for SAP: An open-source Azure Monitor workbook, which shows your Azure inventory with intelligence to highlight configuration drift and improve quality.

Next steps
See these articles:

Azure planning and implementation for SAP NetWeaver
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
Azure Virtual Machines deployment for SAP NetWeaver
Plan and implement an SAP deployment
on Azure
Article • 05/30/2023

In Azure, organizations can get the cloud resources and services they need without
completing a lengthy procurement cycle. But running your SAP workload in Azure
requires knowledge about the available options and careful planning to choose the
Azure components and architecture to power your solution.

Azure offers a comprehensive platform for running your SAP applications. Azure
infrastructure as a service (IaaS) and platform as a service (PaaS) offerings combine to
give you optimal choices for a successful deployment of your entire SAP enterprise
landscape.

This article complements SAP documentation and SAP Notes, the primary sources for
information about how to install and deploy SAP software on Azure and other platforms.

Definitions
Throughout this article, we use the following terms:

SAP component: An individual SAP application like SAP S/4HANA, SAP ECC, SAP
BW, or SAP Solution Manager. An SAP component can be based on traditional
Advanced Business Application Programming (ABAP) or Java technologies, or it
can be an application that's not based on SAP NetWeaver, like SAP
BusinessObjects.
SAP environment: Multiple SAP components that are logically grouped to perform
a business function, such as development, quality assurance, training, disaster
recovery, or production.
SAP landscape: The entire set of SAP assets in an organization's IT landscape. The
SAP landscape includes all production and nonproduction environments.
SAP system: The combination of a database management system (DBMS) layer
and an application layer. Two examples are an SAP ERP development system and
an SAP BW test system. In an Azure deployment, these two layers can't be
distributed between on-premises and Azure. An SAP system must be either
deployed on-premises or deployed in Azure. However, you can operate different
systems within an SAP landscape in either Azure or on-premises.

Resources
The entry point for documentation that describes how to host and run an SAP workload
on Azure is Get started with SAP on an Azure virtual machine. In the article, you find
links to other articles that cover:

SAP workload specifics for storage, networking, and supported options.
SAP DBMS guides for various DBMS systems on Azure.
SAP deployment guides, both manual and automated.
High availability and disaster recovery details for an SAP workload on Azure.
Integration of SAP on Azure with other services and third-party applications.

Important

For prerequisites, the installation process, and details about specific SAP
functionality, it's important to read the SAP documentation and guides carefully.
This article covers only specific tasks for SAP software that's installed and operated
on an Azure virtual machine (VM).

The following SAP Notes form the base of the Azure guidance for SAP deployments:

Note number Title

1928533 SAP Applications on Azure: Supported Products and Sizing

2015553 SAP on Azure: Support Prerequisites

2039619 SAP Applications on Azure using the Oracle Database

2233094 DB6: SAP Applications on Azure Using IBM Db2 for Linux, UNIX, and Windows

1999351 Troubleshooting Enhanced Azure Monitoring for SAP

1409604 Virtualization on Windows: Enhanced Monitoring

2191498 SAP on Linux with Azure: Enhanced Monitoring

2731110 Support of Network Virtual Appliances (NVA) for SAP on Azure

For general default and maximum limitations of Azure subscriptions and resources, see
Azure subscription and service limits, quotas, and constraints.
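
For example, before you request a quota increase, you can check the current vCPU
consumption per VM family in a region by using the Azure CLI. This is a minimal
sketch; the region name is a placeholder:

    # List compute quota usage for a region; compare CurrentValue with Limit
    az vm list-usage --location westeurope --output table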

Scenarios
SAP services often are considered among the most mission-critical applications in an
enterprise. The applications' architecture and operations are complex, and it's important
to ensure that all requirements for availability and performance are met. An enterprise
typically thinks carefully about which cloud provider to choose to run such
business-critical processes.

Azure is the ideal public cloud platform for business-critical SAP applications and
business processes. Most current SAP software, including SAP NetWeaver and SAP
S/4HANA systems, can be hosted in the Azure infrastructure today. Azure offers more
than 800 CPU types and VMs that have many terabytes of memory.

For descriptions of supported scenarios and some scenarios that aren't supported, see
SAP on Azure VMs supported scenarios. Check these scenarios and the conditions that
are indicated as not supported as you plan the architecture that you want to deploy to
Azure.

To successfully deploy SAP systems to Azure IaaS or to IaaS in general, it's important to
understand the significant differences between the offerings of traditional private clouds
and IaaS offerings. A traditional host or outsourcer adapts infrastructure (network,
storage, and server type) to the workload that a customer wants to host. In an IaaS
deployment, it's the customer's or partner's responsibility to evaluate their potential
workload and choose the correct Azure components of VMs, storage, and network.

To gather data for planning your deployment to Azure, it's important to:

Determine what SAP products and versions are supported in Azure.


Evaluate whether the operating system releases you plan to use are supported with
the Azure VMs you would choose for your SAP products.
Determine what DBMS releases on specific VMs are supported for your SAP
products.
Evaluate whether upgrading or updating your SAP landscape is necessary to align
with the required operating system and DBMS releases for achieving a supported
configuration.
Evaluate whether you need to move to different operating systems to deploy in
Azure.

Details about supported SAP components on Azure, Azure infrastructure units, and
related operating system releases and DBMS releases are explained in SAP software that
is supported for Azure deployments. The knowledge that you gain from evaluating
support and dependencies between SAP releases, operating system releases, and DBMS
releases has a substantial impact on your efforts to move your SAP systems to Azure.
You learn whether significant preparation efforts are involved, for example, whether you
need to upgrade your SAP release or switch to a different operating system.

First steps to plan a deployment


The first step in deployment planning isn't to look for VMs that are available to run SAP
applications.

The first steps to plan a deployment are to work with compliance and security teams in
your organization to determine what the boundary conditions are for deploying which
type of SAP workload or business process in a public cloud. The process can be time-
consuming, but it's critical groundwork to complete.

If your organization has already deployed software in Azure, the process might be easy.
If your company is at an earlier stage of the journey, larger discussions might be
necessary to figure out the boundary conditions, security conditions, and enterprise
architecture that allows certain SAP data and SAP business processes to be hosted in a
public cloud.

Plan for compliance


For a list of Microsoft compliance offers that can help you plan for your compliance
needs, see Microsoft compliance offerings.

Plan for security


For information about SAP-specific security concerns, like data encryption for data at
rest or other encryption in an Azure service, see Azure encryption overview and Security
for your SAP landscape.

Organize Azure resources


Together with the security and compliance review, if you haven't done this task yet, plan
how you organize your Azure resources. The process includes making decisions about:

A naming convention that you use for each Azure resource, such as for VMs and
resource groups.
A subscription and management group design for your SAP workload, such as
whether multiple subscriptions should be created per workload, per deployment
tier, or for each business unit.
Enterprise-wide usage of Azure Policy for subscriptions and management groups.

To help you make the right decisions, many details of enterprise architecture are
described in the Azure Cloud Adoption Framework.

Don't underestimate the initial phase of the project in your planning. Only when you
have agreements and rules in place for compliance, security, and Azure resource
organization should you advance your deployment planning.

The next steps are planning geographical placement and the network architecture that
you deploy in Azure.

Azure geographies and regions


Azure services are available within separate Azure regions. An Azure region is a
collection of datacenters. The datacenters contain the hardware and infrastructure that
host and run the Azure services that are available in the region. The infrastructure
includes a large number of nodes that function as compute nodes or storage nodes, or
which run network functionality.

For a list of Azure regions, see Azure geographies . For an interactive map, see Azure
global infrastructure .

Not all Azure regions offer the same services. Depending on the SAP product you want
to run, your sizing requirements, and the operating system and DBMS you need, it's
possible that a particular region doesn't offer the VM types that are required for your
scenario. For example, if you're running SAP HANA, you usually need VMs of the various
M-series VM families. These VM families are deployed in only a subset of Azure regions.

As you start to plan and think about which regions to choose as primary region and
eventually secondary region, you need to investigate whether the services that you need
for your scenarios are available in the regions you're considering. You can learn exactly
which VM types, Azure storage types, and other Azure services are available in each
region in Products available by region .

Azure paired regions


In an Azure paired region, replication of certain data is enabled by default between the
two regions. For more information, see Cross-region replication in Azure: Business
continuity and disaster recovery.

Data replication in a region pair is tied to types of Azure storage that you can configure
to replicate into a paired region. For details, see Storage redundancy in a secondary
region.

The storage types that support paired region data replication are storage types that
aren't suitable for SAP components and a DBMS workload. The usability of the Azure
storage replication is limited to Azure Blob Storage (for backup purposes), file shares
and volumes, and other high-latency storage scenarios.
As you check for paired regions and the services that you want to use in your primary or
secondary regions, it's possible that the Azure services or VM types that you intend to
use in your primary region aren't available in the paired region that you want to use as a
secondary region. Or you might determine that an Azure paired region isn't acceptable
for your scenario because of data compliance reasons. For those scenarios, you need to
use a nonpaired region as a secondary or disaster recovery region, and you need to set
up some of the data replication yourself.

Availability zones
Many Azure regions use availability zones to physically separate locations within an
Azure region. Each availability zone is made up of one or more datacenters that are
equipped with independent power, cooling, and networking. An example of using an
availability zone to enhance resiliency is deploying two VMs in two separate availability
zones in Azure. Another example is to implement a high-availability framework for your
SAP DBMS system in one availability zone and deploy SAP (A)SCS in another availability
zone, so you get the best SLA in Azure.

For more information about VM SLAs in Azure, check the latest version of Virtual
Machines SLAs . Because Azure regions develop and extend rapidly, the topology of
the Azure regions, the number of physical datacenters, the distance between
datacenters, and the distance between Azure availability zones evolves. Network latency
changes as infrastructure changes.

Follow the guidance in SAP workload configurations with Azure availability zones when
you choose a region that has availability zones. Also determine which zonal deployment
model is best suited for your requirements, the region you choose, and your workload.

Fault domains
Fault domains represent a physical unit of failure. A fault domain is closely related to the
physical infrastructure that's contained in datacenters. Although a physical blade or rack
can be considered a fault domain, there isn't a direct one-to-one mapping between a
physical computing element and a fault domain.

When you deploy multiple VMs as part of one SAP system, you can indirectly influence
the Azure fabric controller to deploy your VMs to different fault domains, so that you
can meet requirements for availability SLAs. However, you don't have direct control of
the distribution of fault domains over an Azure scale unit (a collection of hundreds of
compute nodes or storage nodes and networking) or the assignment of VMs to a
specific fault domain. To maneuver the Azure fabric controller to deploy a set of VMs
over different fault domains, you need to assign an Azure availability set to the VMs at
deployment time. For more information, see Availability sets.

Update domains
Update domains represent a logical unit that sets how a VM in an SAP system that
consists of multiple VMs is updated. When a platform update occurs, Azure goes
through the process of updating these update domains one by one. By spreading VMs
at deployment time over different update domains, you can protect your SAP system
from potential downtime. Similar to fault domains, an Azure scale unit is divided into
multiple update domains. To maneuver the Azure fabric controller to deploy a set of
VMs over different update domains, you need to assign an Azure availability set to the
VMs at deployment time. For more information, see Availability sets.

Availability sets
Azure VMs within one Azure availability set are distributed by the Azure fabric controller
over different fault domains. The distribution over different fault domains is to prevent
all VMs of an SAP system from being shut down during infrastructure maintenance or if
a failure occurs in one fault domain. By default, VMs aren't part of an availability set. You
can add a VM in an availability set only at deployment time or when a VM is redeployed.

To learn more about Azure availability sets and how availability sets relate to fault
domains, see Azure availability sets.
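
As an illustration, the following Azure CLI sketch creates an availability set and
places a VM into it at deployment time. All resource names are placeholders, and the
image URN and VM size are examples only:

    # Create an availability set with explicit fault and update domain counts
    az vm availability-set create --resource-group rg-sap --name avset-sap-app \
      --platform-fault-domain-count 2 --platform-update-domain-count 5

    # A VM can join an availability set only when it's created
    az vm create --resource-group rg-sap --name sap-app-vm1 \
      --availability-set avset-sap-app \
      --image SUSE:sles-sap-15-sp5:gen2:latest --size Standard_E16ds_v5
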
Important

Availability zones and availability sets in Azure are mutually exclusive. You can
deploy multiple VMs to a specific availability zone or to an availability set. But not
both the availability zone and the availability set can be assigned to a VM.

You can combine availability sets and availability zones if you use proximity
placement groups.

As you define availability sets and try to mix various VMs of different VM families within
one availability set, you might encounter problems that prevent you from including a
specific VM type in an availability set. The reason is that the availability set is bound to a
scale unit that contains a specific type of compute host. A specific type of compute host
can run only on certain types of VM families.

For example, you create an availability set, and you deploy the first VM in the availability
set. The first VM that you add to the availability set is in the Edsv5 VM family. When you
try to deploy a second VM, a VM that's in the M family, this deployment fails. The reason
is that Edsv5 family VMs don't run on the same host hardware as the VMs in the M
family.

The same problem can occur if you're resizing VMs. If you try to move a VM out of the
Edsv5 family and into a VM type that's in the M family, the deployment fails. If you
resize to a VM family that can't be hosted on the same host hardware, you must shut
down all the VMs that are in your availability set and resize them all to be able to run on
the other host machine type. For information about SLAs of VMs that are deployed in an
availability set, see Virtual Machines SLAs .

Virtual machine scale sets with flexible orchestration


Virtual machine scale sets with flexible orchestration provide a logical grouping of
platform-managed virtual machines. You have the option to create a scale set within a
region or to span it across availability zones. When you create a flexible scale set
within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale
set are distributed across the specified number of fault domains in the same region.
When you create a flexible scale set across availability zones with
platformFaultDomainCount=1 (FD=1), the VMs are distributed across the specified zones,
and the scale set also distributes VMs across different fault domains within each zone
on a best-effort basis.
For SAP workloads, only flexible scale sets with FD=1 are supported. The advantage of
using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional
availability zone deployment, is that the VMs deployed with the scale set are
distributed across different fault domains within the zone in a best-effort manner. To
learn more about SAP workload deployment with scale sets, see the flexible virtual machine
scale set deployment guide.
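
The following Azure CLI sketch outlines this deployment pattern. Flag names can vary by
CLI version, and all resource names, the image URN, and the VM size are placeholder
examples:

    # Create a flexible orchestration scale set with FD=1 that spans three zones
    az vmss create --resource-group rg-sap --name vmss-flex-sap \
      --orchestration-mode Flexible --platform-fault-domain-count 1 \
      --zones 1 2 3

    # Attach a new VM to the scale set in a specific zone
    az vm create --resource-group rg-sap --name sap-db-vm1 \
      --vmss vmss-flex-sap --zone 1 \
      --image SUSE:sles-sap-15-sp5:gen2:latest --size Standard_M64s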

When deploying a high availability SAP workload on Azure, it's important to take into
account the various deployment types available, and how they can be applied across
different Azure regions (such as across zones, in a single zone, or in a region with no
zones). For more information, see High availability deployment options for SAP
workload.

Tip

Currently, there's no direct way to migrate an SAP workload deployed in availability
sets or availability zones to a flexible scale set with FD=1. To make the switch, you
need to re-create the VM and disks with zone constraints from the existing resources in
place. An open-source project includes PowerShell functions that you can use as a
sample to change a VM deployed in an availability set or availability zone to a flexible
scale set with FD=1. A blog post shows you how to modify an HA or non-HA SAP
system deployed in an availability set or availability zone to a flexible scale set with
FD=1.

Proximity placement groups


Network latency between individual SAP VMs can have significant implications for
performance. The network roundtrip time between SAP application servers and the
DBMS especially can have significant impact on business applications. Optimally, all
compute elements running your SAP VMs are located as closely as possible. This option
isn't possible in every combination, and Azure might not know which VMs to keep
together. In most situations and regions, the default placement fulfills network roundtrip
latency requirements.

When default placement doesn't meet network roundtrip requirements within an SAP
system, proximity placement groups can address this need. You can use proximity
placement groups with the location constraints of Azure region, availability zone, and
availability set to increase resiliency. With a proximity placement group, combining both
availability zone and availability set while setting different update and failure domains is
possible. A proximity placement group should contain only a single SAP system.
Although deployment in a proximity placement group can result in the most latency-
optimized placement, deploying by using a proximity placement group also has
drawbacks. Some VM families can't be combined in one proximity placement group, or
you might run into problems if you resize between VM families. The constraints of VM
families, regions, and availability zones might not support colocation. For details, and to
learn about the advantages and potential challenges of using a proximity placement
group, see Proximity placement group scenarios.

VMs that don't use proximity placement groups should be the default deployment
method in most situations for SAP systems. This default is especially true for zonal (a
single availability zone) and cross-zonal (VMs that are distributed between two
availability zones) deployments of an SAP system. Using proximity placement groups
should be limited to SAP systems and Azure regions when required only for
performance reasons.

Azure networking
Azure has a network infrastructure that maps to all scenarios that you might want to
implement in an SAP deployment. In Azure, you have the following capabilities:

Access to Azure services and access to specific ports in VMs that applications use.
Direct access to VMs via Secure Shell (SSH) or Windows Remote Desktop (RDP) for
management and administration.
Internal communication and name resolution between VMs and by Azure services.
On-premises connectivity between an on-premises network and Azure networks.
Communication between services that are deployed in different Azure regions.

For detailed information about networking, see Azure Virtual Network.

Designing networking usually is the first technical activity that you undertake when you
deploy to Azure. Supporting a central enterprise architecture like SAP frequently is part
of the overall networking requirements. In the planning stage, you should document the
proposed networking architecture in as much detail as possible. If you make a change at
a later point, like changing a subnet network address, you might have to move or delete
deployed resources.

Azure virtual networks


A virtual network is a fundamental building block for your private network in Azure. You
can define the address range of the network and separate the range into network
subnets. A network subnet can be available for an SAP VM to use or it can be dedicated
to a specific service or purpose. Some Azure services, like Azure Virtual Network and
Azure Application Gateway, require a dedicated subnet.

A virtual network acts as a network boundary. Part of the design that's required when
you plan your deployment is to define the virtual network, subnets, and private network
address ranges. You can't change the virtual network assignment for resources like
network interface cards (NICs) for VMs after the VMs are deployed. Making a change to
a virtual network or to a subnet address range might require you to move all deployed
resources to a different subnet.

Your network design should address several requirements for SAP deployment:

No network virtual appliances , such as a firewall, are placed in the communication
path between the SAP application and the DBMS layer of SAP products that use the
SAP kernel, such as S/4HANA or SAP NetWeaver.
Network routing restrictions are enforced by network security groups (NSGs) on
the subnet level. Group IPs of VMs into application security groups (ASGs) that are
maintained in the NSG rules, and that provide role, tier, and SID groupings of
permissions.
SAP application and database VMs run in the same virtual network, within the
same or different subnets of a single virtual network. Use different subnets for
application and database VMs. Alternatively, use dedicated application and DBMS
ASGs to group rules that are applicable to each workload type within the same
subnet.
Accelerated networking is enabled on all network cards of all VMs for SAP
workloads where technically possible.
Ensure secure access for dependency on central services, including for name
resolution (DNS), identity management (Windows Server Active Directory
domains/Microsoft Entra ID), and administrative access.
Provide access to and by public endpoints, as needed. Examples include for Azure
management for ClusterLabs Pacemaker operations in high availability or for Azure
services like Azure Backup.
Use multiple NICs only if they're necessary to create designated subnets that have
their own routes and NSG rules.

For examples of network architecture for SAP deployment, see the following articles:

SAP S/4HANA on Linux in Azure


SAP NetWeaver on Windows in Azure
Inbound and outbound internet communication for SAP on Azure

Virtual network considerations

Some virtual networking configurations have specific considerations to be aware of.

Configuring network virtual appliances in the communication path between the SAP
application layer and the DBMS layer of SAP components that use the SAP kernel,
such as S/4HANA or SAP NetWeaver, isn't supported.

Network virtual appliances in communication paths can easily double the network
latency between two communication partners. They also can restrict throughput in
critical paths between the SAP application layer and the DBMS layer. In some
scenarios, network virtual appliances can cause Pacemaker Linux clusters to fail.

The communication path between the SAP application layer and the DBMS layer
must be a direct path. The restriction doesn't include ASG and NSG rules if the ASG
and NSG rules allow a direct communication path.

Other scenarios in which network virtual appliances aren't supported are:


Communication paths between Azure VMs that represent Pacemaker Linux
cluster nodes and SBD devices as described in High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications.
Communication paths between Azure VMs and a Windows Server scale-out file
share that's set up as described in Cluster an SAP ASCS/SCS instance on a
Windows failover cluster by using a file share in Azure.

Segregating the SAP application layer and the DBMS layer into different Azure
virtual networks isn't supported. We recommend that you segregate the SAP
application layer and the DBMS layer by using subnets within the same Azure
virtual network instead of by using different Azure virtual networks.

If you set up an unsupported scenario that segregates two SAP system layers in
different virtual networks, the two virtual networks must be peered.

Network traffic between two peered Azure virtual networks is subject to transfer
costs. Each day, a huge volume of data that consists of many terabytes is
exchanged between the SAP application layer and the DBMS layer. You can incur
substantial cost if the SAP application layer and the DBMS layer are segregated
between two peered Azure virtual networks.

Name resolution and domain services

Resolving host name to IP address through DNS is often a crucial element for SAP
networking. You have many options to configure name and IP resolution in Azure.
Often, an enterprise has a central DNS solution that's part of the overall architecture.
Several options for implementing name resolution in Azure natively, instead of by
setting up your own DNS servers, are described in Name resolution for resources in
Azure virtual networks.

As with DNS services, there might be a requirement for Windows Server Active Directory
to be accessible by the SAP VMs or services.

IP address assignment
An IP address for a NIC remains claimed and used throughout the existence of a VM's
NIC. The rule applies to both dynamic and static IP assignment. It remains true whether
the VM is running or is shut down. Dynamic IP assignment is released if the NIC is
deleted, if the subnet changes, or if the allocation method changes to static.

It's possible to assign fixed IP addresses to VMs within an Azure virtual network.
Fixed IP addresses often are assigned for SAP systems that depend on external DNS
servers and static entries. The IP address remains assigned, either until the VM and its NIC is
deleted or until the IP address is unassigned. You need to take into account the overall
number of VMs (running and stopped) when you define the range of IP addresses for
the virtual network.

For more information, see Create a VM that has a static private IP address.
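
For example, you can switch the primary IP configuration of an existing NIC to a static
address by using the Azure CLI. A minimal sketch with placeholder names and addresses:

    # Setting an explicit private IP changes the allocation method to static
    az network nic ip-config update --resource-group rg-sap \
      --nic-name sap-db-vm1-nic --name ipconfig1 \
      --private-ip-address 10.10.1.10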

Note

You should decide between static and dynamic IP address allocation for Azure VMs
and their NICs. The guest operating system of the VM will obtain the IP that's
assigned to the NIC when the VM boots. You shouldn't assign static IP addresses in
the guest operating system to a NIC. Some Azure services like Azure Backup rely on
the fact that at least the primary NIC is set to DHCP and not to static IP addresses
inside the operating system. For more information, see Troubleshoot Azure VM
backup.

Secondary IP addresses for SAP host name virtualization

Each Azure VM's NIC can have multiple IP addresses assigned to it. A secondary IP can
be used for an SAP virtual host name, which is mapped to a DNS A record or DNS PTR
record. A secondary IP address must be assigned to the Azure NIC's IP configuration. A
secondary IP also must be configured within the operating system statically because
secondary IPs often aren't assigned through DHCP. Each secondary IP must be from the
same subnet that the NIC is bound to. A secondary IP can be added and removed from
an Azure NIC without stopping or deallocating the VM. To add or remove the primary IP
of a NIC, the VM must be deallocated.
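
The following Azure CLI sketch adds a secondary IP configuration for an SAP virtual
host name; names and addresses are placeholders. Remember to also configure the address
statically inside the guest operating system and to register it in DNS:

    # Add a secondary IP configuration to an existing NIC; no VM restart is needed
    az network nic ip-config create --resource-group rg-sap \
      --nic-name sap-app-vm1-nic --name ipconfig-vhost \
      --private-ip-address 10.10.1.11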

Note

On secondary IP configurations, the Azure load balancer's floating IP address isn't
supported. The Azure load balancer is used by SAP high-availability architectures
with Pacemaker clusters. In this scenario, the load balancer enables the SAP virtual
host names. For general guidance about using virtual host names, see SAP Note
962955 .

Azure Load Balancer with VMs running SAP

A load balancer typically is used in high-availability architectures to provide floating
IP addresses between active and passive cluster nodes. You also can use a load balancer
for a single VM to hold a virtual IP address for an SAP virtual host name. Using a load
balancer for a single VM is an alternative to using a secondary IP address on a NIC or to
using multiple NICs in the same subnet.

The standard load balancer modifies the default outbound access path because its
architecture is secure by default. VMs that are behind a standard load balancer might no
longer be able to reach the same public endpoints. Some examples are an endpoint for
an operating system update repository or a public endpoint of Azure services. For
options to provide outbound connectivity, see Public endpoint connectivity for VMs by
using the Azure standard load balancer.

Tip

The basic load balancer should not be used with any SAP architecture in Azure. The
basic load balancer is scheduled to be retired.

Multiple vNICs per VM

You can define multiple virtual network interface cards (vNICs) for an Azure VM, with
each vNIC assigned to any subnet in the same virtual network as the primary vNIC. With
the ability to have multiple vNICs, you can start to set up network traffic separation, if
necessary. For example, client traffic is routed through the primary vNIC and some
admin or back-end traffic is routed through a second vNIC. Depending on the operating
system and the image you use, traffic routes for NICs inside the operating system might
need to be set up for correct routing.

The type and size of a VM determines how many vNICs a VM can have assigned. For
information about functionality and restrictions, see Assign multiple IP addresses to VMs
by using the Azure portal.

Adding vNICs to a VM doesn't increase available network bandwidth. All network
interfaces share the same bandwidth. We recommend that you use multiple NICs only if
VMs need to access private subnets. We recommend a design pattern that relies on NSG
functionality and that simplifies the network and subnet requirements. The design
should use as few network interfaces as possible, and optimally just one. An exception is
HANA scale-out, in which a secondary vNIC is required for the HANA internal network.

Warning

If you use multiple vNICs on a VM, we recommend that you use a primary NIC's
subnet to handle user network traffic.

Accelerated networking

To further reduce network latency between Azure VMs, we recommend that you confirm
that Azure accelerated networking is enabled on every VM that runs an SAP workload.
Although accelerated networking is enabled by default for new VMs, per the
deployment checklist, you should verify the state. The benefits of accelerated
networking are greatly improved networking performance and latencies. Use it when
you deploy Azure VMs for SAP workloads on all supported VMs, especially for the SAP
application layer and the SAP DBMS layer. The linked documentation contains support
dependencies on operating system versions and VM instances.
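
A sketch of how to verify and enable the setting per NIC by using the Azure CLI, with
placeholder names; on some VM types, the VM might need to be deallocated before you
enable the setting:

    # Check whether accelerated networking is enabled on a NIC
    az network nic show --resource-group rg-sap --name sap-app-vm1-nic \
      --query enableAcceleratedNetworking

    # Enable accelerated networking on the NIC
    az network nic update --resource-group rg-sap --name sap-app-vm1-nic \
      --accelerated-networking true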

On-premises connectivity
SAP deployment in Azure assumes that a central, enterprise-wide network architecture
and communication hub are in place to enable on-premises connectivity. On-premises
network connectivity is essential to allow users and applications to access the SAP
landscape in Azure to access other central organization services, such as the central
DNS, domain, and security and patch management infrastructure.

You have many options to provide on-premises connectivity for your SAP on Azure
deployment. The networking deployment most often is a hub-spoke network topology,
or an extension of the hub-spoke topology, a global virtual WAN.
For on-premises SAP deployments, we recommend that you use a private connection
over Azure ExpressRoute. For smaller SAP workloads, remote regions, or smaller offices,
VPN on-premises connectivity is available. Using ExpressRoute with a VPN site-to-site
connection as a failover path is a possible combination of both services.

Outbound and inbound internet connectivity


Your SAP landscape requires connectivity to the internet, whether it's to receive
operating system repository updates, to establish a connection to the SAP SaaS
applications on their public endpoints, or to access an Azure service via its public
endpoint. Similarly, you might be required to provide access for your clients to SAP Fiori
applications, with internet users accessing services that are provided by your SAP
landscape. Your SAP network architecture requires you to plan for the path toward the
internet and for any incoming requests.

Secure your virtual network by using NSG rules, by using network service tags for known
services, and by establishing routing and IP addressing to your firewall or other network
virtual appliance. All of these tasks or considerations are part of the architecture.
Resources in private networks need to be protected by network Layer 4 and Layer 7
firewalls.

Communication paths with the internet are the focus of a best practices architecture.

Azure VMs for SAP workloads


Some Azure VM families are especially suitable for SAP workloads, and some more
specifically to an SAP HANA workload. The way to find the correct VM type and its
capability to support your SAP workload is described in What SAP software is supported
for Azure deployments. Also, SAP Note 1928533 lists all certified Azure VMs and their
performance capabilities as measured by the SAP Application Performance Standard
(SAPS) benchmark and limitations, if they apply. The VM types that are certified for an
SAP workload don't use over-provisioning for CPU and memory resources.

Beyond looking only at the selection of supported VM types, you need to check whether
those VM types are available in a specific region based on Products available by
region . At least as important is to determine whether the following capabilities for a
VM fit your scenario:

CPU and memory resources


Input/output operations per second (IOPS) bandwidth
Network capabilities
Number of disks that can be attached
Ability to use certain Azure storage types

To get this information for a specific VM family and type, see Sizes for virtual machines
in Azure.
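
For example, you can query the capabilities of candidate VM SKUs in a region by using
the Azure CLI. The region and size filter in this sketch are placeholders:

    # List M-series SKUs that are available in the region, with any restrictions
    az vm list-skus --location westeurope --size Standard_M \
      --resource-type virtualMachines --output table

    # Show detailed capabilities (vCPUs, memory, disk limits) of one SKU
    az vm list-skus --location westeurope --size Standard_M64s \
      --query "[].capabilities" --output json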

Pricing models for Azure VMs


For a VM pricing model, you can choose the option you prefer to use:

A pay-as-you-go pricing model


A one-year reserved or savings plan
A three-year reserved or savings plan
A spot pricing model

To get detailed information about VM pricing for different Azure services, operating
systems, and regions, see Virtual machines pricing .

To learn about the pricing and flexibility of one-year and three-year savings plans and
reserved instances, see these articles:

What are Azure savings plans for compute?


What are Azure Reservations?
Virtual machine size flexibility with Reserved VM Instances
How the Azure reservation discount is applied to virtual machines

For more information about spot pricing, see Azure Spot Virtual Machines .

Pricing for the same VM type might vary between Azure regions. Some customers
benefit from deploying to a less expensive Azure region, so information about pricing
by region can be helpful as you plan.

Azure also offers the option to use a dedicated host. Using a dedicated host gives you
more control of patching cycles for Azure services. You can schedule patching to
support your own schedule and cycles. This offer is specifically for customers who have a
workload that doesn't follow the normal cycle of a workload. For more information, see
Azure dedicated hosts.

Using an Azure dedicated host is supported for an SAP workload. Several SAP customers
who want to have more control over infrastructure patching and maintenance plans use
Azure dedicated hosts. For more information about how Microsoft maintains and
patches the Azure infrastructure that hosts VMs, see Maintenance for virtual machines in
Azure.
Operating system for VMs
When you deploy new VMs for an SAP landscape in Azure, either to install or to migrate
an SAP system, it's important to choose the correct operating system for your workload.
Azure offers a large selection of operating system images for Linux and Windows and
many suitable options for SAP systems. You also can create or upload custom images
from your on-premises environment, or you can consume or generalize from image
galleries.

For details and information about the options that are available:

Find Azure Marketplace images by using the Azure CLI or Azure PowerShell (a query sketch follows this list).
Create custom images for Linux or Windows.
Use VM Image Builder.
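
As a sketch, the following Azure CLI queries list SAP-relevant marketplace images; the
publisher and offer filters are examples:

    # List available SLES for SAP images (all versions; this can take a while)
    az vm image list --publisher SUSE --offer sles-sap --all --output table

    # The same idea for Red Hat Enterprise Linux for SAP offers
    az vm image list --publisher RedHat --offer RHEL-SAP --all --output table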

Plan for an operating system update infrastructure and its dependencies for your SAP
workload, if needed. Consider using a repository staging environment to keep all tiers of
an SAP landscape (sandbox, development, preproduction, and production) in sync by
using the same versions of patches and updates during your update time period.

Generation 1 and generation 2 VMs


In Azure, you can deploy a VM as either generation 1 or generation 2. Support for
generation 2 VMs in Azure lists the Azure VM families that you can deploy as generation
2. The article also lists functional differences between generation 1 and generation 2
VMs in Azure.

When you deploy a VM, the operating system image that you choose determines
whether the VM will be a generation 1 or a generation 2 VM. The latest versions of all
operating system images for SAP that are available in Azure (Red Hat Enterprise Linux,
SuSE Enterprise Linux, and Windows or Oracle Enterprise Linux) are available in both
generation 1 and generation 2. It's important to carefully select an image based on the
image description to deploy the correct generation of VM. Similarly, you can create
custom operating system images as generation 1 or generation 2, and the image
determines the VM's generation when the VM is deployed.

Note

We recommend that you use generation 2 VMs in all your SAP deployments in
Azure, regardless of VM size. All the latest Azure VMs for SAP are generation 2-
capable or are limited to only generation 2. Some VM families currently support
only generation 2 VMs. Some VM families that will be available soon might support
only generation 2.

You can determine whether a VM is generation 1 or generation 2 based on the
selected operating system image. You can't change an existing VM from one
generation to the other.

Changing a deployed VM from generation 1 to generation 2 isn't possible in Azure. To
change the VM generation, you must deploy a new VM that is the generation that you
want and reinstall your software on the new generation of VM. This change affects only
the base VHD image of the VM and has no impact on the data disks or attached
Network File System (NFS) or Server Message Block (SMB) shares. Data disks, NFS
shares, or SMB shares that originally were assigned to a generation 1 VM can be
attached to a new generation 2 VM.

Some VM families, like the Mv2-series, support only generation 2. The same
requirement might be true for new VM families in the future. In that scenario, an existing
generation 1 VM can't be resized to work with the new VM family. In addition to the
Azure platform's generation 2 requirements, your SAP components might have
requirements that are related to a VM's generation. To learn about any generation 2
requirements for the VM family you choose, see SAP Note 1928533 .
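
To check generation support before you deploy, you can query the platform by using the
Azure CLI. In this sketch, the region, VM size, and image URN are examples:

    # Supported VM generations are exposed as a SKU capability
    az vm list-skus --location westeurope --size Standard_M64s \
      --query "[].{name:name, generations:capabilities[?name=='HyperVGenerations'].value | [0]}" \
      --output table

    # A marketplace image version reports its generation directly
    az vm image show --location westeurope \
      --urn SUSE:sles-sap-15-sp5:gen2:latest --query hyperVGeneration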

Performance limits for Azure VMs


As a public cloud, Azure depends on sharing infrastructure in a secured manner
throughout its customer base. To enable scaling and capacity, performance limits are
defined for each resource and service. On the compute side of the Azure infrastructure,
it's important to consider the limits that are defined for each VM size.

Each VM has a different quota on disk and network throughput, the number of disks
that can be attached, whether it has local temporary storage that has its own
throughput and IOPS limits, memory size, and how many vCPUs are available.

Note

When you make decisions about VM size for an SAP solution on Azure, you must
consider the performance limits for each VM size. The quotas that are described in
the documentation represent the theoretical maximum attainable values. The
performance limit of IOPS per disk might be achieved with small input/output (I/O)
values (for example, 8 KB), but it might not be achieved with large I/O values (for
example, 1 MB).
Like VMs, the same performance limits exist for each storage type for an SAP workload
and for all other Azure services.

When you plan for and choose VMs to use in your SAP deployment, consider these
factors:

Start with the memory and CPU requirements. Separate out the SAPS requirements
for CPU power into the DBMS part and the SAP application parts. For existing
systems, the SAPS related to the hardware that you use often can be determined
or estimated based on existing SAP Standard Application Benchmarks . For newly
deployed SAP systems, complete a sizing exercise to determine the SAPS
requirements for the system.

For existing systems, the I/O throughput and IOPS on the DBMS server should be
measured. For new systems, the sizing exercise for the new system also should give
you a general idea of the I/O requirements on the DBMS side. If you're unsure, you
eventually need to conduct a proof of concept.

Compare the SAPS requirement for the DBMS server with the SAPS that the
different VM types of Azure can provide. The information about the SAPS of the
different Azure VM types is documented in SAP Note 1928533 . The focus should
be on the DBMS VM first because the database layer is the layer in an SAP
NetWeaver system that doesn't scale out in most deployments. In contrast, the
SAP application layer can be scaled out. Individual DBMS guides describe the
recommended storage configurations.

Summarize your findings for:


The number of Azure VMs that you expect to use.
Individual VM family and VM SKUs for each SAP layer: DBMS, (A)SCS, and
application server.
I/O throughput measures or calculated storage capacity requirements.

HANA Large Instances service


Azure offers compute capabilities to run a scale-up or scale-out large HANA database
on a dedicated offering called SAP HANA on Azure Large Instances. This offering
extends the VMs that are available in Azure.

Note

The HANA Large Instances service is in sunset mode and doesn't accept new
customers. Providing units for existing HANA Large Instances customers is still
possible.

Storage for SAP on Azure


Azure VMs use various storage options for persistence. In simple terms, the storage for
VMs can be divided into persistent and temporary (non-persistent) types.

You can choose from multiple storage options for SAP workloads and for specific SAP
components. For more information, see Azure storage for SAP workloads. The article
covers the storage architecture for every part of SAP: operating system, application
binaries, configuration files, database data, log and trace files, and file interfaces with
other applications, whether stored on disk or accessed on file shares.

Temporary disk on VMs


Most Azure VMs for SAP offer a temporary disk that isn't a managed disk. Use a
temporary disk only for expendable data. The data on a temporary disk might be lost
during unforeseen maintenance events or during VM redeployment. The performance
characteristics of the temporary disk make them ideal for swap/page files of the
operating system.

No application or nonexpendable operating system data should be stored on a
temporary disk. In Windows environments, the temporary drive is typically accessed as
drive D. In Linux systems, the mount point often is /dev/sdb device, /mnt, or
/mnt/resource.

Some VMs don't offer a temporary drive. If you plan to use these VM sizes for SAP, you
might need to increase the size of the operating system disk. For more information, see
SAP Note 1928533 . For VMs that have a temporary disk, get information about the
temporary disk size and the IOPS and throughput limits for each VM series in Sizes for
virtual machines in Azure.

You can't directly resize between a VM series that has temporary disks and a VM series
that doesn't have temporary disks. Currently, a resize between two such VM families
fails. A resolution is to re-create the VM that doesn't have a temporary disk in the new
size by using an operating system disk snapshot. Keep all other data disks and the
network interface. Learn how to resize a VM size that has a local temporary disk to a VM
size that doesn't.

Network shares and volumes for SAP


SAP systems usually require one or more network file shares. The file shares typically are
one of the following options:

An SAP transport directory (/usr/sap/trans or TRANSDIR).


SAP volumes or shared sapmnt or saploc volumes to deploy multiple application
servers.
High-availability architecture volumes for SAP (A)SCS, SAP ERS, or a database
(/hana/shared).
File interfaces that run third-party applications for file import and export.

In these scenarios, we recommend that you use an Azure service, such as Azure Files or
Azure NetApp Files. If these services aren't available in the regions you choose, or if they
aren't available for your solution architecture, alternatives are to provide NFS or SMB file
shares from self-managed, VM-based applications or from third-party services. See SAP
Note 2015553 about limitations to SAP support if you use third-party services for
storage layers in an SAP system in Azure.

Due to the often critical nature of network shares, and because they often are a single
point of failure in a design (for high availability) or process (for the file interface), we
recommend that you rely on each Azure native service for its own availability, SLA, and
resiliency. In the planning phase, it's important to consider these factors:

NFS or SMB share design, including which shares to use per SAP system ID (SID),
per landscape, and per region.
Subnet sizing, including the IP requirement for private endpoints or dedicated
subnets for services like Azure NetApp Files.
Network routing to SAP systems and connected applications.
Use of a public or private endpoint for Azure Files.

For information about requirements and how to use an NFS or SMB share in a high-
availability scenario, see High availability.

Note

If you use Azure Files for your network shares, we recommend that you use a
private endpoint. In the unlikely event of a zonal failure, your NFS client
automatically redirects to a healthy zone. You don't have to remount the NFS or
SMB shares on your VMs.

Security for your SAP landscape


To protect your SAP workload on Azure, you need to plan multiple aspects of security:

Network segmentation and the security of each subnet and network interface.
Encryption on each layer within the SAP landscape.
Identity solution for end-user and administrative access and single sign-on
services.
Threat and operation monitoring.

The topics in this section aren't an exhaustive list of all available services, options,
and alternatives, but they describe several best practices that you should consider for
all SAP deployments in Azure. Depending on your enterprise or workload requirements,
there are other aspects to cover. For more information about security design, see the
following
workload requirements. For more information about security design, see the following
resources for general Azure guidance:

Azure Well-Architected Framework: Security pillar


Azure Cloud Adoption Framework: Security

Secure virtual networks by using security groups


Planning your SAP landscape in Azure should include some degree of network
segmentation, with virtual networks and subnets dedicated only to SAP workloads. Best
practices for subnet definition are described in Networking and in other Azure
architecture guides. We recommend that you use NSGs, with ASGs referenced in the NSG
rules, to permit inbound and outbound connectivity. When you design ASGs, each NIC on a VM
can be associated with multiple ASGs, so you can create different groups. For example,
create an ASG for DBMS VMs, which contains all database servers across your landscape.
Create another ASG for all VMs (application and DBMS) of a single SAP SID. This way,
you can define one NSG rule for the overall database ASG and another, more specific
rule only for the SID-specific ASG.

NSGs don't restrict performance with the rules that you define for the NSG. To
monitor traffic flow, you can optionally activate NSG flow logging, with logs evaluated
by a security information and event management (SIEM) or intrusion detection system
(IDS) of your choice to monitor and act on suspicious network activity.
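
The following Azure CLI sketch illustrates the ASG pattern that's described here. The
group and NSG names are placeholders, and the HANA ports shown are examples only:

    # Create ASGs for the application and database roles
    az network asg create --resource-group rg-sap --name asg-sap-app
    az network asg create --resource-group rg-sap --name asg-sap-db

    # Allow only application servers to reach the database ports
    az network nsg rule create --resource-group rg-sap --nsg-name nsg-sap-db \
      --name allow-app-to-db --priority 200 --direction Inbound \
      --access Allow --protocol Tcp \
      --source-asgs asg-sap-app --destination-asgs asg-sap-db \
      --destination-port-ranges 30013 30015

NICs are then associated with the ASGs in their IP configurations, so a rule follows the
role of the VM rather than individual IP addresses.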

Tip

Activate NSGs only on the subnet level. Although NSGs can be activated on both
the subnet level and the NIC level, activation on both is very often a hindrance in
troubleshooting situations when analyzing network traffic restrictions. Use NSGs on
the NIC level only in exceptional situations and when required.
Private endpoints for services
Many Azure PaaS services are accessed by default through a public endpoint. Although
the communication endpoint is located on the Azure back-end network, the endpoint is
exposed to the public internet. A private endpoint is a network interface inside your
own private virtual network. Through Azure Private Link, the private endpoint projects
the service into your virtual network. Selected PaaS services are then privately accessed
through the IP inside your network. Depending on the configuration, the service can
potentially be set to communicate through private endpoint only.

Using a private endpoint increases protection against data leakage, and it often
simplifies access from on-premises and peered networks. In many situations, the
network routing and process to open firewall ports, which often are needed for public
endpoints, is simplified. The resources are inside your network already because they're
accessed by a private endpoint.

To learn which Azure services offer the option to use a private endpoint, see Private Link
available services. For NFS or SMB with Azure Files, we recommend that you always use
private endpoints for SAP workloads. To learn about charges that are incurred by using
the service, see Private endpoint pricing . Some Azure services might optionally
include the cost with the service. This information is included in a service's pricing
information.
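
A minimal sketch of creating a private endpoint for the file service of a storage
account by using the Azure CLI; all resource names are placeholders:

    # Resolve the storage account's resource ID
    storage_id=$(az storage account show --resource-group rg-sap \
      --name sapfileshares --query id --output tsv)

    # Create the private endpoint for the file sub-resource in a dedicated subnet
    az network private-endpoint create --resource-group rg-sap \
      --name pe-sapfileshares --vnet-name vnet-sap --subnet snet-endpoints \
      --private-connection-resource-id "$storage_id" \
      --group-id file --connection-name pe-sapfileshares-conn

Plan for the matching private DNS zone (for Azure Files, privatelink.file.core.windows.net)
so that clients resolve the share name to the private IP.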

Encryption
Depending on your corporate policies, encryption beyond the default options in Azure
might be required for your SAP workloads.

Encryption for infrastructure resources


By default, managed disks and blob storage in Azure are encrypted with a platform-
managed key (PMK). In addition, bring your own key (BYOK) encryption for managed
disks and blob storage is supported for SAP workloads in Azure. For managed disk
encryption, you can choose from different options, depending on your corporate
security requirements. Azure encryption options include:

Storage-side encryption (SSE) with a PMK (SSE-PMK)
SSE with a customer-managed key (SSE-CMK)
Double encryption at rest
Host-based encryption
For more information, including a description of Azure Disk Encryption, see a
comparison of Azure encryption options.
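
For SSE-CMK, the typical flow is to create a disk encryption set that references your
key in Azure Key Vault, and then to reference that set when you deploy the VM. A sketch
with placeholder names, assuming the vault and key already exist and the encryption
set's identity has been granted access to the key:

    # Create a disk encryption set that points to a Key Vault key
    az disk-encryption-set create --resource-group rg-sap --name des-sap \
      --key-url "$key_id" --source-vault kv-sap

    # Reference the set at deployment so the VM's disks use SSE-CMK
    az vm create --resource-group rg-sap --name sap-db-vm1 \
      --image SUSE:sles-sap-15-sp5:gen2:latest --size Standard_M64s \
      --data-disk-sizes-gb 1024 \
      --os-disk-encryption-set des-sap --data-disk-encryption-sets des-sap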

Note

Currently, don't use host-based encryption on a VM that's in the M-series VM
family when running Linux, due to a potential performance limitation. The use
of SSE-CMK encryption for managed disks is unaffected by this limitation.

For SAP deployments on Linux systems, don't use Azure Disk Encryption. Azure Disk
Encryption entails encryption running inside the SAP VMs by using CMKs from Azure
Key Vault. For Linux, Azure Disk Encryption doesn't support the operating system images
that are used for SAP workloads. Azure Disk Encryption can be used on Windows
systems with SAP workloads, but don't combine Azure Disk Encryption with database
native encryption. We recommend that you use database native encryption instead of
Azure Disk Encryption. For more information, see the next section.

Similar to managed disk encryption, Azure Files encryption at rest (SMB and NFS) is
available with PMKs or CMKs.

For SMB network shares, carefully review Azure Files and operating system
dependencies with SMB versions because the configuration affects support for in-transit
encryption.

Important

The importance of a careful plan to store and protect the encryption keys if you use
customer-managed encryption can't be overstated. Without encryption keys,
encrypted resources like disks are inaccessible and can lead to data loss. Carefully
consider protecting the keys and access to the keys to only privileged users or
services.

Encryption for SAP components

Encryption on the SAP level can be separated into two layers:

DBMS encryption
Transport encryption

For DBMS encryption, each database that's supported for an SAP NetWeaver or an SAP
S/4HANA deployment supports native encryption. Transparent database encryption is
entirely independent of any infrastructure encryption that's in place in Azure. You can
use SSE and database encryption at the same time. When you use encryption, the
location, storage, and safekeeping of encryption keys is critically important. Any loss of
encryption keys leads to data loss because you won't be able to start or recover your
database.

Some databases might not have a database encryption method or might not require a
dedicated setting to enable. For other databases, DBMS backups might be encrypted
implicitly when database encryption is activated. See the following SAP documentation
to learn how to enable and use transparent database encryption:

SAP HANA Data and Log Volume Encryption


SQL Server: SAP Note 1380493
Oracle: SAP Note 974876
IBM Db2: SAP Note 1555903
SAP ASE: SAP Note 1972360

Contact SAP or your DBMS vendor for support on how to enable, use, or troubleshoot
software encryption.

Important

It can't be overstated how important it is to have a careful plan to store and protect
your encryption keys. Without encryption keys, the database or SAP software might
be inaccessible and you might lose data. Carefully consider how to protect the keys.
Allow access to the keys only by privileged users or services.

Transport or communication encryption can be applied to SQL Server connections
between SAP engines and the DBMS. Similarly, you can encrypt connections from the
SAP presentation layer (SAP GUI secure network connection, or SNC) or an HTTPS
connection to a web front end. See the application vendor's documentation to enable
and manage encryption in transit.

Threat monitoring and alerting


To deploy and use threat monitoring and alerting solutions, begin by using your
organization's architecture. Azure services provide threat protection and a security view
that you can incorporate into your overall SAP deployment plan. Microsoft Defender for
Cloud addresses the threat protection requirement. Defender for Cloud typically is part
of an overall governance model for an entire Azure deployment, not just for SAP
components.
For more information about security information and event management (SIEM) and
security orchestration, automation, and response (SOAR) solutions, see Microsoft
Sentinel solutions for SAP integration.

Security software inside SAP VMs


SAP Note 2808515 for Linux and SAP Note 106267 for Windows describe
requirements and best practices when you use virus scanners or security software on
SAP servers. We recommend that you follow the SAP recommendations when you
deploy SAP components in Azure.

High availability
SAP high availability in Azure has two components:

Azure infrastructure high availability: High availability of Azure compute (VMs),
network, and storage services, and how they can increase SAP application
availability.

SAP application high availability: How it can be combined with Azure
infrastructure high availability by using service healing. Examples of high
availability in SAP software components include:
An SAP (A)SCS and SAP ERS instance
The database server

For more information about high availability for SAP in Azure, see the following articles:

Supported scenarios: High-availability protection for the SAP DBMS layer
Supported scenarios: High availability for SAP Central Services
Supported scenarios: Supported storage for SAP Central Services scenarios
Supported scenarios: Multi-SID SAP Central Services failover clusters
Azure Virtual Machines high availability for SAP NetWeaver
High-availability architecture and scenarios for SAP NetWeaver
Utilize Azure infrastructure VM restart to achieve higher availability of an SAP
system without clustering
SAP workload configurations with Azure availability zones
Public endpoint connectivity for virtual machines by using Azure Standard Load
Balancer in SAP high-availability scenarios

Pacemaker on Linux and Windows Server Failover Clustering are the only high-
availability frameworks for SAP workloads that are directly supported by Microsoft on
Azure. Any other high-availability framework isn't supported by Microsoft and needs
design, implementation details, and operations support from the vendor. For more
information, see Supported scenarios for SAP in Azure.

Disaster recovery
Often, SAP applications run some of the most business-critical processes in an enterprise.
Based on their importance and the time required to be operational again after an
unforeseen interruption, business continuity and disaster recovery (BCDR) scenarios
should be carefully planned.

To learn how to address this requirement, see Disaster recovery overview and
infrastructure guidelines for SAP workload.

Backup
As part of your BCDR strategy, backup for your SAP workload must be an integral part
of any planned deployment. The backup solution must cover all layers of an SAP
solution stack: VM, operating system, SAP application layer, DBMS layer, and any shared
storage solution. Backup for Azure services that are used by your SAP workload, and for
other crucial resources like encryption and access keys also must be part of your backup
and BCDR design.

Azure Backup offers PaaS solutions for backup:

VM configuration, operating system, and SAP application layer (data residing on
managed disks) through Azure Backup for VM. Review the support matrix to verify
that your architecture can use this solution.
SQL Server and SAP HANA database data and log backup. It includes support for
database replication technologies, such as HANA system replication or SQL Always
On, and cross-region support for paired regions.
File share backup through Azure Files. Verify support for NFS or SMB and other
configuration details.

Alternatively, if you deploy Azure NetApp Files, backup options are available on the
volume level, including SAP HANA and Oracle DBMS integration with a scheduled
backup.
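For illustration, the following minimal sketch enables Azure Backup for a VM by using
Azure CLI. The vault, policy, and VM names are placeholders, and database-level backup
for SQL Server or SAP HANA requires additional workload registration steps that aren't
shown here.

Azure CLI

# Create a Recovery Services vault to hold the backups.
az backup vault create --name <vault-name> --resource-group <resource-group> \
    --location <region>

# Enable VM-level backup with a backup policy from the vault.
az backup protection enable-for-vm --vault-name <vault-name> \
    --resource-group <resource-group> --vm <vm-name> --policy-name DefaultPolicy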

Azure Backup solutions offer a soft-delete option to prevent malicious or accidental
deletion and to prevent data loss. Soft-delete is also available for file shares that you
deploy by using Azure Files.
Backup options are also available if you create and manage a solution yourself or if
you use third-party software. One option is to use Azure Storage services,
including immutable storage for blob data. This self-managed option currently
is required as a DBMS backup option for some databases, like SAP ASE or IBM
Db2.

Use the recommendations in Azure best practices to protect and validate against
ransomware attacks.

 Tip

Ensure that your backup strategy includes protecting your deployment automation,
encryption keys for Azure resources, and transparent database encryption if used.

Cross-region backup
For any cross-region backup requirement, determine the Recovery Time Objective (RTO)
and Recovery Point Objective (RPO) that are offered by the solution and whether they
match your BCDR design and needs.

SAP migration to Azure


It isn't possible to describe all migration approaches and options for the large variety of
SAP products, version dependencies, and native operating system and DBMS
technologies that are available. The project team for your organization and
representatives from your service provider side should consider several techniques for a
smooth SAP migration to Azure.

Test performance during migration. An important part of SAP migration planning
is technical performance testing. The migration team needs to allow sufficient time
and availability for key personnel to run application and technical testing of the
migrated SAP system, including connected interfaces and applications. For a
successful SAP migration, it's critical to compare the premigration and post-
migration runtime and accuracy of key business processes in a test environment.
Use the information to optimize the processes before you migrate the production
environment.

Use Azure services for SAP migration. Some VM-based workloads are migrated
without change to Azure by using services like Azure Migrate or Azure Site
Recovery, or a third-party tool. Diligently confirm that the operating system version
and the SAP workload it runs are supported by the service.
Often, any database workload is intentionally not supported because a service
can't guarantee database consistency. If the DBMS type is supported by the
migration service, the database change or churn rate often is too high. Most busy
SAP systems won't meet the change rate that migration tools allow. Issues might
not be seen or discovered until production migration. In many situations, some
Azure services aren't suitable for migrating SAP systems. Azure Site Recovery and
Azure Migrate don't have validation for a large-scale SAP migration. A proven SAP
migration methodology is to rely on DBMS replication or SAP migration tools.

A deployment in Azure instead of a basic VM migration is preferable and easier to
accomplish than an on-premises migration. Automated deployment frameworks
like Azure Center for SAP solutions and Azure deployment automation framework
allow quick execution of automated tasks. Migrating your SAP landscape to newly
deployed infrastructure by using DBMS-native replication technologies like HANA
system replication, DBMS backup and restore, or SAP migration tools uses
established technical knowledge of your SAP system.

Infrastructure scale-up. During an SAP migration, having more infrastructure
capacity can help you deploy more quickly. The project team should consider
scaling up the VM size to provide more CPU and memory. The team also should
consider scaling up VM aggregate storage and network throughput. Similarly, on
the VM level, consider storage elements like individual disks to increase
throughput with on-demand bursting and performance tiers for Premium SSD v1.
Increase IOPS and throughput values above the configured values if you use
Premium SSD v2. Enlarge NFS and SMB file shares to increase performance limits.
Keep in mind that Azure managed disks can't be reduced in size, and that reductions
in size, performance tiers, and throughput KPIs can have various cool-down times.
The sketch after this item illustrates these adjustments.
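The following hedged sketch shows what these temporary scale-up adjustments can
look like with Azure CLI. All names are placeholders, and the flags that apply depend on
the disk type: on-demand bursting and performance tiers for Premium SSD v1, and
configurable IOPS and throughput for Premium SSD v2.

Azure CLI

# Premium SSD v1: enable on-demand bursting for the migration window.
az disk update --name <data-disk> --resource-group <resource-group> \
    --enable-bursting true

# Premium SSD v1: temporarily raise the performance tier without resizing.
az disk update --name <data-disk> --resource-group <resource-group> --tier P50

# Premium SSD v2: raise IOPS and throughput above the configured values.
az disk update --name <v2-data-disk> --resource-group <resource-group> \
    --disk-iops-read-write 8000 --disk-mbps-read-write 400

Remember the cool-down times mentioned earlier before you reduce these values again
after the migration.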

Optimize network and data copy. Migrating an SAP system to Azure always
involves moving a large amount of data. The data might be database and file
backups or replication, an application-to-application data transfer, or an SAP
migration export. Depending on the migration process you use, you need to
choose the correct network path to move the data. For many data move
operations, using the internet instead of a private network is the quickest path to
copy data securely to Azure storage.

Using ExpressRoute or a VPN can lead to bottlenecks:

The migration data uses too much bandwidth and interferes with user access to
workloads that are running in Azure.
Network bottlenecks on-premises, like a firewall or throughput limiting, often
are discovered only during migration.
Regardless of the network connection that's used, single-stream network
performance for a data move often is low. To increase the data transfer speed over
multiple TCP streams, use tools that can support multiple streams. Apply
optimization techniques that are described in SAP documentation and in many
blog posts on this topic.
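As an example of a multistream tool, AzCopy copies data to Azure storage over multiple
parallel TCP connections. The following sketch assumes a hypothetical local backup
directory and a target container with a SAS token; tune the concurrency value to your
available bandwidth.

Console

# Raise the number of parallel connections that AzCopy uses.
export AZCOPY_CONCURRENCY_VALUE=32

# Copy a local backup directory to an Azure blob container over multiple streams.
azcopy copy "/backups/<SID>" \
    "https://<account>.blob.core.windows.net/<container>?<sas-token>" --recursive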

 Tip

In the planning stage, it's important to consider any dedicated migration networks
that you'll use for large data transfers to Azure. Examples include backups or
database replication or using a public endpoint for data transfers to Azure storage.
Anticipate and mitigate the impact of the migration on network paths for your
users and applications. As part of your network planning, consider all
during migration.

Support and operations for SAP


A few other areas are important to consider before and during SAP deployment in
Azure.

Azure VM extension for SAP


Azure Monitoring Extension, Enhanced Monitoring, and Azure Extension for SAP all refer
to a VM extension that you need to deploy to provide some basic data about the Azure
infrastructure to the SAP host agent. SAP notes might refer to the extension as
Monitoring Extension or Enhanced monitoring. In Azure, it's called Azure Extension for
SAP. For support purposes, the extension must be installed on all Azure VMs that run an
SAP workload. To learn more, see Azure VM extension for SAP.
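For example, one way to install and verify the extension is through the aem extension
for Azure CLI, as sketched here with placeholder names. See Azure VM extension for
SAP for the authoritative steps for your scenario.

Azure CLI

# Add the CLI extension that manages the Azure Extension for SAP.
az extension add --name aem

# Install the new VM Extension for SAP on a VM and verify the configuration.
az vm aem set --resource-group <resource-group> --name <vm-name> --install-new-extension
az vm aem verify --resource-group <resource-group> --name <vm-name>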

SAProuter for SAP support


Operating an SAP landscape in Azure requires connectivity to and from SAP for support
purposes. Typically, connectivity is in the form of an SAProuter connection, either
through an encrypted network channel over the internet or via a private VPN
connection to SAP. For best practices and for an example implementation of SAProuter
in Azure, see your architecture scenario in Inbound and outbound internet connections
for SAP on Azure.
Next steps
Deploy an SAP workload on Azure
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
SAP workloads on Azure: Planning and deployment checklist
Virtual machine scale sets for SAP workload
Azure Virtual Machines deployment for
SAP NetWeaver
Article • 04/25/2023

Azure Virtual Machines is the solution for organizations that need compute and storage
resources in minimal time and without lengthy procurement cycles. You can use Azure
Virtual Machines to deploy classical applications, like SAP NetWeaver-based
applications, in Azure. Extend an application's reliability and availability without
additional on-premises resources. Azure Virtual Machines supports cross-premises
connectivity, so you can integrate Azure Virtual Machines into your organization's on-
premises domains, private clouds, and SAP system landscape.

In this article, we cover the steps to deploy SAP applications on virtual machines (VMs)
in Azure, including alternate deployment options and troubleshooting. This article builds
on the information in Azure Virtual Machines planning and implementation for SAP
NetWeaver. It also complements SAP installation documentation and SAP Notes, which
are the primary resources for installing and deploying SAP software.

Prerequisites
Setting up an Azure virtual machine for SAP software deployment involves multiple
steps and resources. Before you start, make sure that you meet the prerequisites for
installing SAP software on virtual machines in Azure.

Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal.
For both tools, you need a local computer running Windows 7 or a later version of
Windows. If you want to manage only Linux VMs and you want to use a Linux computer
for this task, you can use Azure CLI.

Internet connection
To download and run the tools and scripts that are required for SAP software
deployment, you must be connected to the Internet. The Azure VM that is running the
Azure Extension for SAP also needs access to the Internet. If the Azure VM is part of an
Azure virtual network or on-premises domain, make sure that the relevant proxy settings
are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.

Topology and networking


You need to define the topology and architecture of the SAP deployment in Azure:

Azure storage accounts to be used
Virtual network where you want to deploy the SAP system
Resource group to which you want to deploy the SAP system
Azure region where you want to deploy the SAP system
SAP configuration (two-tier or three-tier)
VM sizes and the number of additional data disks to be mounted to the VMs
SAP Correction and Transport System (CTS) configuration

Create and configure Azure storage accounts (if required) or Azure virtual networks
before you begin the SAP software deployment process. For information about how to
create and configure these resources, see Azure Virtual Machines planning and
implementation for SAP NetWeaver.

SAP sizing
Know the following information, for SAP sizing:

Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the
SAP Application Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth of eventual communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed
SAP system

Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application
resources in your Azure subscription. For more information, see Azure Resource
Manager overview.

Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP
resources:

SAP Note 1928533 , which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure

SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.

SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 1409604 has the required SAP Host Agent version for Windows in
Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.

SAP Note 2243692 has information about SAP licensing on Linux in Azure.

SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.

SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.

SAP Note 2069760 has general information about Oracle Linux 7.x.

SAP Note 1999351 has additional troubleshooting information for the Azure
Extension for SAP.

SAP Note 1597355 has general information about swap-space for Linux.

SAP on Azure SCN page has news and a collection of useful resources.

SAP Community WIKI has all required SAP Notes for Linux.

SAP-specific PowerShell cmdlets that are part of Azure PowerShell.

SAP-specific Azure CLI commands that are part of Azure CLI.

Windows resources
These Microsoft articles cover SAP deployments in Azure:

Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver (this article)
Azure Virtual Machines DBMS deployment for SAP NetWeaver

Deployment scenarios for SAP software on Azure VMs
You have multiple options for deploying VMs and associated disks in Azure. It's
important to understand the differences between deployment options, because you
might take different steps to prepare your VMs for deployment based on the
deployment type you choose.

Scenario 1: Deploying a VM from the Azure Marketplace for SAP
You can use an image provided by Microsoft or by a third party in the Azure
Marketplace to deploy your VM. The Marketplace offers some standard OS images of
Windows Server and different Linux distributions. You also can deploy an image that
includes database management system (DBMS) SKUs, for example, Microsoft SQL
Server. For more information about using images with DBMS SKUs, see Azure Virtual
Machines DBMS deployment for SAP NetWeaver.

The following flowchart shows the SAP-specific sequence of steps for deploying a VM
from the Azure Marketplace:

Create a virtual machine by using the Azure portal


The easiest way to create a new virtual machine with an image from the Azure
Marketplace is by using the Azure portal.

1. Navigate to Create a resource in the Azure portal . Or, in the Azure portal menu,
select + New.
2. Select Compute, and then select the type of operating system you want to deploy.
For example, Windows Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12),
Red Hat Enterprise Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2. The default list view
does not show all supported operating systems. Select see all for a full list. For
more information about supported operating systems for SAP software
deployment, see SAP Note 1928533 .
3. On the next page, review terms and conditions.
4. In the Select a deployment model box, select Resource Manager.
5. Select Create.

The wizard guides you through setting the required parameters to create the virtual
machine, in addition to all required resources, like network interfaces and storage
accounts. Some of these parameters are:

1. Basics:

Name: The name of the resource (the virtual machine name).
VM disk type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Username and password or SSH public key: Enter the username and
password of the user that is created during the provisioning. For a Linux
virtual machine, you can enter the public Secure Shell (SSH) key that you use
to sign in to the machine.
Subscription: Select the subscription that you want to use to provision the
new virtual machine.
Resource group: The name of the resource group for the VM. You can enter
either the name of a new resource group or the name of a resource group
that already exists.
Location: Where to deploy the new virtual machine. If you want to connect
the virtual machine to your on-premises network, make sure you select the
location of the virtual network that connects Azure to your on-premises
network. For more information, see Microsoft Azure networking.

2. Size:

For a list of supported VM types, see SAP Note 1928533 . Be sure you select the
correct VM type if you want to use Azure Premium Storage. Not all VM types
support Premium Storage. For more information, see Azure storage for SAP
workloads.

3. Settings:

Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For
more information about Managed Disks, see chapter Managed Disks in the
planning guide.
Storage account: Select an existing storage account or create a new one.
Not all storage types work for running SAP applications. For more
information about storage types, see Storage structure of a VM for RDBMS
Deployments.
Network
Virtual network and Subnet: To integrate the virtual machine with your
intranet, select the virtual network that is connected to your on-premises
network.
Public IP address: Select the public IP address that you want to use, or
enter parameters to create a new public IP address. You can use a public IP
address to access your virtual machine over the Internet. Make sure that
you also create a network security group to help secure access to your
virtual machine.
Network security group: For more information, see Control network traffic
flow with network security groups.
Extensions: You can install virtual machine extensions by adding them to the
deployment. You do not need to add extensions in this step. The extensions
required for SAP support are installed later. See chapter Configure the Azure
Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a
new availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.

4. Summary:

Review your selections, and then select OK.

Your virtual machine is deployed in the resource group you selected.

Create a virtual machine by using a template


You can create a virtual machine by using one of the SAP templates published in the
azure-quickstart-templates GitHub repository . You also can manually create a virtual
machine by using the Azure portal, PowerShell, or Azure CLI.
Two-tier configuration (only one virtual machine) template (sap-2-tier-marketplace-image)

To create a two-tier system by using only one virtual machine, use this template.

Two-tier configuration (only one virtual machine) template - Managed Disks (sap-2-tier-marketplace-image-md)

To create a two-tier system by using only one virtual machine and Managed Disks, use this template.

Three-tier configuration (multiple virtual machines) template (sap-3-tier-marketplace-image)

To create a three-tier system by using multiple virtual machines, use this template.

Three-tier configuration (multiple virtual machines) template - Managed Disks (sap-3-tier-marketplace-image-md)

To create a three-tier system by using multiple virtual machines and Managed Disks, use this template.

In the Azure portal, enter the following parameters for the template:

1. Basics:

Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can
create a new resource group, or you can select an existing resource group in
the subscription.
Location: Where to deploy the template. If you selected an existing resource
group, the location of that resource group is used.

2. Settings:

SAP System ID: The SAP System ID (SID).

OS type: The operating system you want to deploy, for example, Windows
Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise
Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2.

The list view does not show all supported operating systems. For more
information about supported operating systems for SAP software
deployment, see SAP Note 1928533 .

SAP system size: The size of the SAP system.


The number of SAPS the new system provides. If you are not sure how many
SAPS the system requires, ask your SAP Technology Partner or System
Integrator.

System availability (three-tier template only): The system availability.

Select HA for a configuration that is suitable for a high-availability
installation. Two database servers and two servers for ABAP SAP Central
Services (ASCS) are created.

Storage type (two-tier template only): The type of storage to use.

For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see these resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine
workloads
Introduction to Microsoft Azure Storage

Admin username and Admin password: A username and password. A new


user is created, for signing in to the virtual machine.

New or existing subnet: Determines whether a new virtual network and


subnet are created or an existing subnet is used. If you already have a virtual
network that is connected to your on-premises network, select Existing.

Subnet ID: If you want to deploy the VM into an existing VNet where you
have a subnet defined the VM should be assigned to, name the ID of that
specific subnet. The ID usually looks like this: /subscriptions/<subscription
id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>

3. Terms and conditions:


Review and accept the legal terms.

4. Select Purchase.
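If you need to look up the value for the Subnet ID parameter, the following hedged
sketch shows one way to retrieve it by using Azure CLI with placeholder names:

Azure CLI

# Return the full resource ID of an existing subnet.
az network vnet subnet show --resource-group <resource-group> \
    --vnet-name <virtual-network-name> --name <subnet-name> \
    --query id --output tsv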

The Azure VM Agent is deployed by default when you use an image from the Azure
Marketplace.

Configure proxy settings


Depending on how your on-premises network is configured, you might need to set up
the proxy on your VM. If your VM is connected to your on-premises network via VPN or
ExpressRoute, the VM might not be able to access the Internet, and won't be able to
download the required VM extensions or collect Azure infrastructure information for the
SAP Host agent via the SAP extension for Azure. For more information, see Configure
the proxy.

Join a domain (Windows only)


If your Azure deployment is connected to an on-premises Active Directory or DNS
instance via an Azure site-to-site VPN connection or ExpressRoute (this is called cross-
premises in Azure Virtual Machines planning and implementation for SAP NetWeaver), it
is expected that the VM is joined to an on-premises domain. For more information about
considerations for this task, see Join a VM to an on-premises domain (Windows only).

Configure VM Extension
To be sure SAP supports your environment, set up the Azure Extension for SAP as
described in Configure the Azure Extension for SAP.

Post-deployment steps
After you create the VM and the VM is deployed, you need to install the required
software components in the VM. Because of the deployment/software installation
sequence in this type of VM deployment, the software to be installed must already be
available, either in Azure, on another VM, or as a disk that can be attached. Or, consider
using a cross-premises scenario, in which connectivity to the on-premises assets
(installation shares) is given.

After you deploy your VM in Azure, follow the same guidelines and tools to install the
SAP software on your VM as you would in an on-premises environment. To install SAP
software on an Azure VM, both SAP and Microsoft recommend that you upload and
store the SAP installation media on Azure VHDs or Managed Disks, or that you create an
Azure VM that works as a file server that has all the required SAP installation media.

Scenario 2: Deploying a VM with a custom image for SAP


Because different versions of an operating system or DBMS have different patch
requirements, the images you find in the Azure Marketplace might not meet your needs.
You might instead want to create a VM by using your own OS/DBMS VM image, which
you can deploy again later. You use different steps to create a private image for Linux
than to create one for Windows.

Windows

To prepare a Windows image that you can use to deploy multiple virtual machines,
the Windows settings (like Windows SID and hostname) must be abstracted or
generalized on the on-premises VM. You can use sysprep to do this.

Linux

To prepare a Linux image that you can use to deploy multiple virtual machines,
some Linux settings must be abstracted or generalized on the on-premises VM. You
can use waagent -deprovision to do this. For more information, see Capture a Linux
virtual machine running on Azure and the Azure Linux agent user guide.
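For illustration, the following sketch outlines the Linux flow end to end: deprovision
inside the VM, then generalize and capture the image by using Azure CLI. All names are
placeholders, and the waagent step is destructive, so run it only on a VM that you
intend to capture.

Console

# Inside the VM: remove machine-specific settings (destructive), then sign out.
sudo waagent -deprovision+user -force

# From a management workstation: deallocate and mark the VM as generalized.
az vm deallocate --resource-group <resource-group> --name <vm-name>
az vm generalize --resource-group <resource-group> --name <vm-name>

# Capture a managed image that you can reuse for new deployments.
az image create --resource-group <resource-group> --name <image-name> \
    --source <vm-name>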

You can prepare and create a custom image, and then use it to create multiple new VMs.
This is described in Azure Virtual Machines planning and implementation for SAP
NetWeaver. Set up your database content either by using SAP Software Provisioning
Manager to install a new SAP system (restores a database backup from a disk that's
attached to the virtual machine) or by directly restoring a database backup from Azure
storage, if your DBMS supports it. For more information, see Azure Virtual Machines
DBMS deployment for SAP NetWeaver. If you have already installed an SAP system on
your on-premises VM (especially for two-tier systems), you can adapt the SAP system
settings after the deployment of the Azure VM by using the System Rename procedure
supported by SAP Software Provisioning Manager (SAP Note 1619720 ). Otherwise,
you can install the SAP software after you deploy the Azure VM.

The following flowchart shows the SAP-specific sequence of steps for deploying a VM
from a custom image:

Create a virtual machine by using the Azure portal

The easiest way to create a new virtual machine from a Managed Disk image is by using
the Azure portal. For more information on how to create a managed disk image, read
Capture a managed image of a generalized VM in Azure.
1. Navigate to Images in the Azure portal . Or, in the Azure portal menu, select
Images.
2. Select the Managed Disk image you want to deploy and click on Create VM

The wizard guides you through setting the required parameters to create the virtual
machine, in addition to all required resources, like network interfaces and storage
accounts. Some of these parameters are:

1. Basics:

Name: The name of the resource (the virtual machine name).
VM disk type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Username and password or SSH public key: Enter the username and
password of the user that is created during the provisioning. For a Linux
virtual machine, you can enter the public Secure Shell (SSH) key that you use
to sign in to the machine.
Subscription: Select the subscription that you want to use to provision the
new virtual machine.
Resource group: The name of the resource group for the VM. You can enter
either the name of a new resource group or the name of a resource group
that already exists.
Location: Where to deploy the new virtual machine. If you want to connect
the virtual machine to your on-premises network, make sure you select the
location of the virtual network that connects Azure to your on-premises
network. For more information, see Microsoft Azure networking in Azure
Virtual Machines planning and implementation for SAP NetWeaver.

2. Size:

For a list of supported VM types, see SAP Note 1928533 . Be sure you select the
correct VM type if you want to use Azure Premium Storage. Not all VM types
support Premium Storage. For more information, see Azure storage for SAP
workloads.

3. Settings:

Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For
more information about Managed Disks, see chapter Managed Disks in the
planning guide.
Network
Virtual network and Subnet: To integrate the virtual machine with your
intranet, select the virtual network that is connected to your on-premises
network.
Public IP address: Select the public IP address that you want to use, or
enter parameters to create a new public IP address. You can use a public IP
address to access your virtual machine over the Internet. Make sure that
you also create a network security group to help secure access to your
virtual machine.
Network security group: For more information, see Control network traffic
flow with network security groups.
Extensions: You can install virtual machine extensions by adding them to the
deployment. You do not need to add extensions in this step. The extensions
required for SAP support are installed later. See chapter Configure the Azure
Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a
new availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.

4. Summary:

Review your selections, and then select OK.

Your virtual machine is deployed in the resource group you selected.

Create a virtual machine by using a template

To create a deployment by using a private OS image from the Azure portal, use one of
the following SAP templates. These templates are published in the azure-quickstart-
templates GitHub repository . You also can manually create a virtual machine, by using
PowerShell.

Two-tier configuration (only one virtual machine) template (sap-2-tier-user-image)

To create a two-tier system by using only one virtual machine, use this template.

Two-tier configuration (only one virtual machine) template - Managed Disk Image (sap-2-tier-user-image-md)

To create a two-tier system by using only one virtual machine and a Managed Disk image, use this template.

Three-tier configuration (multiple virtual machines) template (sap-3-tier-user-image)

To create a three-tier system by using multiple virtual machines or your own OS image, use this template.

Three-tier configuration (multiple virtual machines) template - Managed Disk Image (sap-3-tier-user-image-md)

To create a three-tier system by using multiple virtual machines or your own OS image and a Managed Disk image, use this template.

In the Azure portal, enter the following parameters for the template:

1. Basics:

Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can
create a new resource group or select an existing resource group in the
subscription.
Location: Where to deploy the template. If you selected an existing resource
group, the location of that resource group is used.

2. Settings:

SAP System ID: The SAP System ID.

OS type: The operating system type you want to deploy (Windows or Linux).

SAP system size: The size of the SAP system.

The number of SAPS the new system provides. If you are not sure how many
SAPS the system requires, ask your SAP Technology Partner or System
Integrator.

System availability (three-tier template only): The system availability.

Select HA for a configuration that is suitable for a high-availability
installation. Two database servers and two servers for ASCS are created.
Storage type (two-tier template only): The type of storage to use.

For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure virtual machine
workloads
Introduction to Microsoft Azure Storage

User image VHD URI (unmanaged disk image template only): The URI of the
private OS image VHD, for example,
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.

User image storage account (unmanaged disk image template only): The
name of the storage account where the private OS image is stored, for
example, <accountname> in
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.

userImageId (managed disk image template only): ID of the Managed Disk
image you want to use.

Admin username and Admin password: The username and password.

A new user is created, for signing in to the virtual machine.

New or existing subnet: Determines whether a new virtual network and
subnet are created or an existing subnet is used. If you already have a virtual
network that is connected to your on-premises network, select Existing.

Subnet ID: If you want to deploy the VM into an existing VNet where you
have a subnet defined that the VM should be assigned to, provide the ID of that
specific subnet. The ID usually looks like this: /subscriptions/<subscription
id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>

3. Terms and conditions:


Review and accept the legal terms.

4. Select Purchase.

Install the VM Agent (Linux only)


To use the templates described in the preceding section, the Linux Agent must already
be installed in the user image, or the deployment will fail. Download and install the VM
Agent in the user image as described in Download, install, and enable the Azure VM
Agent. If you don't use the templates, you also can install the VM Agent later.

Join a domain (Windows only)

If your Azure deployment is connected to an on-premises Active Directory or DNS
instance via an Azure site-to-site VPN connection or Azure ExpressRoute (this is called
cross-premises in Azure Virtual Machines planning and implementation for SAP
NetWeaver), it is expected that the VM is joined to an on-premises domain. For more
information about considerations for this step, see Join a VM to an on-premises domain
(Windows only).

Configure proxy settings

Depending on how your on-premises network is configured, you might need to set up
the proxy on your VM. If your VM is connected to your on-premises network via VPN or
ExpressRoute, the VM might not be able to access the Internet, and won't be able to
download the required VM extensions or collect Azure infrastructure information for the
SAP Host agent via the SAP extension for Azure. For more information, see Configure
the proxy.

Configure Azure VM Extension for SAP


To be sure SAP supports your environment, set up the Azure Extension for SAP as
described in Configure the Azure Extension for SAP.

Scenario 3: Moving an on-premises VM by using a non-generalized Azure VHD with SAP
In this scenario, you plan to move a specific SAP system from an on-premises
environment to Azure. You can do this by uploading the VHD that has the OS, the SAP
binaries, and eventually the DBMS binaries, plus the VHDs with the data and log files of
the DBMS, to Azure. Unlike the scenario described in Scenario 2: Deploying a VM with a
custom image for SAP, in this case, you keep the hostname, SAP SID, and SAP user
accounts in the Azure VM, because they were configured in the on-premises
environment. You do not need to generalize the OS. This scenario applies most often to
cross-premises scenarios where part of the SAP landscape runs on-premises and part of
it runs on Azure.
In this scenario, the VM Agent is not automatically installed during deployment. Because
the VM Agent and the Azure Extension for SAP are required to run SAP NetWeaver on
Azure, you need to download, install, and enable both components manually after you
create the virtual machine.

For more information about the Azure VM Agent, see the following resources.

Windows

Azure Virtual Machine Agent overview

Linux

Azure Linux Agent User Guide

The following flowchart shows the sequence of steps for moving an on-premises VM by
using a non-generalized Azure VHD:

If the disk is already uploaded and defined in Azure (see Azure Virtual Machines
planning and implementation for SAP NetWeaver), do the tasks described in the next
few sections.

Create a virtual machine


To create a deployment by using a private OS disk through the Azure portal, use the SAP
template published in the azure-quickstart-templates GitHub repository . You also can
manually create a virtual machine, by using PowerShell.

Two-tier configuration (only one virtual machine) template (sap-2-tier-user-disk)

To create a two-tier system by using only one virtual machine, use this template.

Two-tier configuration (only one virtual machine) template - Managed Disk (sap-2-tier-user-disk-md)

To create a two-tier system by using only one virtual machine and a Managed Disk, use this template.
In the Azure portal, enter the following parameters for the template:

1. Basics:

Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can
create a new resource group or select an existing resource group in the
subscription.
Location: Where to deploy the template. If you selected an existing resource
group, the location of that resource group is used.

2. Settings:

SAP System ID: The SAP System ID.

OS type: The operating system type you want to deploy (Windows or Linux).

SAP system size: The size of the SAP system.

The number of SAPS the new system provides. If you are not sure how many
SAPS the system requires, ask your SAP Technology Partner or System
Integrator.

Storage type (two-tier template only): The type of storage to use.

For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine
workloads
Introduction to Microsoft Azure Storage

OS disk VHD URI (unmanaged disk template only): The URI of the private OS
disk, for example,
https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.

OS disk Managed Disk ID (managed disk template only): The ID of the
Managed Disk OS disk, for example, /subscriptions/92d102f7-81a5-4df7-9877-
54987ba97dd9/resourceGroups/group/providers/Microsoft.Compute/disks/
WIN

New or existing subnet: Determines whether a new virtual network and
subnet are created, or an existing subnet is used. If you already have a virtual
network that is connected to your on-premises network, select Existing.

Subnet ID: If you want to deploy the VM into an existing VNet where you
have a subnet defined that the VM should be assigned to, provide the ID of that
specific subnet. The ID usually looks like this: /subscriptions/<subscription
id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>

3. Terms and conditions:


Review and accept the legal terms.

4. Select Purchase.

Install the VM Agent

To use the templates described in the preceding section, the VM Agent must be
installed on the OS disk, or the deployment will fail. Download and install the VM Agent
in the VM, as described in Download, install, and enable the Azure VM Agent.

If you don't use the templates described in the preceding section, you can also install
the VM Agent afterwards.

Join a domain (Windows only)


If your Azure deployment is connected to an on-premises Active Directory or DNS
instance via an Azure site-to-site VPN connection or ExpressRoute (this is called cross-
premises in Azure Virtual Machines planning and implementation for SAP NetWeaver), it
is expected that the VM is joined to an on-premises domain. For more information about
considerations for this task, see Join a VM to an on-premises domain (Windows only).

Configure proxy settings

Depending on how your on-premises network is configured, you might need to set up
the proxy on your VM. If your VM is connected to your on-premises network via VPN or
ExpressRoute, the VM might not be able to access the Internet, and won't be able to
download the required VM extensions or collect Azure infrastructure information for the
SAP Host agent via the SAP extension for Azure. For more information, see Configure
the proxy.

Configure Azure VM Extension for SAP


To be sure SAP supports your environment, set up the Azure Extension for SAP as
described in Configure the Azure Extension for SAP.
Detailed tasks for SAP software deployment
This section has detailed steps for doing specific tasks in the configuration and
deployment process.

Join a VM to an on-premises domain (Windows only)


If you deploy SAP VMs in a cross-premises scenario, where on-premises Active Directory
and DNS are extended in Azure, it is expected that the VMs join an on-premises
domain. The detailed steps you take to join a VM to an on-premises domain, and the
additional software required to be a member of an on-premises domain, vary by
customer. Usually, to join a VM to an on-premises domain, you need to install additional
software, like antimalware software, and backup or monitoring software.

In this scenario, you also need to make sure that if Internet proxy settings are forced
when a VM joins a domain in your environment, the Windows Local System Account (S-
1-5-18) in the Guest VM has the same proxy settings. The easiest option is to force the
proxy by using a domain Group Policy, which applies to systems in the domain.

Download, install, and enable the Azure VM Agent


For virtual machines that are deployed from an OS image that is not generalized (for
example, an image that doesn't originate in the Windows System Preparation, or
sysprep, tool), you need to manually download, install, and enable the Azure VM Agent.

If you deploy a VM from the Azure Marketplace, this step is not required. Images from
the Azure Marketplace already have the Azure VM Agent.

Windows

1. Download the Azure VM Agent:


a. Download the Azure VM Agent installer package .
b. Store the VM Agent MSI package locally on a personal computer or server.
2. Install the Azure VM Agent:
a. Connect to the deployed Azure VM by using Remote Desktop Protocol (RDP).
b. Open a Windows Explorer window on the VM and select the target directory for
the MSI file of the VM Agent.
c. Drag the Azure VM Agent Installer MSI file from your local computer/server to
the target directory of the VM Agent on the VM.
d. Double-click the MSI file on the VM.
3. For VMs that are joined to on-premises domains, make sure that eventual Internet
proxy settings also apply to the Windows Local System account (S-1-5-18) in the
VM, as described in Configure the proxy. The VM Agent runs in this context and
needs to be able to connect to Azure.

No user interaction is required to update the Azure VM Agent. The VM Agent is
automatically updated, and does not require a VM restart.

Linux
Use the following commands to install the VM Agent for Linux:

SUSE Linux Enterprise Server (SLES)

Console

sudo zypper install WALinuxAgent

Red Hat Enterprise Linux (RHEL) or Oracle Linux

Console

sudo yum install WALinuxAgent

If the agent is already installed, to update the Azure Linux Agent, do the steps described
in Update the Azure Linux Agent on a VM to the latest version from GitHub.

Configure the proxy


The steps you take to configure the proxy in Windows are different from the way you
configure the proxy in Linux.

Windows
Proxy settings must be set up correctly for the Local System account to access the
Internet. If your proxy settings are not set by Group Policy, you can configure the
settings for the Local System account.

1. Go to Start, enter gpedit.msc, and then select Enter.


2. Select Computer Configuration > Administrative Templates > Windows
Components > Internet Explorer. Make sure that the setting Make proxy settings
per-machine (rather than per-user) is disabled or not configured.
3. In Control Panel, go to Network and Sharing Center > Internet Options.
4. On the Connections tab, select the LAN settings button.
5. Clear the Automatically detect settings check box.
6. Select the Use a proxy server for your LAN check box, and then enter the proxy
address and port.
7. Select the Advanced button.
8. In the Exceptions box, enter the IP address 168.63.129.16. Select OK.

Linux

Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent,
which is located at /etc/waagent.conf.

Set the following parameters:

1. HTTP proxy host. For example, set it to proxy.corp.local.

Console

HttpProxy.Host=<proxy host>

2. HTTP proxy port. For example, set it to 80.

Console

HttpProxy.Port=<port of the proxy host>

3. Restart the agent.

Console

sudo service waagent restart

If you want to use the Azure repositories, make sure that the traffic to these repositories
is not going through your on-premises intranet. If you created user-defined routes to
enable forced tunneling, make sure that you add a route that routes traffic to the
repositories directly to the Internet, and not through your site-to-site VPN connection.
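As an illustration, the following hedged sketch adds a user-defined route that sends
traffic for one repository IP address directly to the internet instead of through the VPN
tunnel. The route table name and IP address are placeholders; repeat the command for
each repository address of your distribution.

Azure CLI

# Route traffic for a repository IP address directly to the internet,
# bypassing forced tunneling through the site-to-site VPN.
az network route-table route create --resource-group <resource-group> \
    --route-table-name <route-table-name> --name <route-name> \
    --address-prefix <repository-ip>/32 --next-hop-type Internet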

The VM Extension for SAP also needs to be able to access the internet. Make sure
to install the new VM Extension for SAP and follow the steps in Configure the Azure VM
extension for SAP solutions with Azure CLI in the VM Extension for SAP installation
guide to configure the proxy.

SLES

You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg.
The following figure shows an example:

RHEL
You also need to add routes for the IP addresses of the hosts listed in
/etc/yum.repos.d/rhui-load-balancers. For an example, see the preceding figure.

Oracle Linux

There are no repositories for Oracle Linux on Azure. You need to configure your
own repositories for Oracle Linux or use the public repositories.

For more information about user-defined routes, see User-defined routes and IP
forwarding.

Azure Extension for SAP

7 Note

General Support Statement:

Support for the Azure Extension for SAP is provided through SAP support channels.
If you need assistance with the Azure Extension for SAP, open a support case
with SAP Support .

When you've prepared the VM as described in Deployment scenarios of VMs for SAP on
Azure, the Azure VM Agent is installed on the virtual machine. The next step is to deploy
the Azure Extension for SAP, which is available in the Azure Extension Repository in the
global Azure datacenters. For more information, see Configure the Azure Extension for
SAP.

Next steps
Learn about RHEL for SAP in-place upgrade
SAP Business One on Azure Virtual
Machines
Article • 02/10/2023

This document provides guidance for deploying SAP Business One on Azure Virtual
Machines. It is not a replacement for the installation documentation of SAP Business
One. It covers basic planning and deployment guidelines for the Azure infrastructure
that runs Business One applications.

Business One supports two different databases:

SQL Server - see SAP Note #928839 - Release Planning for Microsoft SQL Server
SAP HANA - for exact SAP Business One support matrix for SAP HANA, checkout
the SAP Product Availability Matrix

Regarding SQL Server, the basic deployment considerations documented in Azure
Virtual Machines DBMS deployment for SAP NetWeaver apply. For SAP HANA,
considerations are mentioned in this document.

Prerequisites
To use this guide, you need basic knowledge of the following Azure components:

Azure virtual machines on Windows
Azure virtual machines on Linux
Azure networking and virtual networks management with PowerShell
Azure networking and virtual networks with CLI
Manage Azure disks with the Azure CLI

Even if you are interested in Business One only, the document Azure Virtual Machines
planning and implementation for SAP NetWeaver can be a good source of information.

The assumption is that you, as the party deploying SAP Business One, are:

Familiar with installing SAP HANA on a given infrastructure like a VM
Familiar with installing the SAP Business One application on an infrastructure like Azure
VMs
Familiar with operating SAP Business One and the DBMS system chosen
Familiar with deploying infrastructure in Azure

None of these areas are covered in this document.


Besides the Azure documentation, you should be aware of the main SAP Notes that refer to
Business One or that are central Notes from SAP for Business One:

528296 - General Overview Note for SAP Business One Releases and Related
Products
2216195 - Release Updates Note for SAP Business One 9.2, version for SAP
HANA
2483583 - Central Note for SAP Business One 9.3
2483615 - Release Updates Note for SAP Business One 9.3
2483595 - Collective Note for SAP Business One 9.3 General Issues
2027458 - Collective Consulting Note for SAP HANA-Related Topics of SAP
Business One, version for SAP HANA

Business One Architecture


Business One is an application that has two tiers:

A client tier with a 'fat' client
A database tier that contains the database schema for a tenant

A better overview of which components run in the client part and which run in the
server part is documented in the SAP Business One Administrator's Guide .

Because there is heavy latency-critical interaction between the client tier and the DBMS tier,
both tiers need to be located in Azure when deploying in Azure. It is usual that users
then connect through RDP to one or multiple VMs running an RDS service for the Business One client
components.

Sizing VMs for SAP Business One


Regarding the sizing of the client VM(s), the resource requirements are documented by
SAP in the document SAP Business One Hardware Requirements Guide . For Azure,
you need to focus on and calculate with the requirements stated in chapter 2.4 of the
document.

As Azure virtual machines for hosting the Business One client components and the
DBMS host, only VMs that are SAP NetWeaver supported are allowed. To find the list of
SAP NetWeaver supported Azure VMs, read SAP Note #1928533 .

When you run SAP HANA as the DBMS back end for Business One, only VMs that are listed for
Business One on HANA in the HANA certified IaaS platform list are supported for HANA.
The Business One client components are not affected by this stronger restriction for
SAP HANA as the DBMS system.

Operating system releases to use for SAP Business One


In principle, it is always best to use the most recent operating system releases. Especially
in the Linux space, new Azure functionality was introduced with more recent
minor releases of SUSE and Red Hat. On the Windows side, using Windows Server 2016
is highly recommended.

Deploying infrastructure in Azure for SAP Business One

The next few chapters describe the infrastructure pieces that matter when you deploy
SAP Business One.

Azure network infrastructure


The network infrastructure you need to deploy in Azure depends on whether you deploy
a single Business One system for yourself or whether you are a hoster who hosts
dozens of Business One systems for customers. There also might be slight changes in
the design depending on how you connect to Azure. Going through different possibilities,
one design assumes that you have VPN connectivity into Azure and that you extend your
Active Directory through VPN or ExpressRoute into Azure.

The simplified configuration presented introduces several security instances that allow
you to control and limit routing. It starts with:

The router/firewall on the customer's on-premises side.
The next instance is the Azure network security group (NSG) that you can use to
introduce routing and security rules for the Azure VNet that you run your SAP
Business One configuration in.
To prevent users of the Business One client from directly reaching the server that
runs the Business One server and the database, you should separate the
VM hosting the Business One client and the Business One server into two different
subnets within the VNet.
You would assign Azure NSGs to the two different subnets again in order to
limit access to the Business One server, as sketched after this list.
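A minimal sketch of such a two-subnet layout by using Azure CLI might look like the
following. All names and address ranges are placeholders, and the NSG still needs rules
that allow only the required Business One ports from the client subnet.

Azure CLI

# Virtual network with separate subnets for the client and server tiers.
az network vnet create --resource-group <resource-group> --name <vnet-name> \
    --address-prefix 10.1.0.0/16
az network vnet subnet create --resource-group <resource-group> \
    --vnet-name <vnet-name> --name b1-client-subnet --address-prefix 10.1.1.0/24
az network vnet subnet create --resource-group <resource-group> \
    --vnet-name <vnet-name> --name b1-server-subnet --address-prefix 10.1.2.0/24

# NSG that limits access to the Business One server subnet.
az network nsg create --resource-group <resource-group> --name b1-server-nsg
az network vnet subnet update --resource-group <resource-group> \
    --vnet-name <vnet-name> --name b1-server-subnet \
    --network-security-group b1-server-nsg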

A more sophisticated version of an Azure network configuration is based on the Azure
documented best practices of hub and spoke architecture. The architecture pattern of
hub and spoke would change the first simplified configuration to one like this:

For cases where the users are connecting through the internet without any private
connectivity into Azure, the design of the network in Azure should be aligned with the
principles documented in the Azure reference architecture for DMZ between Azure and
the Internet.

Business One database server


For the database type, SQL Server and SAP HANA are available. Independent of the
DBMS, you should read the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload to get a general understanding of DBMS deployments in
Azure VMs and the related networking and storage topics.

Though emphasized in the specific and generic database documents already, you
should make yourself familiar with:

Manage the availability of Windows virtual machines in Azure and Manage the
availability of Linux virtual machines in Azure
SLA for Virtual Machines

These documents should help you to decide on the selection of storage types and high
availability configuration.

In principle you should:

Use Premium SSDs over Standard HDDs. To learn more about the available disk
types, see our article Select a disk type
Use Azure Managed disks over unmanaged disks
Make sure that you have sufficient IOPS and I/O throughput configured with your
disk configuration
Combine the /hana/data and /hana/log volumes in order to have a cost-efficient
storage configuration

SQL Server as DBMS


For deploying SQL Server as DBMS for Business One, follow the document SQL Server
Azure Virtual Machines DBMS deployment for SAP NetWeaver.

Rough sizing estimates for the DBMS side for SQL Server are:

Number of users    vCPUs    Memory    Example VM types
up to 20           4        16 GB     D4s_v3, E4s_v3
up to 40           8        32 GB     D8s_v3, E8s_v3
up to 80           16       64 GB     D16s_v3, E16s_v3
up to 150          32       128 GB    D32s_v3, E32s_v3

The sizing listed above should give you an idea of where to start. You might need
fewer or more resources, in which case adapting on Azure is easy. A change between
VM types requires just a restart of the VM.
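
For example, moving from an E8s_v3 to an E16s_v3 is a single Azure CLI command; the VM restarts as part of the operation. The resource group and VM names here are hypothetical:

Bash

# List the sizes available for this VM on its current hardware cluster.
az vm list-vm-resize-options --resource-group b1-rg --name b1-db-vm --output table

# Resize the VM; this restarts the VM.
az vm resize --resource-group b1-rg --name b1-db-vm --size Standard_E16s_v3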

SAP HANA as DBMS

When using SAP HANA as the DBMS, you should follow the considerations
of the document SAP HANA on Azure operations guide.

For high availability and disaster recovery configurations around SAP HANA as database
for Business One in Azure, you should read the documentation SAP HANA high
availability for Azure virtual machines and the documentation pointed to from that
document.
For SAP HANA backup and restore strategies, you should read the document Backup
guide for SAP HANA on Azure Virtual Machines and the documentation pointed to from
that document.

Business One client server


For these components, storage considerations are not the primary concern. Nevertheless,
you want to have a reliable platform. Therefore, you should use Azure Premium Storage
for this VM, even for the base VHD. Size the VM with the data given in the SAP Business
One Hardware Requirements Guide . For Azure, you need to focus on and calculate with
the requirements stated in chapter 2.4 of the document. As you calculate the
requirements, compare them against the following documents to find the
ideal VM for you:

Sizes for Windows virtual machines in Azure
SAP Note #1928533

Compare the number of CPUs and memory needed to what is documented by Microsoft.
Also keep network throughput in mind when choosing the VMs.
SAP LaMa connector for Azure
Article • 04/18/2023

Many customers use SAP Landscape Management (LaMa) to operate and monitor their
SAP landscape. Since version 3.0 SP05, SAP LaMa includes a connector to Azure by
default. You can use this connector to deallocate and start virtual machines (VMs), copy
and relocate managed disks, and delete managed disks. With these basic operations,
you can relocate, copy, clone, and refresh SAP systems by using SAP LaMa.

This guide describes how to set up the SAP LaMa connector for Azure. It also describes
how to create and configure virtual machines that you can use to install adaptive SAP
systems.

7 Note

The connector is available only in SAP LaMa Enterprise Edition.

Resources
The following SAP Notes are related to the topic of SAP LaMa on Azure:

Note number Title

2343511 Microsoft Azure connector for SAP Landscape Management (LaMa)

2350235 SAP Landscape Management 3.0 - Enterprise Edition

You can find more information in the SAP Help Portal for SAP LaMa .

7 Note

If you need support for SAP LaMa or the connector for Azure, open an incident with
SAP on component BC-VCM-LVM-HYPERV.

General remarks
Be sure to enable Automatic Mountpoint Creation in Setup > Settings > Engine.

If SAP LaMa mounts volumes by using SAP Adaptive Extensions (SAPACEXT) on a
virtual machine, the mount point must already exist if this setting is not enabled.
Use a separate subnet, and don't use dynamic IP addresses to prevent IP address
"stealing" when you're deploying new VMs and SAP instances are unprepared.

If you use dynamic IP address allocation in the subnet that SAP LaMa also uses,
preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared,
the IP addresses are not reserved and might get allocated to other virtual
machines.

If you sign in to managed hosts, don't block file systems from being unmounted.

If you sign in to a Linux virtual machine and change the working directory to a
directory in a mount point (for example, /usr/sap/AH1/ASCS00/exe), the volume
can't be unmounted and a relocate or unprepare operation fails.

Be sure to disable CLOUD_NETCONFIG_MANAGE on SUSE SLES Linux virtual machines.
For more information, see SUSE KB 7023633 .

Set up the SAP LaMa connector for Azure


The connector for Azure is included in SAP LaMa as of version 3.0 SP05. We recommend
always installing the latest support package and patch for SAP LaMa 3.0.

The connector for Azure uses the Azure Resource Manager API to manage your Azure
resources. SAP LaMa can use a service principal or a managed identity to authenticate
against this API. If your SAP LaMa instance is running on an Azure VM, we recommend
using a managed identity.

Use a service principal to get access to the Azure API


Follow these steps to create a service principal for the SAP LaMa connector for Azure:

1. Go to the Azure portal .


2. Open the Azure Active Directory pane.
3. Select App registrations.
4. Select New registration.
5. Enter a name, and then select Register.
6. Select the new app, and then on the Settings tab, select Certificates & secrets.
7. Create a new client secret, enter a description for a new key, select when the secret
should expire, and then select Save.
8. Write down the value. You'll use it as the password for the service principal.
9. Write down the application ID. You'll use it as the username of the service principal.

By default, the service principal doesn't have permissions to access your Azure
resources. Assign the Contributor role to the service principal at resource group scope
for all resource groups that contain SAP systems that SAP LaMa should manage. For
detailed steps, see Assign Azure roles using the Azure portal.
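
If you prefer the command line over the portal, the same service principal and role assignment can be created with the Azure CLI. This is a minimal sketch; the application name, subscription ID, and resource group are hypothetical placeholders:

Bash

# Create a service principal with the Contributor role scoped to one resource group.
# The output contains appId (use as the username) and password (use as the client secret).
# Add one scope per resource group that contains SAP systems managed by SAP LaMa.
az ad sp create-for-rbac --name sap-lama-connector --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<sap-resource-group>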

Use a managed identity to get access to the Azure API


To be able to use a managed identity, your SAP LaMa instance has to run on an Azure
VM that has a system-assigned or user-assigned identity. For more information about
managed identities, read What are managed identities for Azure resources? and
Configure managed identities for Azure resources on a VM using the Azure portal.

By default, the managed identity doesn't have permissions to access your Azure
resources. Assign the Contributor role to the VM identity at resource group scope for all
resource groups that contain SAP systems that SAP LaMa should manage. For detailed
steps, see Assign Azure roles using the Azure portal.
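
A minimal Azure CLI sketch of these two steps, using a system-assigned identity and hypothetical resource group and VM names:

Bash

# Enable a system-assigned identity on the SAP LaMa VM and capture its principal ID.
principalId=$(az vm identity assign --resource-group lama-rg --name lama-vm --query systemAssignedIdentity --output tsv)

# Grant the identity the Contributor role on a resource group that contains SAP systems managed by SAP LaMa.
az role assignment create --assignee $principalId --role Contributor --scope /subscriptions/<subscription-id>/resourceGroups/<sap-resource-group>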

In your configuration of the SAP LaMa connector for Azure, select Use Managed
Identity to enable the use of the managed identity. If you want to use a system-
assigned identity, leave the User Name field empty. If you want to use a user-assigned
identity, enter its ID in the User Name field.

Create a new connector in SAP LaMa


Open the SAP LaMa website and go to Infrastructure. On the Cloud Managers tab,
select Add. Select Microsoft Azure Cloud Adapter, and then select Next. Enter the
following information:

Label: Choose a name for the connector instance.

User Name: Enter the service principal application ID or the ID of the user-
assigned identity of the virtual machine.

Password: Enter the service principal key/password. You can leave this field empty
if you use a system-assigned or user-assigned identity.

URL: Keep the default https://management.azure.com/ .

Monitoring Interval (Seconds): Enter an interval of at least 300.

Use Managed Identity: Select to enable SAP LaMa to use a system-assigned or
user-assigned identity to authenticate against the Azure API.

Subscription ID: Enter the Azure subscription ID.

Azure Active Directory Tenant ID: Enter the ID of the Active Directory tenant.

Proxy host: Enter the host name of the proxy if SAP LaMa needs a proxy to
connect to the internet.

Proxy port: Enter the TCP port of the proxy.

Change Storage Type to save costs: Enable this setting if the Azure adapter should
change the storage type of the managed disks to save costs when the disks are not
in use.

For data disks that are referenced in an SAP instance configuration, the adapter
changes the disk type to Standard Storage during an instance unprepare operation
and back to the original storage type during an instance prepare operation.

If you stop a virtual machine in SAP LaMa, the adapter changes the storage type of
all attached disks, including the OS disk, to Standard Storage. If you start a virtual
machine in SAP LaMa, the adapter changes the storage type back to the original
storage type.

Select Test Configuration to validate your input. You should see the following message
at the bottom of the website:

"Connection successful: Connection to Microsoft cloud was successful. 7 resource


groups found (only 10 groups requested)."

Provision a new adaptive SAP system


You can manually deploy a new virtual machine or use one of the Azure templates in the
quickstart repository . The repository contains templates for SAP NetWeaver ASCS ,
SAP NetWeaver application servers , and the database . You can also use these
templates to provision new hosts as part of a system copy, clone, or similar activity.

We recommend using a separate subnet for all virtual machines that you want to
manage with SAP LaMa. We also recommend that you don't use dynamic IP addresses
to prevent IP address "stealing" when you're deploying new virtual machines and SAP
instances are unprepared.

7 Note

If possible, remove all virtual machine extensions. They might cause long runtimes
for detaching disks from a virtual machine.

Make sure that the user <hanasid>adm, the user <sapsid>adm, and the group sapsys
exist on the target machine with the same ID and group ID, or use LDAP. Enable and
start the Network File Sharing (NFS) server on the virtual machines that should be used
to run SAP NetWeaver ABAP Central Services (ASCS) or SAP Central Services (SCS).

Manual deployment
SAP LaMa communicates with the virtual machine by using the SAP Host Agent. If you
deploy the virtual machines manually or are not using the Azure Resource Manager
template from the quickstart repository, be sure to install the latest SAP Host Agent and
the SAP Adaptive Extensions. For more information about the required patch levels for
Azure, see SAP Note 2343511 .

Manual deployment of a Linux virtual machine

Create a new virtual machine with one of the supported operating systems listed in SAP
Note 2343511 . Add more IP configurations for the SAP instances. Each instance needs
at least one IP address and must be installed using a virtual host name.

The SAP NetWeaver ASCS instance needs disks for /sapmnt/<SAPSID>,
/usr/sap/<SAPSID>, /usr/sap/trans, and /usr/sap/<sapsid>adm. The SAP NetWeaver
application servers don't need more disks. Everything related to the SAP instance must
be stored on ASCS and exported via NFS. Otherwise, you currently can't add more
application servers by using SAP LaMa.

Manual deployment for SAP HANA

Create a new virtual machine with one of the supported operating systems for SAP
HANA, as listed in SAP Note 2343511 . Add one extra IP configuration for SAP HANA
and one per HANA tenant.

SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log.

Manual deployment for Oracle Database on Linux

Create a new virtual machine with one of the supported operating systems for Oracle
databases, as listed in SAP Note 2343511 . Add one extra IP configuration for the
Oracle database.

The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle.

Manual deployment for Microsoft SQL Server


Create a new virtual machine with one of the supported operating systems for Microsoft
SQL Server, as listed in SAP Note 2343511 . Add one extra IP configuration for the SQL
Server instance.

The SQL Server database server needs disks for the database data and log files. It also
needs disks for c:\usr\sap.

Be sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine
that you want to use as a target for relocating an SAP NetWeaver application server or
as a system copy/clone target. SAP LaMa can't relocate SQL Server itself, so a virtual
machine that you want to use for these purposes needs SQL Server preinstalled.

Deploy a virtual machine by using an Azure template


Download the following latest available archives from the SAP Software Download
Center for the operating system of the virtual machines:

SAPCAR 7.21
SAP Host Agent 7.21
SAP Adaptive Extension 1.0 EXT

Also download the following components from the Microsoft Download Center :

Microsoft Visual C++ 2010 Redistributable Package (x64) (Windows only)


Microsoft ODBC Driver for SQL Server (SQL Server only)

The components are required for template deployment. The easiest way to make them
available to the template is to upload them to an Azure storage account and create a
shared access signature (SAS).
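
For example, uploading one of the archives and generating a read-only SAS with the Azure CLI might look like the following sketch. The storage account, container, blob names, and expiry date are hypothetical; the SAS generation shown assumes the account key is available to the CLI (for example, through the AZURE_STORAGE_KEY environment variable):

Bash

# Create a container and upload an archive (repeat the upload for each required archive).
az storage container create --account-name sapbitsstore --name sapbits
az storage blob upload --account-name sapbitsstore --container-name sapbits --name SAPCAR_721.EXE --file ./SAPCAR_721.EXE

# Generate a read-only SAS URL for the uploaded blob, valid until the given expiry.
az storage blob generate-sas --account-name sapbitsstore --container-name sapbits --name SAPCAR_721.EXE --permissions r --expiry 2025-12-31T00:00Z --https-only --full-uri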

The templates have the following parameters:

sapSystemId : The SAP system ID (SID). It's used to create the disk layout (for
example, /usr/sap/<sapsid>).

computerName : The computer name of the new virtual machine. SAP LaMa also uses
this parameter. When you use this template to provision a new virtual machine as
part of a system copy, SAP LaMa waits until the host with this computer name can
be reached.

osType : The type of the operating system that you want to deploy.

dbtype : The type of the database. This parameter is used to determine how many
extra IP configurations need to be added and how the disk layout should look.

sapSystemSize : The size of the SAP system that you want to deploy. It's used to
determine the type and size of the virtual machine instance.

adminUsername : The username for the virtual machine.

adminPassword : The password for the virtual machine. You can also provide a public
key for SSH.

sshKeyData : The public SSH key for the virtual machine. It's supported only for
Linux operating systems.

subnetId : The ID of the subnet that you want to use.

deployEmptyTarget : An empty target that you can deploy if you want to use the
virtual machine as a target for an instance relocation or something similar. In this
case, no additional disks or IP configurations are attached.

sapcarLocation : The location for the SAPCAR application that matches the
operating system that you deploy. SAPCAR is used to extract the archives that you
provide in other parameters.

sapHostAgentArchiveLocation : The location of the SAP Host Agent archive. The SAP
Host Agent is deployed as part of this template deployment.

sapacExtLocation : The location of the SAP Adaptive Extensions. SAP Note
2343511 lists the minimum patch level required for Azure.

vcRedistLocation : The location of the Visual C++ runtime that's
required to install the SAP Adaptive Extensions. This parameter is required only for
Windows.

odbcDriverLocation : The location of the ODBC driver that you want to install. Only
the Microsoft ODBC driver for SQL Server is supported.

sapadmPassword : The password for the sapadm user.

sapadmId : The Linux user ID of the sapadm user. It's not required for Windows.

sapsysGid : The Linux group ID of the sapsys group. It's not required for Windows.

_artifactsLocation : The base URI, which contains artifacts that this template
requires. When you deploy the template by using the accompanying scripts, a
private location in the subscription is used and this value is automatically
generated. You need this URI only if you don't deploy the template from GitHub.

_artifactsLocationSasToken : The SAS token required to access
_artifactsLocation . When you deploy the template by using the accompanying
scripts, a SAS token is automatically generated. You need this token only if you
don't deploy the template from GitHub.

SAP HANA
The following examples assume that you install the SAP HANA system with SID HN1 and
the SAP NetWeaver system with SID AH1. The virtual host names are:

hn1-db for the HANA instance
ah1-db for the HANA tenant that the SAP NetWeaver system uses
ah1-ascs for SAP NetWeaver ASCS
ah1-di-0 for the first SAP NetWeaver application server

Install SAP NetWeaver ASCS for SAP HANA by using Azure managed disks

Before you start the SAP Software Provisioning Manager (SWPM), you need to mount
the IP address of the virtual host name of ASCS. The recommended way is to use
SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP
address after a reboot.

Linux

Bash

# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-ascs -n 255.255.255.128

Windows
Bash

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-ascs -n 255.255.255.128

Run SWPM. For ASCS Instance Host Name, use ah1-ascs.

Linux

Add the following profile parameter to the SAP Host Agent profile, which is located at
/usr/sap/hostctrl/exe/host_profile. For more information, see SAP Note 2628497 .

Bash

acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1

Install SAP NetWeaver ASCS for SAP HANA on Azure NetApp Files
Azure NetApp Files provides NFS for Azure. In the context of SAP LaMa, this simplifies
the creation of the ASCS instances and the subsequent installation of application
servers. Previously, the ASCS instance also had to act as an NFS server, and the
parameter acosprep/nfs_paths had to be added to the host profile of the SAP Host
Agent.

Network requirements

Azure NetApp Files requires a delegated subnet, which must be part of the same virtual
network as the SAP servers. Here's an example for such a configuration:

1. Create the virtual network and the first subnet.


2. Create the delegated subnet for Microsoft.NetApp/volumes.
3. Create a NetApp account in the Azure portal.
Within the NetApp account, the capacity pool specifies the size and type of disks
for each pool.

4. Define the NFS volumes.

Because one pool might contain volumes for multiple systems, choose a self-
explaining naming scheme. Adding the SID helps to group related volumes
together.

For the ASCS and AS instances, you need the following mounts: /sapmnt/<SID>,
/usr/sap/<SID>, and /home/<sid>adm. Optionally, you need /usr/sap/trans for the
central transport directory, which is at least used by all systems of one landscape.
5. Repeat the preceding steps for the other volumes.
6. Mount the volumes to the systems where the initial installation with SAP SWPM is
performed:

a. Create the mount points. In this case, the SID is AN1, so you run the following
commands:

Bash

mkdir -p /home/an1adm
mkdir -p /sapmnt/AN1
mkdir -p /usr/sap/AN1
mkdir -p /usr/sap/trans

b. Mount the Azure NetApp Files volumes by using the following commands:

Bash

# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-home-sidadm /home/an1adm
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-sapmnt-sid /sapmnt/AN1
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-usr-sap-sid /usr/sap/AN1
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/global-usr-sap-trans /usr/sap/trans

You can also look up the mount commands from the portal. The local mount
points need to be adjusted.

c. Run the df -h command. Check the output to verify that you mounted the
volumes correctly.

7. Perform the installation with SWPM. The same steps must be performed for at
least one AS instance.

After the successful installation, the system must be discovered within SAP LaMa.
The mount points should look like the following screenshot for the ASCS and AS
instances.
7 Note

This is an example. The IP addresses and export path are different from the
ones that you used before.
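
The Azure NetApp Files objects described in the preceding steps can also be created with the Azure CLI instead of the portal. The following is a minimal sketch with hypothetical names, sizes, and address ranges; adjust them to your landscape:

Bash

# Delegated subnet for Azure NetApp Files in the existing virtual network.
az network vnet subnet create --resource-group an1-rg --vnet-name an1-vnet --name anf-subnet --address-prefixes 10.0.3.0/28 --delegations "Microsoft.NetApp/volumes"

# NetApp account and capacity pool (pool size is specified in TiB).
az netappfiles account create --resource-group an1-rg --name an1-netapp --location westeurope
az netappfiles pool create --resource-group an1-rg --account-name an1-netapp --name pool1 --location westeurope --size 4 --service-level Premium

# One NFS volume per mount; repeat for /sapmnt/<SID>, /usr/sap/<SID>, /home/<sid>adm, and /usr/sap/trans.
az netappfiles volume create --resource-group an1-rg --account-name an1-netapp --pool-name pool1 --name an1-usr-sap-sid --location westeurope --service-level Premium --usage-threshold 128 --file-path "an1-usr-sap-sid" --vnet an1-vnet --subnet anf-subnet --protocol-types NFSv3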

Install SAP HANA

If you install SAP HANA by using the SAP HANA database lifecycle manager (HDBLCM)
command-line tool, use the --hostname parameter to provide a virtual host name.

Add the IP address of the virtual host name of the database to a network interface. The
recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT,
be sure to remount the IP address after a reboot.

Add another virtual host name and IP address for the name that the application servers
use to connect to the HANA tenant:

Bash

# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h hn1-db -n 255.255.255.128
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-db -n 255.255.255.128

Run the database instance installation of SWPM on the application server VM, not on
the HANA VM. In the Database for SAP System dialog, for Database Host, use ah1-db.

Install SAP NetWeaver Application Server for SAP HANA

Before you start SWPM, you need to mount the IP address of the virtual host name of
the application server. The recommended way is to use SAPACEXT. If you mount the IP
address by using SAPACEXT, be sure to remount the IP address after a reboot.

Linux
Bash

# /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-di-0 -n 255.255.255.128

Windows

Bash

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-di-0 -n 255.255.255.128

We recommend that you use the SAP NetWeaver profile parameter
dbs/hdb/hdb_use_ident to set the identity that's used to find the key in the SAP HANA
user store (hdbuserstore). You can add this parameter manually after the database
instance installation with SWPM or run SWPM with the following code:

Bash

# from https://blogs.sap.com/2015/04/14/sap-hana-client-software-different-
ways-to-set-the-connectivity-data/
/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO

If you set it manually, you also need to create new hdbuserstore entries:

Bash

# run as <sapsid>adm
/usr/sap/AH1/hdbclient/hdbuserstore LIST
# reuse the port that was listed by the command above, in this example 35041
/usr/sap/AH1/hdbclient/hdbuserstore SET DEFAULT ah1-db:35041@AH1 SAPABAP1 <password>

In the Primary Application Server Instance dialog, for PAS Instance Host Name, use
ah1-di-0.

Post-installation steps for SAP HANA

Back up SYSTEMDB and all tenant databases before you try to copy a tenant, move a
tenant, or create a system replication.

Microsoft SQL Server
The following examples assume that you install the SAP NetWeaver system with SID
AS1. The virtual host names are:

as1-db for the SQL Server instance that the SAP NetWeaver system uses
as1-ascs for SAP NetWeaver ASCS
as1-di-0 for the first SAP NetWeaver application server

Install SAP NetWeaver ASCS for SQL Server


Before you start SWPM, you need to mount the IP address of the virtual host name of
ASCS. The recommended way is to use SAPACEXT. If you mount the IP address by using
SAPACEXT, be sure to remount the IP address after a reboot.

Bash

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-ascs -n 255.255.255.128

Run SWPM. For ASCS Instance Host Name, use as1-ascs.

Install SQL Server


Before you start SWPM, you need to add the IP address of the virtual host name of the
database to a network interface. The recommended way is to use SAPACEXT. If you
mount the IP address by using SAPACEXT, be sure to remount the IP address after a
reboot.

Bash

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-db -n 255.255.255.128

Run the database instance installation of SWPM on the SQL Server virtual machine. Use
SAPINST_USE_HOSTNAME=as1-db to override the host name that's used to connect to SQL
Server. If you deployed the virtual machine by using the Azure Resource Manager
template, set the directory that's used for the database data files to C:\sql\data, and set
the database log file to C:\sql\log.

Make sure that the user NT AUTHORITY\SYSTEM has access to the SQL Server instance
and has the server role sysadmin. For more information, see SAP Notes 1877727 and
2562184 .

Install the SAP NetWeaver application server


Before you start SWPM, you need to mount the IP address of the virtual host name of
the application server. The recommended way is to use SAPACEXT. If you mount the IP
address by using SAPACEXT, be sure to remount the IP address after a reboot.

Bash

# C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-di-0 -n 255.255.255.128

In the Primary Application Server Instance dialog, for PAS Instance Host Name, use
as1-di-0.

Troubleshooting

Errors and warnings during discovery


The SELECT permission was denied.

Error:

[Microsoft][ODBC SQL Server Driver][SQL Server]The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'. [SOAPFaultException] The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'.

Solution: Make sure that NT AUTHORITY\SYSTEM can access the SQL Server
instance. See SAP Note 2562184 .

Errors and warnings during instance validation


An exception was raised in the validation of hdbuserstore. See Log Viewer.

Error:

Caused by: com.sap.nw.lm.aci.monitor.api.validation.RuntimeValidationException: Exception in validator with ID 'RuntimeHDBConnectionValidator' (Validation: 'VALIDATION_HDB_USERSTORE'): Could not retrieve the hdbuserstore

The HANA userstore is not in the correct location.

Solution: Make sure that /usr/sap/AH1/hdbclient/install/installation.ini is correct.

Errors and warnings during a system copy


An error occurred in validating the system provisioning step.

Error:

Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException

Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r

Solution: Back up all databases in the source HANA system.

An error occurred in the system copy Start step of the database instance.

Error:

Host Agent Operation '000D3A282BC91EE8A1D76CF1F92E2944' failed (OperationException. FaultCode: '127', Message: 'Command execution failed. : [Microsoft][ODBC SQL Server Driver][SQL Server]User does not have permission to alter database 'AS2', the database does not exist, or the database is not in a state that allows access checks.')

Solution: Make sure that NT AUTHORITY\SYSTEM can access the SQL Server
instance. See SAP Note 2562184 .

Errors and warnings during a system clone


An error occurred in trying to register an instance agent in the Forced Register
and Start Instance Agent step of the application server or ASCS.

Error:

Error occurred when trying to register instance agent. (RemoteException: 'Failed to load instance data from profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': Cannot access profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': No such file or directory.')

Solution: Make sure that the sapmnt share on ASCS/SCS has full access for
SAP_AS1_GlobalAdmin.

An error occurred in the Enable Startup Protection for Clone step.

Error:

Failed to open file '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0' Cause: No such file or directory

Solution: The computer account of the application server needs write access to
the profile.

Errors and warnings during creation of system replication


An exception was raised in selecting Create System Replication.

Error:

Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException

Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r

Solution: Test if SAPACEXT can be executed as <hanasid>adm.

An error occurred when full copy was not enabled in the storage step.

Error:

An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and field targetStorageSystemId

Solution: Ignore warnings in the step and try again. This issue was fixed in a
support package/patch of SAP LaMa.

Errors and warnings during relocation


The path /usr/sap/AH1 is not allowed for NFS re-exports.

Solution: Add ASCS exports to the ASCS Host Agent profile. See SAP Note
2628497 .

A function is not implemented in relocating ASCS.

Command output:

exportfs: host:/usr/sap/AX1: Function not implemented

Solution: Make sure that the NFS server service is enabled on the target virtual
machine for relocation.

Errors and warnings during application server installation


An error occurred in executing the SAPinst getProfileDir step.

Error:

Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir' reported an error: Node \\\as1-ascs\sapmnt\AS1\SYS\profile does not exist. Start SAPinst in interactive mode to solve this problem

Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.

An error occurred in executing the SAPinst askUnicode step.

Error:

Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_getUnicode|ind|ind|ind|ind|unicode|0|askUnicode' reported an error: Start SAPinst in interactive mode to solve this problem

Solution: If you use a recent SAP kernel, SWPM can't determine whether the
system is a Unicode system anymore by using the message server of ASCS. See
SAP Note 2445033 .

Until this issue is fixed in a new support package/patch of SAP LaMa, work
around it by setting the profile parameter OS_UNICODE=uc in the default profile
of your SAP system.

An error occurred in executing the SAPinst dCheckGivenServer" version="1.0" step.

Error:

Last error reported by the step: Installation was canceled by user.

Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.

An error occurred in executing the SAPinst checkClient" version="1.0" step.

Error:

Last error reported by the step: Installation was canceled by user.

Solution: Make sure that the Microsoft ODBC driver for SQL Server is installed
on the virtual machine on which you want to install the application server.

An error occurred in executing the SAPinst copyScripts step.

Error:

Last error reported by the step: System call failed. DETAILS: Error 13 (0x0000000d) (Permission denied) in execution of system call 'fopenU' with parameter (\\\as1-ascs/sapmnt/AS1/SYS/exe/uc/NTAMD64/strdbs.cmd, w), line (494) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp), stack trace: CThrThread.cpp: 85: CThrThread::threadFunction() CSiServiceSet.cpp: 63: CSiServiceSet::executeService() CSiStepExecute.cpp: 913: CSiStepExecute::execute() EJSController.cpp: 179: EJSControllerImpl::executeScript() JSExtension.hpp: 1136: CallFunctionBase::call() iaxxcfile.cpp: 183: iastring CIaOsFileConnect::callMemberFunction(iastring const& name, args_t const& args) iaxxcfile.cpp: 1849: iastring CIaOsFileConnect::newFileStream(args_t const& _args) iaxxbfile.cpp: 773: CIaOsFile::newFileStream_impl(4) syxxcfile.cpp: 233: CSyFileImpl::openStream(ISyFile::eFileOpenMode) syxxcfstrm.cpp: 29: CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFileOpenMode) syxxcfstrm.cpp: 265: CSyFileStreamImpl::open() syxxcfstrm2.cpp: 58: CSyFileStream2Impl::CSyFileStream2Impl(const CSyPath & \\\aw1-ascs/sapmnt/AW1/SYS/exe/uc/NTAMD64/strdbs.cmd, 0x4) syxxcfstrm2.cpp: 456: CSyFileStream2Impl::open()

Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.

An error occurred in executing the SAPinst askPasswords step.

Error:

Last error reported by the step: System call failed. DETAILS: Error 5 (0x00000005) (Access is denied.) in execution of system call 'NetValidatePasswordPolicy' with parameter (...), line (359) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/account/synxcaccmg.cpp), stack trace: CThrThread.cpp: 85: CThrThread::threadFunction() CSiServiceSet.cpp: 63: CSiServiceSet::executeService() CSiStepExecute.cpp: 913: CSiStepExecute::execute() EJSController.cpp: 179: EJSControllerImpl::executeScript() JSExtension.hpp: 1136: CallFunctionBase::call() CSiStepExecute.cpp: 764: CSiStepExecute::invokeDialog() DarkModeGuiEngine.cpp: 56: DarkModeGuiEngine::showDialogCalledByJs() DarkModeDialog.cpp: 85: DarkModeDialog::submit() EJSController.cpp: 179: EJSControllerImpl::executeScript() JSExtension.hpp: 1136: CallFunctionBase::call() iaxxcaccount.cpp: 107: iastring CIaOsAccountConnect::callMemberFunction(iastring const& name, args_t const& args) iaxxcaccount.cpp: 1186: iastring CIaOsAccountConnect::validatePasswordPolicy(args_t const& _args) iaxxbaccount.cpp: 430: CIaOsAccount::validatePasswordPolicy_impl() synxcaccmg.cpp: 297: ISyAccountMgt::PasswordValidationMessage CSyAccountMgtImpl::validatePasswordPolicy(saponazure,*****) const

Solution: Add a host rule in the isolation step to allow communication from the
VM to the domain controller.

Next steps
SAP HANA on Azure operations guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP Cloud Appliance Library
Article • 04/09/2024

SAP Cloud Appliance Library offers a quick and easy way to create SAP workloads in
Azure. You can set up a fully configured demo environment from an Appliance Template
or deploy a standardized system for an SAP product based on default or custom SAP
software installation stacks. This page lists the latest Appliance Templates and, below
them, the latest SAP S/4HANA stacks for production-ready deployments.

To deploy an appliance template, you need to authenticate with your S-User or P-User.
You can create a P-User free of charge via the SAP Community .

For details on Azure account creation, see the SAP learning video and description .

You can also find detailed answers to your questions related to SAP Cloud Appliance
Library on Azure in the SAP CAL FAQ .

The online library is continuously updated with Appliances for demo, proof of concept,
and exploration of new business cases. For the most recent ones, select "Create
Appliance" here from the list, or visit cal.sap.com for further templates.

Deployment of appliances through SAP Cloud Appliance Library

Appliance Template: SAP S/4HANA 2023, Fully-Activated Appliance
Date: December 14 2023
Creation Link: Create Appliance
Description: This appliance contains SAP S/4HANA 2023 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access.

Appliance Template: SAP S/4HANA 2022 FPS02, Fully-Activated Appliance
Date: July 16 2023
Creation Link: Create Appliance
Description: This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access.

Appliance Template: SAP S/4HANA 2023 FPS01
Date: March 12 2024
Creation Link: Create Appliance
Description: This Appliance Template contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3336782 – Composite SAP note: Rapid Activation for SAP Fiori in SAP S/4HANA 2023. It also includes a remote desktop for easy frontend access.

Appliance Template: SAP BW/4HANA 2023 Developer Edition
Date: April 07 2024
Creation Link: Create Appliance
Description: This solution offers you an insight into SAP BW/4HANA 2023. SAP BW/4HANA is the next-generation data warehouse optimized for SAP HANA. Besides the basic BW/4HANA options, the solution offers a range of SAP HANA optimized BW/4HANA Content and the next step of hybrid scenarios with SAP Datasphere.

Appliance Template: SAP S/4HANA 2022, Fully-Activated Appliance
Date: December 15 2022
Creation Link: Create Appliance
Description: This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access.

Appliance Template: SAP Focused Run 4.0 FP02, unconfigured
Date: December 07 2023
Creation Link: Create Appliance
Description: SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics.

Deployment of S/4HANA system for productive usage through SAP Cloud Appliance Library
You can now also deploy SAP S/4HANA systems with High Availability (HA), non-HA or
single server architecture through SAP Cloud Appliance Library. The offering comprises
default SAP S/4HANA software stacks including FPS levels and an integration into
Maintenance Planner to enable creation and installation of custom SAP S/4HANA
software stacks. The following links highlight the Product stacks that you can quickly
deploy on Azure. Just select “Deploy System”.

Each of the following solutions comes as a standard S/4HANA system installation
including High Availability capabilities to ensure higher system uptime for productive
usage. The system parameters can be customized during initial provisioning according
to the requirements for the target system. For each product, select "Deploy System" to
start a deployment, or "Details" for more information.

SAP S/4HANA 2022 FPS02 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2022 FPS01 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2022 FPS00 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2021 FPS04 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2021 FPS03 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2021 FPS02 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2021 FPS01 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2021 FPS00 for Productive Deployments: Deploy System | Details
SAP S/4HANA 2020 FPS04 for Productive Deployments: Deploy System | Details

Within a few hours, a healthy SAP S/4HANA appliance or product is deployed in Azure.

If you bought an SAP CAL subscription, SAP fully supports deployments through SAP
CAL on Azure. The support queue is BC-VCM-CAL.
Deploy SAP IDES EHP7 SP3 for SAP ERP
6.0 on Azure
Article • 02/10/2023

This article describes how to deploy an SAP IDES system running with SQL Server and
the Windows operating system on Azure via the SAP Cloud Appliance Library (SAP CAL)
3.0. The screenshots show the step-by-step process. To deploy a different solution,
follow the same steps.

To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also
has a blog about the new SAP Cloud Appliance Library 3.0 .

7 Note

As of May 29, 2017, you can use the Azure Resource Manager deployment model in
addition to the less-preferred classic deployment model to deploy the SAP CAL. We
recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.

If you already created an SAP CAL account that uses the classic model, you need to
create another SAP CAL account. This account needs to exclusively deploy into Azure by
using the Resource Manager model.

After you sign in to the SAP CAL, the first page usually leads you to the Solutions page.
The solutions offered on the SAP CAL are steadily increasing, so you might need to
scroll quite a bit to find the solution you want. The highlighted Windows-based SAP
IDES solution that is available exclusively on Azure demonstrates the deployment
process:
Create an account in the SAP CAL
1. To sign in to the SAP CAL for the first time, use your SAP S-User or other user
registered with SAP. Then define an SAP CAL account that is used by the SAP CAL
to deploy appliances on Azure. In the account definition, you need to:

a. Select the deployment model on Azure (Resource Manager or classic).

b. Enter your Azure subscription. An SAP CAL account can be assigned to one
subscription only. If you need more than one subscription, you need to create
another SAP CAL account.

c. Give the SAP CAL permission to deploy into your Azure subscription.

7 Note

The next steps show how to create an SAP CAL account for Resource Manager
deployments. If you already have an SAP CAL account that is linked to the
classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource
Manager model.

2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer
preferred.

b. Microsoft Azure is the new Resource Manager deployment model.

To deploy in the Resource Manager model, select Microsoft Azure.

3. Enter the Azure Subscription ID that can be found on the Azure portal.

4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click
Authorize. The following page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be
the coadministrator of the Azure subscription you selected. The following page
appears in the browser tab:

6. Click Accept. If the authorization is successful, the SAP CAL account definition
displays again. After a short time, a message confirms that the authorization
process was successful.

7. To assign the newly created SAP CAL account to your user, enter your User ID in
the text box on the right and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL,
click Review.

9. To create the association between your user and the newly created SAP CAL
account, click Create.

You successfully created an SAP CAL account that is able to:

Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.

7 Note

Before you can deploy the SAP IDES solution based on Windows and SQL Server,
you might need to sign up for an SAP CAL subscription. Otherwise, the solution
might show up as Locked on the overview page.

Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows
and SQL Server. Click Create Instance, and confirm the usage and terms and
conditions.

2. On the Basic Mode: Create Instance page, you need to:

a. Enter an instance Name.


b. Select an Azure Region. You might need an SAP CAL subscription to get multiple
Azure regions offered.

c. Enter the master Password for the solution, as shown:

3. Click Create. After some time, depending on the size and complexity of the
solution (the SAP CAL provides an estimate), the status is shown as active and
ready for use:
4. To find the resource group and all its objects that were created by the SAP CAL, go
to the Azure portal. The virtual machine can be found starting with the same
instance name that was given in the SAP CAL.

5. On the SAP CAL portal, go to the deployed instances and click Connect. The
following pop-up window appears:
6. Before you can use one of the options to connect to the deployed systems, click
Getting Started Guide. The documentation names the users for each of the
connectivity methods. The passwords for those users are set to the master
password you defined at the beginning of the deployment process. In the
documentation, other more functional users are listed with their passwords, which
you can use to sign in to the deployed system.

Within a few hours, a healthy SAP IDES system is deployed in Azure.

If you bought an SAP CAL subscription, SAP fully supports deployments through the
SAP CAL on Azure. The support queue is BC-VCM-CAL.
SAP Information Lifecycle Management
(ILM) with Microsoft Azure Blob Storage
Article • 02/10/2023

SAP Information Lifecycle Management (ILM) provides a broad range of capabilities for
managing data volumes and retention, as well as the decommissioning of
legacy systems, while balancing the total cost of ownership, risk, and legal compliance.
SAP ILM Store (a component of ILM) enables storing these archive files and
attachments from an SAP system in Microsoft Azure Blob Storage, thus enabling cloud
storage.

How to
This document covers the creation and configuration of an Azure Blob Storage account to be
used with SAP ILM. This account is used to store archive data from an S/4HANA
system.
The steps to be followed to create a storage account are:

1. Register a new application with your subscription.


2. Create a Blob storage account.
3. Create a new custom role or use an existing (build-In or custom) role.
4. Assign the role to application to allow access to the storage account.

7 Note

Steps 2, 3 and 4 can either be done manually or by using the Microsoft Quickstart
template.
QuickStart template approach:
This is an automated approach to create the Azure account. You can find the template in
the Azure Quickstart Templates library .

Manual configuration approach:


An Azure Blob Storage account can be configured manually. The steps to be followed are:

1. Register a new application


The details are available at Register an application with the Microsoft identity
platform

7 Note

Make sure that Client secret is added as per the section Add Credentials –
Add a Client Secret

2. Create a Blob Storage account

Refer to the steps in the page Create a storage account.
Ensure "Enable secure transfer" is set.
It is recommended to set the following property values:

Enable blob public access = false
Minimum TLS Version = 1.2
Enable storage account key access = false

3. Maintain IAM for the account

In the Access Control (IAM) settings, go to "Role Assignments" and add a "Role
assignment" for the app you created, with the role "Storage Blob Data Contributor".
In the app dialog, choose "User, group or Service Principal" for the "Assign Access to"
field.

7 Note

Ensure no other user has access to this storage account apart from the
registered application.
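
As an alternative to the portal flow described in steps 2 and 3, a minimal Azure CLI sketch might look like the following. The account name, resource group, location, and application ID are hypothetical placeholders:

Bash

# Step 2: create the Blob Storage account with the recommended security settings.
az storage account create --resource-group ilm-rg --name sapilmstore --location westeurope --kind StorageV2 --sku Standard_LRS --https-only true --min-tls-version TLS1_2 --allow-blob-public-access false --allow-shared-key-access false

# Step 3: assign "Storage Blob Data Contributor" on the account to the registered application.
storageId=$(az storage account show --resource-group ilm-rg --name sapilmstore --query id --output tsv)
az role assignment create --assignee <application-id> --role "Storage Blob Data Contributor" --scope $storageId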

During the account setup and configuration, it is recommended to refer to the
Security recommendations for Blob Storage. With the completion of this setup, you are
ready to use this Blob Storage account with SAP ILM to store archive files from the
S/4HANA system.

Next steps
SAP ILM on the SAP help portal
Integrating Azure with SAP RISE
managed workloads
Article • 01/15/2024

For customers with SAP solutions such as RISE with SAP Enterprise Cloud Services (ECS)
and SAP S/4HANA Cloud, private edition (PCE) deployed in Azure, integrating the SAP
managed environment with their own Azure ecosystem and third party applications is of
particular importance. The following articles explain the concepts and best practices to
follow for a performant and secure solution.

Network connectivity options in Azure with SAP RISE
Integrating Azure services with SAP RISE
Identity and security in Azure with SAP RISE

Enablement of integration scenarios

It's important to distinguish the responsibility between SAP and the customer when
enabling certain Azure scenarios. The following diagram illustrates the most common
situations.

There might be some circumstances when an initial request needs to be placed with SAP
RISE for enablement. However, most Azure scenarios depend on open network
communication to available SAP interfaces and activities entirely within the customer's
responsibility. The diagram shown doesn't replace or extend an existing responsibility
matrix between the customer and SAP RISE/ECS.

First steps
Review the specifics within this document and then jump to the individual documents for
your scenario. From the integration table, some examples are listed:

Setup network peering
Enable Power App to consume SAP interfaces
Enable Power BI, Fabric and Synapse to consume SAP data.
Enable Microsoft Entra ID as SSO provider
Defend SAP at machine speed with Sentinel to block compromised users during attacks.

Azure support
SAP RISE customers in Azure have the SAP landscape run by SAP in an Azure
subscription owned by SAP. The subscription and all Azure resources of your SAP
environment are visible to and managed by SAP only. In turn, the customer's own Azure
environment contains applications that interact with the SAP systems. Elements such as
virtual networks, network security groups, firewalls, routing, Azure services such as Azure
Data Factory and others running inside the customer subscription access the SAP
managed landscape. When you engage with Azure support, only resources in your own
subscriptions are in scope. Contact SAP for issues with any resources operated in SAP's
Azure subscriptions for your RISE workload.

As part of your RISE project, document the interfaces and transfer points between your
own Azure environment, the SAP workload managed by SAP RISE, and on-premises. Such a
document needs to include network information such as address space, firewall(s) and
routing, file shares, Azure services, DNS, and others. Document the ownership of any
interface partner and where any resource is running, so you can access this information
quickly in a troubleshooting and support situation. Contact SAP's support organization
for services running in SAP's Azure subscriptions.

) Important
For all details about RISE with SAP and SAP S/4HANA Cloud private edition, contact
your SAP representative.

RISE architecture
SAP creates and manages the entire SAP RISE architecture running in SAP's subscription
and Azure tenant. SAP also decides, validates, and deploys all technical elements and
details used by SAP for RISE in Azure. Microsoft and SAP are continuously working
together to create the Azure infrastructure architectures optimized to support the RISE
SLAs, to apply Azure best practices as documented by Microsoft, and to adapt these best
practices to the unique challenges of the RISE managed services. The cooperation on
Azure architecture as experienced by RISE customers includes continuous optimizations
and adoption of new Azure functionalities to provide added value for RISE customers.
Microsoft documents the integration part with SAP RISE in these documents, however
not the details of SAP's architecture, which is intellectual property of SAP. SAP might
modify and optimize Microsoft's recommended architecture in their employed
architecture to fulfill RISE SLAs and customer expectations. Work with SAP on
configuration and customization of the deployed RISE landscape, to fit your
organization's requirements.

Next steps
Check out the documentation:

Network connectivity options in Azure with SAP RISE


Integrating Azure services with SAP RISE
Identity and security in Azure with SAP RISE
Virtual network peering
Get started with SAP on Azure VMs
Connectivity with SAP RISE
Article • 12/21/2023

With your SAP landscape operated within RISE and running in a separate virtual
network, this article describes the available connectivity options.

Virtual network peering with SAP RISE/ECS


A virtual network (vnet) peering is the most performant way to connect securely
between two virtual networks, all in a private network address space. The peered
networks appear as one for connectivity purposes, allowing applications to talk to each
other. Applications running in different virtual networks, subscriptions, Azure tenants or
regions can communicate directly. Like network traffic on a single virtual network,
peering traffic remains in a private address space and doesn't traverse the internet.

For SAP RISE/ECS deployments, virtual peering is the preferred way to establish
connectivity with the customer's existing Azure environment. Primary benefits are:

Minimized network latency and maximum throughput between the SAP RISE
landscape and your own applications and services running in Azure.
No extra complexity and cost of a different on-premises communication path for
SAP RISE; instead, existing Azure network hub(s) are used.

Virtual network peering can be set up within the same region as your SAP managed
environment, but also through global virtual network peering between any two Azure
regions. With SAP RISE/ECS available in many Azure regions , the region should match
with workload running in customer virtual networks due to latency and peering cost
considerations. However, some of the scenarios (for example, central S/4HANA
deployment for a globally present company) also require to peer networks globally. For
such globally distributed SAP landscape, we recommend to use multi-region network
architecture within your own Azure environment, with SAP RISE peering locally in each
geography to your network hubs.
Both the SAP and customer virtual network(s) are protected with network security
groups (NSG), permitting communication on SAP and database ports through the
peering. Communication between the peered virtual networks is secured through these
NSGs, limiting communication to and from customer’s SAP environment.
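To illustrate the customer-side part of such NSG rules, here's a minimal sketch with Azure CLI. The resource names, the SAP RISE address space, and the port ranges are assumptions for illustration only; the actual values come from your SAP RISE deployment.

Azure CLI

# Hypothetical values: allow SAP dispatcher (32xx), gateway (33xx), and https traffic
# from the assumed SAP RISE address space into the subnet protected by this NSG.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name my-sap-clients-nsg \
  --name Allow-SAP-RISE-Inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.10.0.0/16 \
  --destination-port-ranges 3200-3299 3300-3399 443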

Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, the virtual network peering is set up between different tenants. You accomplish this configuration by setting up the peering with the Azure resource ID of the SAP-provided network and having SAP approve the peering. Add a user from the opposite Microsoft Entra tenant as a guest user, accept the guest user invitation, and follow the process documented at Create a virtual
network peering - different subscriptions. Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration, and architecture to enable this process to be completed swiftly.
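As an illustration of the customer-side step, here's a minimal sketch with Azure CLI. The names are placeholders, and the remote virtual network resource ID is the one SAP provides; SAP approves and completes the matching peering on its side.

Azure CLI

# Hypothetical names: peer your virtual network with the SAP-provided virtual
# network, referencing the Azure resource ID shared by SAP.
az network vnet peering create \
  --name to-sap-rise \
  --resource-group my-resource-group \
  --vnet-name my-hub-vnet \
  --remote-vnet "/subscriptions/<SAP subscription ID>/resourceGroups/<SAP resource group>/providers/Microsoft.Network/virtualNetworks/<SAP virtual network>" \
  --allow-vnet-access \
  --allow-forwarded-traffic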

VPN vnet-to-vnet
As an alternative to virtual network peering, a virtual private network (VPN) connection can be established between VPN gateways deployed in both the SAP RISE/ECS subscription and the customer's own. You can establish a vnet-to-vnet connection between these two VPN gateways, enabling fast communication between the two separate virtual networks. The respective networks and gateways can reside in different Azure regions.
While virtual network peering is the recommended and more typical deployment model, a VPN vnet-to-vnet connection can potentially simplify a complex virtual peering between customer and SAP RISE/ECS virtual networks. The VPN gateway acts as the only point of entry into the customer's network and is managed and secured by a central team. Network throughput is limited by the chosen gateway SKU on both sides. To address resiliency requirements, ensure zone-redundant virtual network gateways are used for such a connection.
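As an illustration of the customer-side half of such a connection, here's a minimal sketch with Azure CLI. Gateway names, the SAP-side gateway resource ID, and the shared key are placeholders agreed with SAP, who configures the matching connection on the RISE side.

Azure CLI

# Hypothetical values: create the customer-side vnet-to-vnet connection. SAP creates
# the corresponding connection on its gateway using the same shared key.
az network vpn-connection create \
  --name to-sap-rise \
  --resource-group my-resource-group \
  --vnet-gateway1 my-vpn-gateway \
  --vnet-gateway2 "/subscriptions/<SAP subscription ID>/resourceGroups/<SAP resource group>/providers/Microsoft.Network/virtualNetworkGateways/<SAP gateway>" \
  --shared-key "<pre-shared key agreed with SAP>"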

Network Security Groups are in effect on both customer and SAP virtual network,
identically to peering architecture enabling communication to SAP NetWeaver and
HANA ports as required. For details how to set up the VPN connection and which
settings should be used, contact your SAP representative.

Connectivity back to on-premises


With an existing customer Azure deployment, the on-premises network is already connected through ExpressRoute (ER) or VPN. The same on-premises network path is typically used for SAP RISE/ECS managed workloads. The preferred architecture is to use the existing ER/VPN gateways in the customer's network hub for this purpose, with the connected SAP RISE virtual network seen as a spoke network connected to the customer's virtual network hub.
With this architecture, central policies and security rules governing network connectivity to customer workloads also apply to SAP RISE/ECS managed workloads. The same on-premises network path is used for both the customer's and the SAP RISE/ECS virtual network.

If there's currently no Azure to on-premises connectivity, contact your SAP representative for details on which connection models are possible for the RISE workload. If SAP RISE/ECS establishes the on-premises connectivity within RISE directly, such an on-premises connection is available for reaching the SAP managed virtual network only. Such a dedicated ExpressRoute or VPN connection within SAP RISE can't be used to access the customer's own Azure virtual networks.

7 Note

A virtual network can have only one gateway, local or remote. With virtual network peering established with SAP RISE using remote gateway transit, no gateways can be added in the SAP RISE/ECS virtual network. A combination of virtual network peering with remote gateway transit together with another virtual network gateway in the SAP RISE/ECS virtual network isn't possible.

Virtual WAN with SAP RISE managed workloads

Similarly to using a hub and spoke network architecture with connectivity to both the SAP RISE/ECS virtual network and on-premises, an Azure Virtual WAN (vWAN) hub can be used for the same purpose. The RISE workload is then a spoke network connected to the vWAN network hub. Both connection options to SAP RISE described earlier – virtual network peering as well as VPN vnet-to-vnet – are available with vWAN.

The vWAN network hub is deployed and managed by the customer in their own subscription. The customer also entirely manages the on-premises connection and routing through the vWAN network hub, with access to the SAP RISE peered spoke virtual network.

Connectivity during migration to SAP RISE


Migration of your SAP landscape to SAP RISE is done in several phases over several
months or longer. Some of your SAP environments are migrated already and in use
productively, while you prepare other SAP systems for migration. In most customer
projects, the largest and most critical systems are migrated in the middle or at end of
the project. You need to consider having ample bandwidth for data migration or
database replication, and not impact the network path of your users to the already
productive RISE environments. Already migrated SAP systems also might need to
communicate with the SAP landscape still on-premises or at existing service provider.

During your migration planning to SAP RISE, plan how in each phase SAP systems are
reachable for your user base and how data transfer to RISE/ECS virtual network is
routed. Often multiple locations and parties are involved, such as existing service
provider and data centers with own connection to your corporate network. Make sure
no temporary solutions with VPN connections are created without considering how in
later phases SAP data gets migrated for the most business critical systems.

DNS integration with SAP RISE/ECS managed workloads
Integration of customer owned networks with cloud based infrastructure and providing
a seamless name resolution concept is a vital part of a successful project
implementation. This diagram describes one of the common integration scenarios of
SAP owned subscriptions, virtual networks and DNS infrastructure with customer’s local
network and DNS services. In such setup, Azure hub or on-premises DNS servers are
holding all DNS entries. The DNS infrastructure is capable of resolving DNS requests coming from all sources (on-premises clients, customer's Azure services, and SAP managed environments).

Design description and specifics:

Custom DNS configuration for SAP-owned virtual networks

Two VMs inside the RISE/PCE Azure virtual network host DNS servers

Customers must provide and delegate to SAP a subdomain/zone (for example, ecs.contoso.com) to assign names and create forward and reverse DNS entries for the virtual machines that run the SAP managed environment. SAP DNS servers hold a master DNS role for the delegated zone.

DNS zone transfer from SAP DNS server to customer’s DNS servers is the primary
method to replicate DNS entries from RISE/PCE environment.

Customer's Azure virtual networks are also using custom DNS configuration
referring to customer DNS servers located in Azure hub virtual network.

Optionally, customers can set up a private DNS forwarder within their Azure virtual networks. Such a forwarder then pushes DNS requests coming from Azure services to the SAP DNS servers for requests targeted to the delegated zone (for example, ecs.contoso.com).

DNS zone transfer is applicable for the designs when customers operate custom DNS
solution (for example, AD DS or BIND servers) within their hub virtual network.
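As an illustration of such a custom DNS setup, here's a minimal sketch of a conditional forwarder on a customer-operated Windows DNS server (AD DS). The zone name and the SAP DNS server addresses are placeholders for the values SAP provides for your delegated zone.

PowerShell

# Hypothetical values: forward queries for the zone delegated to SAP
# to the SAP RISE DNS servers.
Add-DnsServerConditionalForwarderZone -Name "ecs.contoso.com" -MasterServers 10.10.0.10, 10.10.0.11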

7 Note

Both Azure provided DNS and Azure Private DNS zones don't support the DNS zone transfer capability and hence can't be used to accept DNS replication from SAP RISE/PCE/ECS DNS servers. Additionally, SAP typically doesn't support external DNS service providers for the delegated zone.
SAP published a blog post on the DNS implementation with SAP RISE in Azure; see it for details .

For more about the usage of Azure DNS for SAP outside of SAP RISE/ECS, see the details in the following blog post .

Internet outbound and inbound connections with SAP RISE/ECS
SAP workloads communicating with external applications and interfaces could require a network egress path to the Internet. Similarly, your company's user base (for example, for SAP Fiori) needs Internet ingress or inbound connections to the SAP landscape. For SAP RISE managed workloads, work with your SAP representative to explore the needs for such https/RFC/other communication paths. Network communication to/from the Internet is by default not enabled for SAP RISE/ECS customers, and default networking uses private IP ranges only. Internet connectivity requires planning with SAP to optimally protect the customer's SAP landscape.

Should you enable Internet bound or incoming traffic with SAP RISE, the network communication is protected through various Azure technologies such as NSGs, ASGs, Application Gateway with Web Application Firewall (WAF), proxy servers, and others, depending on use and network protocols. These services are entirely managed by SAP within the SAP RISE/ECS virtual network and subscription. The network path between SAP RISE and the Internet typically remains within the SAP RISE/ECS virtual network only and doesn't transit into or from the customer's own vnet(s).

Applications within a customer's own virtual network connect to the Internet directly from the respective virtual network or through the customer's centrally managed services such as Azure Firewall, Azure Application Gateway, NAT Gateway, and others. Connectivity to SAP BTP from non-SAP RISE/ECS applications takes the same Internet bound network path on your side. Should an SAP Cloud Connector be needed for such integration, run it on the customer's VMs. In other words, SAP BTP or any public endpoint communication is on a network path managed by the customer if no SAP RISE workload is involved.

SAP BTP connectivity


SAP Business Technology Platform (BTP) provides a multitude of applications typically
accessed through public IP/hostname via the Internet. Customer's services running in
their Azure subscriptions access BTP through the configured outbound access method,
such as a central firewall or outbound public IPs. Some SAP BTP services, such as SAP Data Intelligence, however, are by design accessed through a separate virtual network peering instead of a public endpoint.

SAP offers Private Link Service for customers using SAP BTP on Azure. The SAP Private Link Service connects SAP BTP services through a private IP range into the customer's Azure network, making them accessible privately through the private link service instead of through the Internet. Contact SAP for availability of this service for SAP RISE/ECS workloads.

See SAP's documentation and a series of blog posts on the architecture of the SAP
BTP Private Link Service and private connectivity methods, dealing with DNS and
certificates in following SAP blog series Getting Started with BTP Private Link Service for
Azure .

Network communication ports with SAP RISE


Any Azure service with access to the customer virtual network can communicate with the SAP landscape running within the SAP RISE/ECS subscription via the available ports. Your SAP system in SAP RISE can be accessed through the open network ports, as configured and opened by SAP for your use. The https, RFC, and JDBC/ODBC protocols can be used through private network address ranges. Additionally, applications can connect through https on a publicly available IP address exposed by an SAP RISE managed Azure application gateway. For details and settings for the application gateway and NSG open ports, contact SAP.

See the document Integrating Azure services with SAP RISE for how the available connectivity allows you to extend your SAP landscape with Azure services.

Next steps
Check out the documentation:

Integrating Azure with SAP RISE


Integrating Azure services with SAP RISE
Identity and security in Azure with SAP RISE
Virtual network peering
DNS integration with SAP RISE in multicloud environment series guide – Azure |
SAP Blogs
Integrating Azure services with SAP RISE
Article • 12/21/2023

Your SAP landscape running within SAP RISE can easily integrate with additional
applications on Azure. With the information about available interfaces to the SAP
RISE/ECS landscape, many scenarios with Azure Services are possible.

Data integration scenarios with Azure Data Factory or Synapse Analytics require a
self-hosted integration runtime or Azure Integration Runtime. For details see the
next chapter.

App integration scenarios with Microsoft services using ABAP with the ABAP SDK
for Azure and the Microsoft AI SDK for SAP . Installation requires prior setup of
abapGit . See this SAP blog post for more information about ABAP Platform
and ABAP Cloud environment.

App integration scenarios with Microsoft services using Azure Integration Services serving as an intermediary to address the desired integration pattern.
Consumers like Power Apps, Power BI, Azure Functions and Azure App Service are
governed and secured through Azure API Management deployed in the customer
environment. This component offers industry standard features such as request
throttling, usage quotas, and SAP Principal Propagation to retain the SAP backend
authorizations with Microsoft 365 authenticated callers. Find the API Management
policy for SAP Principal Propagation here.

SAP legacy protocol remote function call (RFC) support with built-in connectors for Azure Logic Apps, Power Apps, and Power BI through the Microsoft on-premises data gateway between the SAP RISE system and the Azure service. See the chapters below for more details.

Find a comprehensive overview of all the available SAP and Microsoft integration
scenarios here.

Integration with self-hosted integration runtime
Integrating your SAP system with Azure cloud native services such as Azure Data Factory or Azure Synapse uses these communication channels to the SAP RISE/ECS managed environment.
The following high-level architecture shows a possible integration scenario with Azure data services such as Data Factory or Synapse Analytics. For these Azure services, either a self-hosted integration runtime (self-hosted IR or IR) or an Azure integration runtime (Azure IR) can be used. The use of either integration runtime depends on the chosen data connector; most SAP connectors are only available for the self-hosted IR. The SAP ECC connector can be used through both the Azure IR and the self-hosted IR. The choice of IR governs the network path taken. The SAP .NET connector is used for the SAP table connector, SAP BW, and SAP OpenHub connectors alike. All these connectors use SAP function modules (FM) on the SAP system, executed through RFC connections. Lastly, if direct database access has been agreed with SAP, with users created and the connection path opened, the ODBC/JDBC connector for SAP HANA can be used from the self-hosted IR as well.

For data connectors using the Azure IR, this IR accesses your SAP environment through a public IP address. SAP RISE/ECS provides this endpoint through an application gateway for this use, and communication and data movement occur through https.

Data connectors within the self-hosted integration runtime communicate with the SAP
system within SAP RISE/ECS subscription and vnet through the established vnet peering
and private network address only. The established network security group rules limit
which application can communicate with the SAP system.

The customer is responsible for deployment and operation of the self-hosted integration runtime within their subscription and vnet. The communication between Azure PaaS services such as Data Factory or Synapse Analytics and the self-hosted integration runtime is within the customer's subscription. SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge or support of any details of the connected application or service.
Contact SAP for details on communication paths available to you with SAP RISE and the
necessary steps to open them. SAP must also be contacted for any SAP license details
for any implications accessing SAP data through any external applications.

Learn more about the overall support for the SAP data integration scenario from our Cloud Adoption Framework, with a detailed introduction to each SAP connector, comparisons, and guidance. The whitepaper SAP data integration using Azure Data Factory completes the picture.
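To illustrate the self-hosted IR path, here's a minimal sketch of an Azure Data Factory linked service definition for the SAP table connector. The server address, system/client numbers, credentials, and integration runtime name are placeholders for your RISE environment and the users agreed with SAP.

JSON

{
  "name": "SapTableLinkedService",
  "properties": {
    "type": "SapTable",
    "typeProperties": {
      "server": "<private IP or host name of the SAP application server>",
      "systemNumber": "00",
      "clientId": "100",
      "userName": "<SAP user>",
      "password": { "type": "SecureString", "value": "<password>" }
    },
    "connectVia": {
      "referenceName": "<name of your self-hosted integration runtime>",
      "type": "IntegrationRuntimeReference"
    }
  }
}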

On-premises data gateway


Further Azure Services such as Azure Logic Apps, Power Apps or Power BI communicate
and exchange data with SAP systems through an on-premises data gateway where
required. The on-premises data gateway is a virtual machine, running in Azure or on-
premises. It provides secure data transfer between these Azure Services and your SAP
systems including the option for runtime and driver support for SAP RFCs.

With SAP RISE, the on-premises data gateway can connect to Azure services running in the customer's Azure subscription. The VM running the data gateway is deployed and operated by the customer. The following high-level architecture serves as an overview; a similar method can be used for either service.

The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed via the private network address through the vnet peering or VPN site-to-site connection. The on-premises data gateway VM running in the customer's Azure subscription uses the SAP .NET connector to run RFC, BAPI, or IDoc calls through the RFC connection. Additionally, depending on the service and the way the communication is set up, a way to connect to the public IP of the SAP system's REST API through https might be required. The https connection to a public IP can be exposed through an SAP RISE/ECS managed application gateway. This high-level architecture shows the possible integration scenario. Alternatives to it, such as using Logic Apps single tenant and private endpoints to secure the communication, can be seen as extensions and aren't described here.

SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription. Contact SAP for any SAP license details and for any implications of accessing SAP data through an Azure service connecting to the SAP system or database.

Next steps
Check out the documentation:

Integrating Azure with SAP RISE overview


Network connectivity options in Azure with SAP RISE
SAP and Microsoft integration scenarios
SAP Data Integration Using Azure Data Factory
Azure identity and security services with
SAP RISE
Article • 12/21/2023

This article details integration of Azure identity and security services with an SAP RISE
workload. Additionally use of some Azure monitoring services are explained for an SAP
RISE landscape.

Single sign-on for SAP


Single sign-on (SSO) is configured for many SAP environments. With SAP workloads running in ECS/RISE, the steps to implement SSO don't differ from those for a natively run SAP system. The integration steps with Microsoft Entra ID based SSO are available for typical ECS/RISE managed workloads:

Tutorial: Microsoft Entra Single sign-on (SSO) integration with SAP NetWeaver
Tutorial: Microsoft Entra single sign-on (SSO) integration with SAP Fiori
Tutorial: Microsoft Entra integration with SAP HANA

SSO method | Identity Provider | Typical use case | Implementation
--- | --- | --- | ---
SAML/OAuth | Microsoft Entra ID | SAP Fiori, Web GUI, Portal, HANA | Configuration by customer
SNC | Microsoft Entra ID | SAP GUI | Configuration by customer
SPNEGO | Active Directory (AD) | Web GUI, SAP Enterprise Portal | Configuration by customer and SAP

SSO against Active Directory (AD) of your Windows domain, for an ECS/RISE managed SAP environment with SAP SSO Secure Login Client, requires AD integration for end user devices. With SAP RISE, the Windows systems aren't integrated with the customer's Active Directory domain. The domain integration isn't necessary for SSO with AD/Kerberos, as the domain security token is read on the client device and exchanged securely with the SAP system. Contact SAP if you require any changes to integrate AD based SSO or to use third-party products other than SAP SSO Secure Login Client, as some configuration on RISE managed systems might be required.
Microsoft Sentinel with SAP RISE
The SAP RISE certified Microsoft Sentinel solution for SAP applications allows you to
monitor, detect, and respond to suspicious activities. Microsoft Sentinel guards your
critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other
clouds, or on-premises infrastructure.

The solution allows you to gain visibility to user activities on SAP RISE/ECS and the SAP
business logic layers and apply Sentinel’s built-in content.

Use a single console to monitor all your enterprise estate including SAP instances
in SAP RISE/ECS on Azure and other clouds, SAP Azure native and on-premises
estate
Detect and automatically respond to threats: detect suspicious activity including
privilege escalation, unauthorized changes, sensitive transactions, data exfiltration
and more with out-of-the-box detection capabilities
Correlate SAP activity with other signals: more accurately detect SAP threats by
cross-correlating across endpoints, Microsoft Entra data and more
Customize based on your needs - build your own detections to monitor sensitive
transactions and other business risks
Visualize the data with built-in workbooks

For SAP RISE/ECS, the Microsoft Sentinel solution must be deployed in the customer's Azure subscription. All parts of the Sentinel solution are managed by the customer and not by SAP. Private network connectivity from the customer's vnet is needed to reach the SAP landscapes managed by SAP RISE/ECS. Typically, this connection is over the established vnet peering or through the alternatives described in this document.
To enable the solution, only an authorized RFC user is required, and nothing needs to be installed on the SAP systems. The container-based SAP data collection agent included with the solution can be installed either on a VM or in AKS/any Kubernetes environment. The collector agent uses an SAP service user to consume application log data from your SAP landscape through the RFC interface using standard RFC calls.

Authentication methods supported in SAP RISE: SAP username and password or X509/SNC certificates
Only RFC based connections are possible currently with SAP RISE/ECS environments

Note for running Microsoft Sentinel in an SAP RISE/ECS environment:

The following log fields/source require an SAP transport change request: Client IP
address information from SAP security audit log, DB table logs (preview), spool
output log. Sentinel's built-in content (detections, workbooks and playbooks)
provides extensive coverage and correlation without those log sources.
SAP infrastructure and operating system logs aren't available to Sentinel in RISE,
including VMs running SAP, SAPControl data sources, network resources placed
within ECS. SAP monitors elements of the Azure infrastructure and operation
system independently.

Use prebuilt playbooks for security, orchestration, automation and response capabilities
(SOAR) to react to threats quickly. A popular first scenario is SAP user blocking with
intervention option from Microsoft Teams. The integration pattern can be applied to any
incident type and target service spanning towards SAP Business Technology Platform
(BTP) or Microsoft Entra ID with regard to reducing the attack surface.

For more information on Microsoft Sentinel and SOAR for SAP, see the blog series From
zero to hero security coverage with Microsoft Sentinel for your critical SAP security
signals .
For more information on Microsoft Sentinel and SAP, including a deployment guide, see
Sentinel product documentation.

Azure Monitoring for SAP with SAP RISE


Azure Monitor for SAP solutions is an Azure-native solution for monitoring your SAP
system. It extends the Azure monitor platform monitoring capability with support to
gather data about SAP NetWeaver, database, and operating system details.

SAP RISE/ECS is a fully managed service for your SAP landscape, and thus Azure Monitor for SAP solutions isn't intended to be utilized for such a managed environment. SAP RISE/ECS doesn't support any integration with Azure Monitor for SAP solutions. SAP's own monitoring and reporting is used and provided to the customer as defined by your service description with SAP.

Azure Center for SAP Solutions


As with Azure Monitor for SAP solutions, SAP RISE/ECS doesn't support any integration with Azure Center for SAP solutions in any capability. All SAP RISE workloads are deployed by SAP and run in SAP's Azure tenant and subscription, without any access by the customer to the Azure resources.

Next steps
Check out the documentation:

Integrating Azure with SAP RISE overview


Network connectivity options in Azure with SAP RISE
Integrating Azure services with SAP RISE
Deploy Microsoft Sentinel solution for SAP® applications
Virtual Machine Scale Sets for SAP
workload
Article • 03/21/2024

In Azure, Virtual machine scale sets provide a logical grouping of platform-managed virtual machines.

Virtual machine scale sets offer two orchestration modes that enable improved virtual machine management. For SAP workloads, the virtual machine scale set with flexible orchestration is the recommended and only supported option, as it offers the ability to use different virtual machine SKUs and operating systems within a single scale set.
The flexible orchestration of virtual machine scale sets provides the option to create the scale set within a region or span it across availability zones. When creating a flexible scale set within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale set are distributed across the specified number of fault domains in the same region. On the other hand, creating a flexible scale set across availability zones with platformFaultDomainCount=1 (FD=1) distributes the virtual machines across the specified zones, and the scale set also distributes VMs across different fault domains within each zone on a best effort basis. For SAP workload, only flexible scale sets with FD=1 are supported. The advantage of using flexible scale sets with FD=1 for cross zonal deployment, instead of a traditional availability zone deployment, is that the VMs deployed with the scale set are distributed across different fault domains within the zone in a best-effort manner.
There are two ways to configure flexible virtual machine scale sets: with or without a scaling profile. However, for SAP workload, we recommend creating a flexible virtual machine scale set without a scaling profile, because the autoscaling feature of a scale set with a scaling profile doesn't work out of the box for SAP workload. So, currently the flexible virtual machine scale set is solely used as a deployment framework for SAP.

Important considerations of Flexible Virtual Machine Scale Sets for SAP workload
1. Virtual Machine Scale Set with Flexible orchestration is the recommended and
supported orchestration mode for SAP workloads. The Uniform orchestration
mode can't be used for SAP workloads.
2. For SAP workloads, flexible orchestration of virtual machine scale sets is supported
only with FD=1. Currently regional deployment with FD>1 isn't supported for SAP
workload.
3. Deploy each SAP system in a separate flexible scale set.
4. For SAP NetWeaver, it's recommended to deploy all components of a single SAP
system within a single flexible scale set. These components include the database,
SAP ASCS/ERS, and SAP application servers.
5. Different virtual machine (VM) SKUs, such as D-Series, E-Series, M-Series, and
operating systems, including Windows and various Linux distributions, can be
included within a single virtual machine scale set with flexible orchestration.
6. When setting up a flexible scale set for SAP workload, platformFaultDomainCount
can be set to a maximum value of 1. As a result, the virtual machine instances
associated with the scale set would be distributed across multiple fault domains on
a best effort basis.
7. You can configure flexible virtual machine scale sets with or without a scaling
profile. However, it's recommended to create a flexible virtual machine scale set
without a scaling profile.
8. The standard load balancer is the only supported load balancer for virtual
machines deployed in flexible scale set.
9. To configure Azure fence agent with managed-system identity (MSI) for highly
available SAP environment using pacemaker cluster, you can enable system-
managed identity on individual VM.
10. Capacity reservation can be enabled at the individual VM level if you're using
flexible scale set without a scaling profile to manage your SAP workload. For more
information, see the limitations and restrictions section as not all SKUs are
currently supported for capacity reservation.
11. For SAP workload, we don't advise using a proximity placement group (PPG) in
combination with a flexible scale set deployment with FD=1.
12. In a multi-SID SAP ASCS/ERS environment, it's recommended to deploy the first
SAP system using a flexible scale set with FD=1. Additionally, it's necessary to set
up a separate flexible scale set with FD=1 for the application and database tier of
the second system.

) Important

After the creation of the scale set, the orchestration mode and configuration type
(with or without scaling profile) cannot be modified or updated at a later time.
Reference architecture of SAP workload
deployed with Flexible Virtual Machine Scale
Sets
When creating virtual machine scale set with flexible orchestration across availability
zones, it's important to mention all the availability zones where you would be deploying
your SAP system. It's worth noting that the availability zones must be specified while
creating the scale set, as they can't be modified at a later stage.

By default, when configuring flexible scale set across availability zones, the fault domain
count is set to 1. It means that the VM instances belonging to the scale set would be
spread across different fault domains on a best-effort basis in each zone.

The diagram illustrates the architecture for deploying three separate systems using flexible virtual machine scale sets with FD=1. Three flexible virtual machine scale sets are created, one for each system, with a platform fault domain count set to 1. The first flexible scale set is created for a high availability SAP system with two availability zones (zones 1 and 2). The second scale set is created to configure an SBD device across three availability zones (zones 1, 2, and 3), and the third scale set is created for a nonproduction or non-HA SAP system with one availability zone (zone 1).

The virtual machines for each system are then manually deployed in their corresponding
availability zone within the scale set. For SAP System #1, high availability components,
such as primary and secondary databases and ASCS/ERS instances, are deployed across
multiple zones. For application tier VMs, the scale set would distribute them across
different fault domains within a single zone, on a best-effort basis. Take note that it
wouldn't be feasible to include more VMs for SAP System #1 in availability zone 3 at a
later stage. It is because the flexible scale set is limited to only two availability zones,
which are zone 1 and 2. For more information on high availability deployment for SAP
workload, see High-availability architecture and scenarios for SAP NetWeaver.

For SBD devices, VMs are manually deployed in each availability zone within the scale
set. For SAP system #3, which is a nonproduction or non-HA environment, all the
components of SAP systems are deployed in a single zone.

7 Note

When creating a flexible scale set for zonal deployment, it's not possible to set
platformFaultDomainCount to a value higher than 1.

Configuration of Flexible Virtual Machine Scale Set without a scaling profile
For SAP workloads, it's recommended to create a flexible virtual machine scale set
without a scaling profile. To create a flexible scale set across availability zones, set the
fault domain count to 1 and specify the desired zones.

Azure portal

To set up a virtual machine scale set without scaling profile using Azure portal,
proceed as follows -

1. Sign in to Azure portal .


2. Search for Virtual machine scale set and select create on the corresponding
page.
3. In the basics tab, provide the necessary details:
a. Under project details, verify the correct subscription and choose my-
resource-group from the resource group dropdown.
b. For scale set details, name your scale set myVmssFlex, choose the
appropriate region, and specify availability zone (For example, zone1,
zone2, zone3) for your deployment.
4. Select the flexible orchestration mode.
5. Under the scaling section, select no scaling profile.
6. For the allocation policy, select max spreading.
7. Select create.

7 Note

For SAP workload, only flexible scale sets with FD=1 are supported. So, don't configure the scale set with "fixed spreading" as the allocation policy.

Once you have created the flexible virtual machine scale set, you can create a virtual machine by following the quick start guide. When configuring the virtual machine, be sure to select "virtual machine scale set" under availability options and choose the flexible scale set you created. The portal lists all the zones that you included when creating the flexible scale set, so you can select the desired availability zone for your VM. Follow the remaining instructions in the quick start guide to complete the virtual machine configuration.
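As a command-line alternative to the portal steps, here's a minimal sketch using Azure CLI. Names, region, zones, image, and VM size are assumptions for illustration; the sketch assumes that omitting the image and instance count parameters creates the scale set without a scaling profile.

Azure CLI

# Hypothetical names and region: create a flexible scale set without a scaling
# profile, spanning zones 1-3 with platform fault domain count 1.
az vmss create \
  --name myVmssFlex \
  --resource-group my-resource-group \
  --location westeurope \
  --orchestration-mode Flexible \
  --platform-fault-domain-count 1 \
  --zones 1 2 3

# Deploy a VM into the scale set, pinned to zone 1. Image and size are chosen per
# VM, because the scale set carries no scaling profile.
az vm create \
  --name mySapAppVm1 \
  --resource-group my-resource-group \
  --vmss myVmssFlex \
  --zone 1 \
  --image <OS image> \
  --size Standard_E16ds_v5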
FAQs for Virtual Machine Scale Set for
SAP workload
Article • 03/21/2024

Get answers to frequently asked questions about Virtual Machine Scale Sets for SAP
workload.

SAP workload deployment

Can I create a flexible scale set with a scaling profile for SAP workload to use the autoscaling feature for SAP application servers?
Use of flexible scale set with scaling profile isn't recommended, as the scaling feature
doesn't work out-of-the-box for SAP workload. Currently, virtual machines scale set with
flexible orchestration can only be used as a deployment framework for SAP workload.

Does setting FD=1 for flexible scale set zonal deployment imply that all VMs within the scale set would belong to a single fault domain?
Setting FD=1 for flexible scale set zonal deployment means that the scale set would
attempt to max spread instances across multiple fault domains on best effort basis.

Flexible virtual machine scale sets with FD=1 are used for zonal deployment. What is the method for deploying a flexible scale set in a region that doesn't have any zones?
Deploying a flexible scale set in a region without zones is essentially the same as
deploying one with zones, except that you don't need to specify any zones for that
region. However, it's important to avoid creating a scale set with a
platformFaultDomainCount value greater than 1.

Which data disks can be used with VMs deployed with flexible scale set?
For new SAP deployment in flexible scale set with FD=1, VMs deployed within the scale
set can utilize any data disks that are listed as supported in this reference list. For more
information on migrating a deployment that involves pinned storage volumes (such as
ANF), see the Migration of SAP Workload FAQ section.

What are the limitations of assigning capacity reservation to VMs deployed in flexible scale set?
If you're deploying VMs in flexible scale set without a scaling profile for SAP workloads,
it's not possible to assign capacity reservation group at the scale set level. Attempting to
do so would result in deployment failure. Instead, you would need to enable capacity
reservation for each individual VM. For more information, see the limitations and
restrictions section as not all SKUs are currently supported for capacity reservation.

High Availability and Disaster Recovery of SAP workload

How can I use Azure Site Recovery for VMs deployed in flexible scale set for disaster recovery?
You can use PowerShell to set up Azure Site Recovery for disaster recovery of VMs that
are deployed in a flexible scale set. Currently, it's the only method available to configure
disaster recovery for VMs deployed in scale set.

I want to use Azure fence agent with managed-system identity (MSI). How could I enable managed system identity on the VMs deployed in flexible scale set without a scaling profile?
You can enable managed system identity at the VM level after a VM is manually
deployed in the scale set.
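For example, a minimal sketch with Azure CLI, using hypothetical names:

Azure CLI

# Enable a system-assigned managed identity on a cluster VM after it was
# manually deployed into the flexible scale set.
az vm identity assign --resource-group my-resource-group --name mySapAscsVm1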

Migration of SAP workload

How can I migrate my current Availability set or Availability zone deployment of SAP workload to flexible scale set with zonal deployment (FD=1)?
To migrate SAP VMs to a flexible scale set, you need to re-create the VMs and the disks with zone constraints (if necessary) from existing resources. There's no direct way to migrate SAP workloads deployed in availability sets or availability zones to a flexible scale set with FD=1. An open-source project includes PowerShell functions that you can use as a sample, and a blog post shows you how to modify a HA or non-HA SAP system deployed in an availability set or availability zone to a flexible scale set with FD=1.

How can an existing deployment of SAP HANA, which is pinned to Azure NetApp Files, be migrated to flexible scale set with FD=1?
To move an existing SAP HANA deployment that is currently pinned with Azure NetApp
Files to zonal deployment with flexible scale set (FD=1), you must redeploy or migrate
the SAP HANA VMs with flexible scale set (FD=1). Additionally, you would need to
configure Azure NetApp Files with the availability zones volume placement feature and
transfer data to new volumes using backup/restore.

Keep in mind that the availability zone volume placement feature is still in preview.
Therefore, it's essential to thoroughly review the documentation on managing
availability zone volume placement for Azure NetApp Files for additional consideration.

How to configure SAP HANA using Azure NetApp Files (ANF) Application Volume Groups (AVG) in a specific availability zone?
You can create new volumes in your preferred logical availability zone as described in
availability zones volume placement feature guide. For configuring AVG for SAP HANA,
follow the steps described in the article Configuring Azure NetApp Files (ANF)
Application Volume Group (AVG) for zonal SAP HANA deployment .
Implement the Azure VM extension for
SAP solutions
Article • 04/25/2023

7 Note

General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support.

When you've prepared the VM as described in Deployment scenarios of VMs for SAP on
Azure, the Azure VM Agent is installed on the virtual machine. The next step is to deploy
the Azure Extension for SAP, which is available in the Azure Extension Repository in the
global Azure datacenters.

To be sure SAP supports your environment, enable the Azure VM extension for SAP
solutions as described in Configure the Azure Extension for SAP.

SAP resources
When you are setting up your SAP software deployment, you need the following SAP
resources:

SAP Note 1928533 , which has:


List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure

SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.

SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 1409604 has the required SAP Host Agent version for Windows in
Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.

SAP Note 1999351 has additional troubleshooting information for the Azure
Extension for SAP.

SAP-specific PowerShell cmdlets that are part of Azure PowerShell.

SAP-specific Azure CLI commands that are part of Azure CLI.

Differences between the two versions of the Azure VM extension for SAP solutions
There are two versions of the VM Extension for SAP. Check the prerequisites for SAP and
required minimum versions of SAP Kernel and SAP Host Agent in the resources listed in
SAP resources.

Standard Version of VM Extension for SAP


This version is the current standard VM Extension for SAP. There are some exceptions where Microsoft recommends installing the new VM Extension for SAP. In addition, when opening a support case, SAP Support might request that you install the new VM Extension. For more details on when to use the new version of the VM Extension for SAP, see the chapter New Version of VM Extension for SAP

New Version of VM Extension for SAP


This version is the new Azure VM extension for SAP solutions. With further improvements and new Azure offerings, the new extension was built to be able to monitor all Azure resources of a virtual machine. This extension needs internet access to the URL "management.azure.com". It supports additional storage options, for example Standard Disks, and additional operating systems. Choose the new version of the VM Extension if one of the following applies:

You want to install the VM extension with Terraform, Azure Resource Manager
Templates or with other means than Azure CLI or Azure PowerShell
You want to install the extension on SUSE SLES 15 or higher.
You want to install the extension on Red Hat Enterprise Linux 8.1 or higher.
You want to use Azure Ultra Disk or Standard Managed Disks
Microsoft or SAP support asks you to install the new extension
Recommendation
We currently recommend using the standard version of the extension for each
installation where none of the use cases for the new version of the extension applies. We
are currently working on improving the new version of the VM extension to be able to
make it the default and deprecate the standard version of the extension. During this
time, you can use the new version. However, you need to make sure the VM Extension
can access management.azure.com.

7 Note

Make sure to uninstall the VM Extension before switching between the two
versions.

Next steps
Standard Version of Azure VM extension for SAP solutions
New Version of Azure VM extension for SAP solutions
Standard Version of Azure VM extension
for SAP solutions
Article • 04/13/2023

Prerequisites

7 Note

General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support

7 Note

Make sure to uninstall the VM extension before switching between the standard
and the new version of the Azure Extension for SAP.

7 Note

There are two versions of the VM extension. This article covers the standard version
of the Azure VM extension for SAP. For guidance on how to install the new version,
see New Version of Azure VM extension for SAP solutions.

Deploy Azure PowerShell cmdlets


Follow the steps described in the article Install the Azure PowerShell module

Check frequently for updates to the PowerShell cmdlets, which usually are updated
monthly. Follow the steps described in this article. Unless stated otherwise in SAP Note
1928533 or SAP Note 2015553 , we recommend that you work with the latest
version of Azure PowerShell cmdlets.

To check the version of the Azure PowerShell cmdlets that are installed on your
computer, run this PowerShell command:

PowerShell
(Get-Module Az.Compute).Version

Deploy Azure CLI


Follow the steps described in the article Install the Azure CLI

Check frequently for updates to Azure CLI, which usually is updated monthly.

To check the version of Azure CLI that is installed on your computer, run this command:

Console

az --version

Configure the Azure VM extension for SAP solutions with PowerShell
To install the Azure Extension for SAP by using PowerShell:

1. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet. For more information, see Deploying Azure PowerShell cmdlets
2. Run the following PowerShell cmdlet. For a list of available environments, run
cmdlet Get-AzEnvironment . If you want to use global Azure, your environment is
AzureCloud. For Azure China 21Vianet, select AzureChinaCloud.

PowerShell

$env = Get-AzEnvironment -Name <name of the environment>
Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>
Set-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

After you enter your account data, the script deploys the required extensions and
enables the required features. This can take several minutes. For more information
about Set-AzVMAEMExtension , see Set-AzVMAEMExtension.
The Set-AzVMAEMExtension configuration does all the steps to configure host data
collection for SAP.

The script output includes the following information:

Confirmation that data collection for the OS disk and all additional data disks has
been configured.
The next two messages confirm the configuration of Storage Metrics for a specific
storage account.
One line of output gives the status of the actual update of the VM Extension for
SAP configuration.
Another line of output confirms that the configuration has been deployed or
updated.
The last line of output is informational. It shows your options for testing the VM
Extension for SAP configuration.
To check that all steps of Azure VM Extension for SAP configuration have been
executed successfully, and that the Azure Infrastructure provides the necessary
data, proceed with the readiness check for the Azure Extension for SAP, as
described in Readiness check.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.

Configure the Azure VM extension for SAP solutions with Azure CLI
To install the Azure VM Extension for SAP by using Azure CLI:

1. Make sure that you have installed the latest version of the Azure CLI. For more
information, see Deploy Azure CLI

2. Sign in with your Azure account:

Azure CLI
az login

3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.

Azure CLI

az extension add --name aem

4. Enable the extension:

Azure CLI

az vm aem set -g <resource-group-name> -n <vm name>

5. Verify that the Azure Extension for SAP is active on the Azure Linux VM. Check
whether the file /var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a
command prompt, run this command to display information collected by the Azure
Extension for SAP:

Console

cat /var/lib/AzureEnhancedMonitor/PerfCounters

The output looks like this:

Output

...
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
...

Update the configuration of Azure extension


for SAP
Update the configuration of Azure Extension for SAP in any of the following scenarios:

The joint Microsoft/SAP team extends the capabilities of the VM extension and
requests more or fewer counters.
Microsoft introduces a new version of the underlying Azure infrastructure that
delivers the data, and the Azure Extension for SAP needs to be adapted to those
changes.
You mount additional data disks to your Azure VM or you remove a data disk. In
this scenario, update the collection of storage-related data. Changing your
configuration by adding or deleting endpoints or by assigning IP addresses to a
VM does not affect the extension configuration.
You change the size of your Azure VM, for example, from size A5 to any other VM
size.
You add new network interfaces to your Azure VM.

To update settings, update configuration of Azure Extension for SAP by following the
steps in Configure the Azure VM extension for SAP solutions with Azure CLI or
Configure the Azure VM extension for SAP solutions with PowerShell.

Checks and troubleshooting


After you have deployed your Azure VM and set up the relevant Azure Extension for
SAP, check whether all the components of the extension are working as expected.

Run the readiness check for the Azure Extension for SAP as described in Readiness
check. If all readiness check results are positive and all relevant performance counters
appear OK, Azure Extension for SAP has been set up successfully. You can proceed with
the installation of SAP Host Agent as described in the SAP Notes in SAP resources. If the
readiness check indicates that counters are missing, run the health check for the Azure
Extension for SAP, as described in Health check for the Azure Extension for SAP
configuration. For more troubleshooting options, see Troubleshooting for Windows or
Troubleshooting for Linux.

Readiness check
This check makes sure that all performance metrics that appear inside your SAP
application are provided by the underlying Azure Extension for SAP.

Run the readiness check on a Windows VM


1. Sign in to the Azure virtual machine (using an admin account is not necessary).

2. Open a Command Prompt window.

3. At the command prompt, change the directory to the installation folder of the
Azure Extension for SAP:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop

The version in the path to the extension might vary. If you see folders for multiple
versions of the extension in the installation folder, check the configuration of the
AzureEnhancedMonitoring Windows service, and then switch to the folder
indicated as Path to executable.

4. At the command prompt, run azperflib.exe without any parameters.

7 Note

Azperflib.exe runs in a loop and updates the collected counters every 60 seconds. To end the loop, close the Command Prompt window.

If the Azure Extension for SAP is not installed, or the AzureEnhancedMonitoring service
is not running, the extension has not been configured correctly. For detailed information
about how to troubleshoot the extension, see Troubleshooting for Windows or
Troubleshooting for Linux.

7 Note

Azperflib.exe is a component that can't be used for your own purposes. It's a component that delivers Azure infrastructure data related to the VM for the SAP Host Agent exclusively.
Check the output of azperflib.exe

Azperflib.exe output shows all populated Azure performance counters for SAP. At the
bottom of the list of collected counters, a summary and health indicator show the status
of Azure Extension for SAP.

Check the result returned for the Counters total output, which is reported as empty, and
for Health status, shown in the preceding figure.

Interpret the resulting values as follows:

Azperflib.exe result values | Azure Extension for SAP health status
--- | ---
API Calls - not available | Counters that are not available might be either not applicable to the virtual machine configuration, or are errors. See Health status.
Counters total - empty | The following two Azure storage counters can be empty: Storage Read Op Latency Server msec and Storage Read Op Latency E2E msec. All other counters must have values.
Health status | Only OK if return status shows OK.
Diagnostics | Detailed information about health status.


If the Health status value is not OK, follow the instructions in Health check for the Azure
Extension for SAP configuration.

Run the readiness check on a Linux VM

1. Connect to the Azure Virtual Machine by using SSH.

2. Check the output of the Azure Extension for SAP.

a. Run more /var/lib/AzureEnhancedMonitor/PerfCounters

Expected result: Returns list of performance counters. The file should not be
empty.

b. Run cat /var/lib/AzureEnhancedMonitor/PerfCounters | grep Error

Expected result: Returns one line where the error is none, for example,
3;config;Error;;0;0;none;0;1456416792;tst-servercs;

c. Run more /var/lib/AzureEnhancedMonitor/LatestErrorRecord

Expected result: Returns as empty or does not exist.

If the preceding check was not successful, run these additional checks:

1. Make sure that the waagent is installed and enabled.

a. Run sudo ls -al /var/lib/waagent/

Expected result: Lists the content of the waagent directory.

b. Run ps -ax | grep waagent

Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon

2. Make sure that the Azure Extension for SAP is installed and running.

a. Run sudo sh -c 'ls -al /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*/'

Expected result: Lists the content of the Azure Extension for SAP directory.

b. Run ps -ax | grep AzureEnhanced

Expected result: Displays one entry similar to: python /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-2.0.0.2/handler.py daemon

3. Install SAP Host Agent as described in SAP Note 1031096 , and check the output
of saposcol .

a. Run /usr/sap/hostctrl/exe/saposcol -d

b. Run dump ccm

c. Check whether the Virtualization_Configuration\Enhanced Monitoring Access metric is true.

If you already have an SAP NetWeaver ABAP application server installed, open
transaction ST06 and check whether monitoring is enabled.

If any of these checks fail, and for detailed information about how to redeploy the
extension, see Troubleshooting for Linux or Troubleshooting for Windows.

Health checks
If some of the infrastructure data is not delivered correctly as indicated by the tests
described in Readiness check, run the health checks described in this chapter to check
whether the Azure infrastructure and the Azure Extension for SAP are configured
correctly.

Health checks using PowerShell


1. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet, as described in Deploying Azure PowerShell cmdlets.

2. Run the following PowerShell cmdlet. For a list of available environments, run the
cmdlet Get-AzEnvironment . To use global Azure, select the AzureCloud
environment. For Azure China 21Vianet, select AzureChinaCloud.

PowerShell

$env = Get-AzEnvironment -Name <name of the environment>
Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>
Test-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName <virtual machine name>

3. The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Wait 15 minutes, and repeat the checks described in Readiness check and this chapter. If
the checks still indicate a problem with some or all counters, see Troubleshooting for
Linux or Troubleshooting for Windows.

7 Note

You can experience some warnings in cases where you use Standard Managed Azure Disks. Warnings are displayed instead of the tests returning "OK". This is normal and intended for that disk type. See also Troubleshooting for Linux or Troubleshooting for Windows.

Health checks using Azure CLI


To run the health check for the Azure VM Extension for SAP by using Azure CLI:

1. Install Azure CLI 2.0. Ensure that you use at least version 2.19.1 or later (use the
latest version).

2. Sign in with your Azure account:

Azure CLI

az login

3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.

Azure CLI
az extension add --name aem

4. Verify the installation of the extension:

Azure CLI

az vm aem verify -g <resource-group-name> -n <vm name>

The script tests the configuration of the virtual machine you select.

Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Wait 15 minutes, and repeat the checks described in Readiness check and this chapter. If
the checks still indicate a problem with some or all counters, see Troubleshooting for
Linux or Troubleshooting for Windows.

Troubleshooting for Windows

Azure performance counters do not show up at all


The AzureEnhancedMonitoring Windows service collects performance metrics in Azure.
If the service has not been installed correctly or if it is not running in your VM, no
performance metrics can be collected.

The installation directory of the Azure Extension for SAP is empty

Issue

The installation directory C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop is empty.

Solution

The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine or rerun the Set-AzVMAEMExtension
configuration script.
Service for Azure Extension for SAP does not exist

Issue

The AzureEnhancedMonitoring Windows service does not exist, or running azperflib.exe
throws an error.

Solution

If the service does not exist, the Azure Extension for SAP has not been installed correctly.
Redeploy the extension as described in Configure the Azure VM extension for SAP
solutions with Azure CLI or Configure the Azure VM extension for SAP solutions with
PowerShell.

After you deployed the extension, check again whether the Azure performance counters
are provided in the Azure VM.

Service for Azure Extension for SAP exists, but fails to start

Issue

The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start.
For more information, check the application event log.

Solution

The configuration is incorrect. Restart the Azure Extension for SAP in the VM, as
described in Configure the Azure Extension for SAP.
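To confirm the service state and inspect the most recent errors, you can run a check like the following (a minimal sketch; the service name AzureEnhancedMonitoring is taken from this section, and Get-EventLog is available in Windows PowerShell):

PowerShell

# Show the state and startup type of the monitoring service.
Get-Service -Name "AzureEnhancedMonitoring" | Format-List Status, StartType

# List the 20 most recent application-log errors to look for entries related to the extension.
Get-EventLog -LogName Application -EntryType Error -Newest 20 |
    Format-Table TimeGenerated, Source, Message -AutoSize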

Some Azure performance counters are missing


The AzureEnhancedMonitoring Windows service collects performance metrics in Azure.
The service gets data from several sources. Some configuration data is collected locally,
and some performance metrics are read from Azure Diagnostics. Storage counters come
from the logging that is enabled at the storage subscription level.
If troubleshooting by using SAP Note 1999351 doesn't resolve the issue, rerun the
Set-AzVMAEMExtension configuration script. You might have to wait an hour because
storage analytics or diagnostics counters might not be created immediately after they
are enabled. If the problem persists, open an SAP customer support message on the
component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.

Troubleshooting for Linux

Azure performance counters do not show up at all


Performance metrics in Azure are collected by a daemon. If the daemon is not running,
no performance metrics can be collected.

The installation directory of the Azure Extension for SAP is empty

Issue

The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for
SAP.

Solution

The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine and/or rerun the Set-AzVMAEMExtension
configuration script.

The execution of Set-AzVMAEMExtension and Test-AzVMAEMExtension shows warning messages stating that Standard Managed Disks are not supported

Issue

When executing Set-AzVMAEMExtension or Test-AzVMAEMExtension, messages like these are shown:

WARNING: [WARN] Standard Managed Disks are not supported. Extension will be
installed but no disk metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be
installed but no disk metrics will be available.
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be
installed but no disk metrics will be available.

Running azperflib.exe as described earlier can return a result that indicates a non-healthy
state.

Solution

The messages appear because Standard Managed Disks don't deliver the APIs that the
Azure Extension for SAP uses to check the statistics of Standard Azure storage accounts.
This is not a matter of concern. Data collection for Standard disk storage accounts was
originally introduced because of I/O throttling that occurred frequently at the
storage-account level. Managed disks avoid such throttling by limiting the number of
disks in a storage account. Therefore, not having that type of data is not critical.

Some Azure performance counters are missing


Performance metrics in Azure are collected by a daemon, which gets data from several
sources. Some configuration data is collected locally, and some performance metrics are
read from Azure Diagnostics. Storage counters come from the logs in your storage
subscription.

For a complete and up-to-date list of known issues, see SAP Note 1999351 , which has
additional troubleshooting information for Azure Extension for SAP.

If troubleshooting by using SAP Note 1999351 does not resolve the issue, rerun the
Set-AzVMAEMExtension configuration script as described in Configure the Azure VM
extension for SAP solutions with Azure CLI or Configure the Azure VM extension for SAP
solutions with PowerShell. You might have to wait for an hour because storage analytics
or diagnostics counters might not be created immediately after they are enabled. If the
problem persists, open an SAP customer support message on the component BC-OP-
NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.

Azure extension error codes


| Error ID | Error description | Solution |
| --- | --- | --- |
| cfg/018 | App configuration is missing. | Run setup script |
| cfg/019 | No deployment ID in app config. | Contact support |
| cfg/020 | No RoleInstanceId in app config. | Contact support |
| cfg/022 | No RoleInstanceId in app config. | Contact support |
| cfg/031 | Cannot read Azure configuration. | Contact support |
| cfg/021 | App configuration file is missing. | Run setup script |
| cfg/015 | No VM size in app config. | Run setup script |
| cfg/016 | GlobalMemoryStatusEx counter failed. | Contact support |
| cfg/023 | MaxHwFrequency counter failed. | Contact support |
| cfg/024 | NIC counters failed. | Contact support |
| cfg/025 | Disk mapping counter failed. | Contact support |
| cfg/026 | Processor name counter failed. | Contact support |
| cfg/027 | Disk mapping counter failed. | Contact support |
| cfg/038 | The metric 'Disk type' is missing in the extension configuration file config.xml. 'Disk type' along with some other counters was introduced in v2.2.0.68 12/16/2015. If you deployed the extension prior to 12/16/2015, it uses the old configuration file. The Azure extension framework automatically upgrades the extension to a newer version, but the config.xml remains unchanged. To update the configuration, download and execute the latest PowerShell setup script. | Run setup script |
| cfg/039 | No disk caching. | Run setup script |
| cfg/036 | No disk SLA throughput. | Run setup script |
| cfg/037 | No disk SLA IOPS. | Run setup script |
| cfg/028 | Disk mapping counter failed. | Contact support |
| cfg/029 | Last hardware change counter failed. | Contact support |
| cfg/030 | NIC counters failed. | Contact support |
| cfg/017 | Due to sysprep of the VM, your Windows SID has changed. | Redeploy after sysprep |
| str/007 | Access to the storage analytics failed. Because population of storage analytics data on a newly created VM may need up to half an hour, the error might disappear after some time. If the error still appears, re-run the setup script. | Run setup script |
| str/010 | No Storage Analytics counters. | Run setup script |
| str/009 | Storage Analytics failed. | Run setup script |
| wad/004 | Bad WAD configuration. | Run setup script |
| wad/002 | Unexpected WAD format. | Contact support |
| wad/001 | No WAD counters found. | Run setup script |
| wad/040 | Stale WAD counters found. | Contact support |
| wad/003 | Cannot read the WAD table. There is no connection to the WAD table. There can be several causes: 1) outdated configuration, 2) no network connection to Azure, 3) issues with the WAD setup. | Run setup script; fix internet connection; contact support |
| prf/011 | Perfmon NIC metrics failed. | Contact support |
| prf/012 | Perfmon disk metrics failed. | Contact support |
| prf/013 | Some perfmon metrics failed. | Contact support |
| prf/014 | Perfmon failed to create a counter. | Contact support |
| cfg/035 | No metric providers configured. | Contact support |
| str/006 | Bad Storage Analytics config. | Run setup script |
| str/032 | Storage Analytics metrics failed. | Run setup script |
| cfg/033 | One of the metric providers failed. | Run setup script |
| str/034 | Provider thread failed. | Contact support |
Detailed guidelines on solutions provided

Run the setup script


Follow the steps in chapter Configure the Azure Extension for SAP in this guide to install
the extension again. Note that some counters might need up to 30 minutes for
provisioning.

If the errors do not disappear, contact support.

Contact support
Unexpected error, or there is no known solution. Collect the
AzureEnhancedMonitoring_service.log file located in the folder
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop (Windows) or
/var/log/azure/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux (Linux), and
contact SAP support for further assistance.
Redeploy after sysprep
If you plan to build a generalized sysprepped OS image (which can include SAP
software), it is recommended that this image does not include the Azure extension for
SAP. You should install the Azure extension for SAP after the new instance of the
generalized OS image has been deployed.

However, if your generalized and sysprepped OS image already contains the Azure
Extension for SAP, you can apply the following workaround to reconfigure the extension
on the newly deployed VM instance:

On the newly deployed VM instance, delete the content of the following folders (as
shown in the sketch after these steps):
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\RuntimeSettings
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\Status

Follow the steps in chapter Configure the Azure Extension for SAP in this guide to
install the extension again.
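A minimal PowerShell sketch of the cleanup step above; replace <version> with the installed extension version before you run it:

PowerShell

# Remove the runtime settings and status files so the extension reconfigures itself
# after the next configuration run.
$base = "C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>"
Remove-Item -Path "$base\RuntimeSettings\*" -Recurse -Force
Remove-Item -Path "$base\Status\*" -Recurse -Force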

Fix internet connection


The Microsoft Azure Virtual Machine running the Azure extension for SAP requires
access to the Internet. If this Azure VM is part of an Azure Virtual Network or of an on-
premises domain, make sure that the relevant proxy settings are set. These settings
must also be valid for the LocalSystem account to access the Internet. Follow chapter
Configure the proxy in this guide.
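To quickly inspect the machine-wide WinHTTP proxy that services running as LocalSystem use on Windows, you can run the following check (the proxy configuration itself is covered in the chapter referenced above):

PowerShell

# Show the proxy that WinHTTP-based services, including LocalSystem processes, use.
netsh winhttp show proxy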

In addition, if you need to set a static IP address for your Azure VM, do not set it
manually inside the Azure VM. Instead, set it by using Azure PowerShell, the Azure CLI, or
the Azure portal. The static IP is propagated via the Azure DHCP service.

Manually setting a static IP address inside the Azure VM is not supported, and might
lead to problems with the Azure extension for SAP.

Next steps
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines planning and implementation for SAP NetWeaver
New Version of Azure VM extension for
SAP solutions
Article • 03/14/2023

Prerequisites

7 Note

General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support.

7 Note

Make sure to uninstall the VM extension before switching between the standard
and the new version of the Azure Extension for SAP.

7 Note

There are two versions of the VM extension. This article covers the new version of
the Azure VM extension for SAP. For guidance on how to install the standard
version, see Standard Version of Azure VM extension for SAP solutions.

Make sure to use SAP Host Agent 7.21 PL 47 or higher.


Make sure the virtual machine on which the extension is enabled has access to
management.azure.com.
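For example, you can verify that connectivity from a Windows VM with a quick test like the following (a minimal sketch; on a Linux VM you could run curl https://management.azure.com instead):

PowerShell

# Verify that the VM can reach the Azure Resource Manager endpoint over HTTPS.
Test-NetConnection -ComputerName management.azure.com -Port 443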

Deploy Azure PowerShell cmdlets


Follow the steps described in the article Install the Azure PowerShell module

Check frequently for updates to the PowerShell cmdlets, which usually are updated
monthly. Follow the steps described in this article. Unless stated otherwise in SAP Note
1928533 or SAP Note 2015553 , we recommend that you work with the latest
version of Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your
computer, run this PowerShell command:

PowerShell

(Get-Module Az.Compute).Version

Deploy Azure CLI


Follow the steps described in the article Install the Azure CLI

Check frequently for updates to Azure CLI, which usually is updated monthly.

To check the version of Azure CLI that is installed on your computer, run this command:

Console

az --version

Configure the Azure VM extension for SAP solutions with PowerShell
The new VM Extension for SAP uses a managed identity that's assigned to the VM to
access monitoring and configuration data of the VM. To install the new Azure Extension
for SAP by using PowerShell, you first have to assign such an identity to the VM and
grant that identity access to all resources that are in use by that VM, for example, disks
and network interfaces.

7 Note

The following steps require Owner privileges over the resource group or individual
resources (virtual machine, data disks, and network interfaces)

1. Make sure to use SAP Host Agent 7.21 PL 47 or higher.

2. Make sure to uninstall the standard version of the VM Extension for SAP. It is not
supported to install both versions of the VM Extension for SAP on the same virtual
machine.

3. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet (at least 4.3.0). For more information, see Deploying Azure PowerShell
cmdlets.

4. Run the following PowerShell cmdlet. For a list of available environments, run
cmdlet Get-AzEnvironment . If you want to use global Azure, your environment is
AzureCloud. For Azure China 21Vianet, select AzureChinaCloud.

The VM Extension for SAP supports configuring a proxy that the extension should
use to connect to external resources, for example the Azure Resource Manager API.
Please use parameter -ProxyURI to set the proxy.

PowerShell

$env = Get-AzEnvironment -Name <name of the environment>


Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>

Set-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName


<virtual machine name> -InstallNewExtension

5. Restart SAP Host Agent

Log on to the virtual machine on which you enabled the VM Extension for SAP and
restart the SAP Host Agent if it was already installed. SAP Host Agent does not use
the VM Extension until it is restarted. It currently cannot detect that an extension
was installed after it was started.
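For example, SAP Host Agent can be restarted with its saphostexec executable (a sketch assuming the default installation paths; adjust the path to your installation):

PowerShell

# Windows, default installation path; on Linux, run: sudo /usr/sap/hostctrl/exe/saphostexec -restart
& "$env:ProgramFiles\SAP\hostctrl\exe\saphostexec.exe" -restart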

Configure the Azure VM extension for SAP solutions with Azure CLI
The new VM Extension for SAP uses a managed identity that is assigned to the VM to
access monitoring and configuration data of the VM.

7 Note

The following steps require Owner privileges over the resource group or individual
resources (virtual machine, data disks, and so on)

1. Ensure that you use SAP Host Agent 7.21 PL 47 or later.

2. Ensure that you uninstall the current version of the VM Extension for SAP. You can't
install both versions of the VM Extension for SAP on the same VM.

3. Install the latest version of Azure CLI 2.0 (version 2.19.1 or later).
4. Sign in with your Azure account:

Azure CLI

az login

5. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.

Azure CLI

az extension add --name aem

6. Enable the new extension:

The VM Extension for SAP supports configuring a proxy that the extension should
use to connect to external resources, for example the Azure Resource Manager API.
Please use parameter --proxy-uri to set the proxy.

Azure CLI

az vm aem set -g <resource-group-name> -n <vm name> --install-new-


extension

7. Restart SAP Host Agent

Log on to the virtual machine on which you enabled the VM Extension for SAP and
restart the SAP Host Agent if it was already installed. SAP Host Agent does not use
the VM Extension until it is restarted. It currently cannot detect that an extension
was installed after it was started.

Manually configure the Azure VM extension for SAP solutions
If you want to use Azure Resource Manager, Terraform, or other tools to deploy the VM
Extension for SAP, you can also deploy it manually, that is, without using the dedicated
PowerShell or Azure CLI commands.

Before deploying the VM Extension for SAP, please make sure to assign a user or system
assigned managed identity to the virtual machine. For more information, read the
following guides:

Configure managed identities for Azure resources on a VM using the Azure portal
Configure managed identities for Azure resources on an Azure VM using Azure CLI
Configure managed identities for Azure resources on an Azure VM using
PowerShell
Configure managed identities for Azure resources on an Azure VM using templates
Terraform VM Identity

After assigning an identity to the virtual machine, give the VM read access to either the
resource group or the individual resources associated to the virtual machine (VM,
Network Interfaces, OS Disks and Data Disks). It is recommended to use the built-in
Reader role to grant the access to these resources. You can also grant this access by
adding the VM identity to an Azure Active Directory group that already has read access
to the required resources. It is then no longer needed to have Owner privileges when
deploying the VM Extension for SAP if you use a user assigned identity that already has
the required permissions.
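As an illustration, the following PowerShell sketch enables a system-assigned identity on a VM and grants it the built-in Reader role on the resource group; all names are placeholders:

PowerShell

# Enable a system-assigned managed identity on the VM.
$vm = Get-AzVM -ResourceGroupName "<rg name>" -Name "<vm name>"
Update-AzVM -ResourceGroupName "<rg name>" -VM $vm -IdentityType SystemAssigned

# Re-read the VM and grant its identity read access to the resource group.
$vm = Get-AzVM -ResourceGroupName "<rg name>" -Name "<vm name>"
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName "Reader" -ResourceGroupName "<rg name>"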

There are different ways to deploy the VM Extension for SAP manually. You can find
a few examples in the next chapters.

The extension currently supports the following configuration keys. In the example
below, the msi_res_id is shown.

msi_res_id: ID of the user assigned identity the extension should use to get the
required information about the VM and its resources
proxy: URL of the proxy the extension should use to connect to the internet, for
example to retrieve information about the virtual machine and its resources.

Deploy manually with Azure PowerShell


The following code contains four examples. It shows how to deploy the extension on
Windows and Linux, using a system or user assigned identity. Make sure to replace the
name of the resource group, the location and VM name in the example.

PowerShell

# Windows VM - user assigned identity


Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -
ExtensionType "MonitorX64Windows" -ResourceGroupName "<rg name>" -VMName "
<vm name>" `
-Name "MonitorX64Windows" -TypeHandlerVersion "1.0" -Location "
<location>" -SettingString '{"cfg":[{"key":"msi_res_id","value":"<user
assigned resource id>"}]}'

# Windows VM - system assigned identity


Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -
ExtensionType "MonitorX64Windows" -ResourceGroupName "<rg name>" -VMName "
<vm name>" `
-Name "MonitorX64Windows" -TypeHandlerVersion "1.0" -Location "
<location>" -SettingString '{"cfg":[]}'

# Linux VM - user assigned identity


Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -
ExtensionType "MonitorX64Linux" -ResourceGroupName "<rg name>" -VMName "<vm
name>" `
-Name "MonitorX64Linux" -TypeHandlerVersion "1.0" -Location "<location>"
-SettingString '{"cfg":[{"key":"msi_res_id","value":"<user assigned resource
id>"}]}'

# Linux VM - system assigned identity


Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -
ExtensionType "MonitorX64Linux" -ResourceGroupName "<rg name>" -VMName "<vm
name>" `
-Name "MonitorX64Linux" -TypeHandlerVersion "1.0" -Location "<location>"
-SettingString '{"cfg":[]}'

Deploy manually with Azure CLI


The following code contains four examples. It shows how to deploy the extension on
Windows and Linux, using a system or user assigned identity. Make sure to replace the
name of the resource group, the location and VM name in the example.

Bash

# Windows VM - user assigned identity


az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring"
--name "MonitorX64Windows" --resource-group "<rg name>" --vm-name "<vm
name>" \
--extension-instance-name "MonitorX64Windows" --settings '{"cfg":
[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'

# Windows VM - system assigned identity


az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring"
--name "MonitorX64Windows" --resource-group "<rg name>" --vm-name "<vm
name>" \
--extension-instance-name "MonitorX64Windows" --settings '{"cfg":[]}'

# Linux VM - user assigned identity


az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring"
--name "MonitorX64Linux" --resource-group "<rg name>" --vm-name "<vm name>"
\
--extension-instance-name "MonitorX64Linux" --settings '{"cfg":
[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'

# Linux VM - system assigned identity


az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring"
--name "MonitorX64Linux" --resource-group "<rg name>" --vm-name "<vm name>"
\
--extension-instance-name "MonitorX64Linux" --settings '{"cfg":[]}'
Deploy manually with Terraform
The following manifest contains four examples. It shows how to deploy the extension on
Windows and Linux, using a system or user assigned identity. Make sure to replace the
ID of the VM and ID of the user assigned identity in the example.

Terraform

# Windows VM - user assigned identity

resource "azurerm_virtual_machine_extension" "example" {


name = "MonitorX64Windows"
virtual_machine_id = "<vm id>"
publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
type = "MonitorX64Windows"
type_handler_version = "1.0"
auto_upgrade_minor_version = true

settings = <<SETTINGS
{
"cfg":[
{
"key":"msi_res_id",
"value":"<user assigned resource id>"
}
]
}
SETTINGS
}

# Windows VM - system assigned identity

resource "azurerm_virtual_machine_extension" "example" {


name = "MonitorX64Windows"
virtual_machine_id = "<vm id>"
publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
type = "MonitorX64Windows"
type_handler_version = "1.0"
auto_upgrade_minor_version = true

settings = <<SETTINGS
{
"cfg":[
]
}
SETTINGS
}

# Linux VM - user assigned identity

resource "azurerm_virtual_machine_extension" "example" {


name = "MonitorX64Linux"
virtual_machine_id = "<vm id>"
publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
type = "MonitorX64Linux"
type_handler_version = "1.0"
auto_upgrade_minor_version = true

settings = <<SETTINGS
{
"cfg":[
{
"key":"msi_res_id",
"value":"<user assigned resource id>"
}
]
}
SETTINGS
}

# Linux VM - system assigned identity

resource "azurerm_virtual_machine_extension" "example" {


name = "MonitorX64Linux"
virtual_machine_id = "<vm id>"
publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
type = "MonitorX64Linux"
type_handler_version = "1.0"
auto_upgrade_minor_version = true

settings = <<SETTINGS
{
"cfg":[
]
}
SETTINGS
}

Versions of the VM Extension for SAP


If you want to disable automatic updates for the VM extension or want to deploy a
specific version of the extension, you can retrieve the available versions with Azure CLI
or Azure PowerShell.

Azure PowerShell

PowerShell

# Windows
Get-AzVMExtensionImage -Location westeurope -PublisherName
Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Windows
# Linux
Get-AzVMExtensionImage -Location westeurope -PublisherName
Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Linux

Azure CLI

Azure CLI

# Windows
az vm extension image list --location westeurope --publisher
Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows
# Linux
az vm extension image list --location westeurope --publisher
Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux
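If you want to pin one of the returned versions and turn off automatic minor-version upgrades, you can pass the version explicitly when you deploy the extension. The following sketch is based on the Windows PowerShell example from the manual deployment chapter; <major.minor> must be a version returned by the commands above:

PowerShell

# Deploy a pinned extension version and disable automatic minor-version upgrades.
Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" `
    -ExtensionType "MonitorX64Windows" -ResourceGroupName "<rg name>" -VMName "<vm name>" `
    -Name "MonitorX64Windows" -TypeHandlerVersion "<major.minor>" -Location "<location>" `
    -DisableAutoUpgradeMinorVersion -SettingString '{"cfg":[]}'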

Readiness check
This check makes sure that all performance metrics that appear inside your SAP
application are provided by the underlying Azure Extension for SAP.

Run the readiness check on a Windows VM


1. Sign in to the Azure virtual machine (using an admin account is not necessary).
2. Open a web browser and navigate to http://127.0.0.1:11812/azure4sap/metrics .
3. The browser should display or download an XML file that contains the monitoring
data of your virtual machine. If that is not the case, make sure that the Azure
Extension for SAP is installed.
4. Check the content of the XML file. The XML file that you can access at
http://127.0.0.1:11812/azure4sap/metrics contains all populated Azure
performance counters for SAP. It also contains a summary and health indicator of
the status of Azure Extension for SAP.
5. Check the value of the Provider Health Description element. If the value is not OK,
follow the instructions in chapter Health checks. A scripted version of this check is
sketched after this list.
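The following PowerShell sketch automates steps 2 through 5: it downloads the metrics XML and prints any health-related elements. The element filter is an assumption based on the description above; inspect the raw XML if it finds nothing:

PowerShell

# Fetch the metrics document from the local extension endpoint.
$response = Invoke-WebRequest -Uri "http://127.0.0.1:11812/azure4sap/metrics" -UseBasicParsing
[xml]$doc = $response.Content

# Print every element whose name mentions "Health", such as the health indicator.
$doc.SelectNodes("//*") |
    Where-Object { $_.Name -like "*Health*" } |
    ForEach-Object { "{0}: {1}" -f $_.Name, $_.InnerText }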

Run the readiness check on a Linux VM


1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the following command:

Bash

curl http://127.0.0.1:11812/azure4sap/metrics
Expected result: Returns an XML document that contains the monitoring
information of the virtual machine, its disks and network interfaces.

If the preceding check was not successful, run these additional checks:

1. Make sure that the waagent is installed and enabled.

a. Run sudo ls -al /var/lib/waagent/

Expected result: Lists the content of the waagent directory.

b. Run ps -ax | grep waagent

Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon

2. Make sure that the Azure Extension for SAP is installed and running.

a. Run sudo sh -c 'ls -al /var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-*/'

Expected result: Lists the content of the Azure Extension for SAP directory.

b. Run ps -ax | grep AzureEnhanced

Expected result: Displays one entry similar to:
/var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-1.0.0.82/AzureEnhancedMonitoring -monitor

3. Install SAP Host Agent as described in SAP Note 1031096 , and check the output
of saposcol .

a. Run /usr/sap/hostctrl/exe/saposcol -d

b. Run dump ccm

c. Check whether the Virtualization_Configuration\Enhanced Monitoring Access metric is true.

If you already have an SAP NetWeaver ABAP application server installed, open
transaction ST06 and check whether monitoring is enabled.

If any of these checks fail, or if you need detailed information about how to redeploy the
extension, see Troubleshooting for Windows or Troubleshooting for Linux.

Health checks
If some of the infrastructure data is not delivered correctly as indicated by the tests
described in Readiness check, run the health checks described in this chapter to check
whether the Azure infrastructure and the Azure Extension for SAP are configured
correctly.

Health checks using PowerShell


1. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet, as described in Deploying Azure PowerShell cmdlets.

2. Run the following PowerShell cmdlet. For a list of available environments, run the
cmdlet Get-AzEnvironment . To use global Azure, select the AzureCloud
environment. For Azure China 21Vianet, select AzureChinaCloud.

PowerShell

$env = Get-AzEnvironment -Name <name of the environment>


Connect-AzAccount -Environment $env
Set-AzContext -SubscriptionName <subscription name>
Test-AzVMAEMExtension -ResourceGroupName <resource group name> -VMName
<virtual machine name>

3. The script tests the configuration of the virtual machine you selected.

Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Repeat the checks described in Readiness check and this chapter. If the checks still
indicate a problem with some or all counters, see Troubleshooting for Linux or
Troubleshooting for Windows.

Health checks using Azure CLI


To run the health check for the Azure VM Extension for SAP by using Azure CLI:

1. Install Azure CLI 2.0. Ensure that you use at least version 2.19.1 or later (use the
latest version).

2. Sign in with your Azure account:

Azure CLI
az login

3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.

Azure CLI

az extension add --name aem

4. Verify the installation of the extension:

Azure CLI

az vm aem verify -g <resource-group-name> -n <vm name>

The script tests the configuration of the virtual machine you select.

Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Repeat the checks described in Readiness check and this chapter. If the checks still
indicate a problem with some or all counters, see Troubleshooting for Linux or
Troubleshooting for Windows.

Troubleshooting for Windows

Azure performance counters do not show up at all


The AzureEnhancedMonitoring process collects performance metrics in Azure. If the
process is not running in your VM, no performance metrics can be collected.

The installation directory of the Azure Extension for SAP is empty

Issue

The installation directory
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\<version> is empty.

Solution
The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine or install the VM extension again.

Some Azure performance counters are missing


The AzureEnhancedMonitoring Windows process collects performance metrics in Azure.
The process gets data from several sources. Some configuration data is collected locally,
and some performance metrics are read from Azure Monitor.

If troubleshooting by using SAP Note 1999351 does not resolve the issue, open an
SAP customer support message on the component BC-OP-NT-AZR for Windows or BC-
OP-LNX-AZR for a Linux virtual machine. Please attach the log file
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\<version>\logapp.txt to the incident.

Troubleshooting for Linux

Azure performance counters do not show up at all


Performance metrics in Azure are collected by a daemon. If the daemon is not running,
no performance metrics can be collected.

The installation directory of the Azure Extension for SAP is empty

Issue

The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for
SAP.

Solution

The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine and/or install the VM extension again.

Some Azure performance counters are missing


Performance metrics in Azure are collected by a daemon, which gets data from several
sources. Some configuration data is collected locally, and some performance metrics are
read from Azure Monitor. For a complete and up-to-date list of known issues, see SAP
Note 1999351 , which has additional troubleshooting information for Azure Extension
for SAP. If troubleshooting by using SAP Note 1999351 does not resolve the issue,
install the extension again as described in Configure the Azure Extension for SAP. If the
problem persists, open an SAP customer support message on the component BC-OP-
NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine. Please attach the
log file
/var/lib/waagent/Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Linux-
<version>/logapp.txt to the incident.

Azure extension error codes


All error IDs have a unique tag in the form of a-#, where # is a number. It allows a fast
search for a specific error and possible solutions.

| Error ID | Error description | Solution |
| --- | --- | --- |
| a-0116 | No auth token. | More info: The extension cannot obtain an authentication token to access the VM metrics in Azure Monitor. To deliver VM metrics, it needs access to VM resources like the VM itself and all disks and NICs attached to the VM.<br>Solution: Enable the VM managed identity and give it a Reader role for the VM resource group. When you use a setup script, the script does it for you. Normally you don't need to enable and assign the VM managed identity manually. |

Next steps
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines planning and implementation for SAP NetWeaver
SAP BusinessObjects BI platform
planning and implementation guide on
Azure
Article • 06/16/2023

The purpose of this guide is to provide guidelines for planning, deploying, and
configuring SAP BusinessObjects BI Platform, also known as SAP BOBI Platform on
Azure. This guide is intended to cover common Azure services and features that are
relevant for SAP BOBI Platform. This guide isn't an exhaustive list of all possible
configuration options. It covers solutions common to typical deployment scenarios.

This guide isn't intended to replace the standard SAP BOBI Platform installation and
administration guides, operating system, or any database documentation.

Plan and implement SAP BusinessObjects BI


platform on Azure
Microsoft Azure offers a wide range of services including compute, storage, networking,
and many others for businesses to build their applications without lengthy procurement
cycles. Azure virtual machines (VM) help companies to deploy on-demand and scalable
computing resources for different SAP applications like SAP NetWeaver based
applications, SAP Hybris, SAP BusinessObjects BI Platform, based on their business need.
Azure also supports the cross-premises connectivity, which enables companies to
integrate Azure virtual machines into their on-premises domains, their private clouds
and their SAP system landscape.

This document provides guidance on planning and implementation considerations for
SAP BusinessObjects BI Platform on Azure. It complements the SAP installation
documentation and SAP Notes, which represent the primary resources for installations
and deployments of SAP BOBI.

Architecture overview
SAP BusinessObjects BI Platform is a self-contained system that can exist on a single
Azure virtual machine or can be scaled into a cluster of many Azure Virtual Machines
that run different components. SAP BOBI Platform consists of six conceptual tiers: Client
Tier, Web Tier, Management Tier, Storage Tier, Processing Tier, and Data Tier. (For more
details on each tier, refer Administrator Guide in SAP BusinessObjects Business
Intelligence Platform help portal). Following is the high-level details on each tier:

Client Tier: It contains all desktop client applications that interact with the BI
platform to provide different kind of reporting, analytic, and administrative
capabilities.
Web Tier: It contains web applications deployed to Java web application servers.
Web applications provide BI Platform functionality to end users through a web
browser.
Management Tier: It coordinates and controls all the components that make up the
BI Platform. It includes the Central Management Server (CMS), the Event Server,
and associated services.
Storage Tier: It's responsible for handling files, such as documents and reports. It
also handles report caching to save system resources when user access reports.
Processing Tier: It analyzes data, and produces reports and other output types. It's
the only tier that accesses the databases that contain report data.
Data Tier: It consists of the database servers hosting the CMS system databases
and Auditing Data Store.

The SAP BI Platform consists of a collection of servers running on one or more hosts. It's
essential that you choose the correct deployment strategy based on the sizing, business
need, and type of environment. For a small installation like development or test, you can
use a single Azure virtual machine for the web application server, database server, and all
BI platform servers. If you're using a Database-as-a-Service (DBaaS) offering from
Azure, the database server runs separately from the other components. For medium and
large installations, you can have servers running on multiple Azure virtual machines.
The diagram below illustrates the architecture of a large-scale deployment of the SAP
BOBI Platform on Azure virtual machines, with each component distributed. To ensure
infrastructure resilience against service disruption, VMs can be deployed using either
flexible scale set, availability sets or availability zones.
Architecture details

Load balancer

In SAP BOBI multi-instance deployment, Web application servers (or web tier) are
running on two or more hosts. To distribute user load evenly across web servers,
you can use a load balancer between end users and web servers. In Azure, you can
either use Azure Load Balancer or Azure Application Gateway to manage traffic to
your web servers.

Web application servers

The web server hosts the web applications of the SAP BOBI platform, like CMC and BI
Launch Pad. To achieve high availability for the web server, you must deploy at least
two web application servers to manage redundancy and load balancing. In Azure,
these web application servers can be placed either in a flexible scale set, availability
zones, or availability sets for better availability.

Tomcat is the default web application server for the SAP BI Platform. To achieve high
availability for Tomcat, enable session replication by using the Static Membership
Interceptor in Azure. This ensures that users can access the SAP BI web application even
when a Tomcat service is disrupted.

) Important

By default Tomcat uses multicast IP and Port for clustering which is not
supported on Azure (SAP Note 2764907 ).

BI platform servers

BI Platform servers include all the services that are part of the SAP BOBI application
(management tier, processing tier, and storage tier). When a web server receives a
request, it detects each BI platform server (specifically, all CMS servers in a cluster)
and automatically load balances their requests. If one of the BI platform hosts fails,
the web server automatically sends requests to another host.

To achieve high availability or redundancy for BI Platform, you must deploy the
application in at least two Azure virtual machines. Based on the sizing, you can
scale your BI Platform to run on more Azure virtual machines.

File repository server (FRS)

File Repository Server contains all reports and other BI documents that have been
created. In multi-instance deployment, BI Platform servers are running on multiple
virtual machines and each VM should have access to these reports and other BI
documents. So, a filesystem needs to be shared across all BI platform servers.

In Azure, you can either use Azure Premium Files or Azure NetApp Files for File
Repository Server. Both of these Azure services have built-in redundancy.

CMS & audit database


SAP BOBI Platform requires a database to store its system data, which is referred to as
the CMS database. It's used to store BI platform information such as user, server,
folder, document, configuration, and authentication details.

Azure offers Azure Database for MySQL and Azure SQL Database as Database-as-a-Service
(DBaaS) offerings that can be used for the CMS database and audit database. Because
these are PaaS offerings, customers don't have to worry about operation, availability,
and maintenance of the databases. Customers can also choose their own database
for the CMS and audit repository based on their business need.

Support matrix
This section describes the supportability of different SAP BOBI components, like the SAP
BusinessObjects BI Platform version, operating systems, and databases, in Azure.

SAP BusinessObjects BI platform


Azure Infrastructure as a Service (IaaS) enables you to deploy and configure SAP
BusinessObjects BI Platform on Azure compute. It supports the following versions of the
SAP BOBI platform:

SAP BusinessObjects BI Platform 4.3
SAP BusinessObjects BI Platform 4.2 SP04+
SAP BusinessObjects BI Platform 4.1 SP05+

The SAP BI Platform runs on different operating systems and databases. The supported
combinations of operating system and database versions for the SAP BOBI platform can
be found in the Product Availability Matrix for SAP BOBI.

Operating system
Azure supports the following operating systems for SAP BusinessObjects BI Platform
deployment.

Microsoft Windows Server


SUSE Linux Enterprise Server (SLES)
Red Hat Enterprise Linux (RHEL)
Oracle Linux (OL)

The operating system versions that are listed in the Product Availability Matrix (PAM) for
SAP BusinessObjects BI Platform are supported as long as they're compatible to run on
Azure infrastructure.
Databases
The BI Platform needs a database for the CMS and auditing data store, which can be
installed on any supported database that is listed in the SAP Product Availability Matrix,
including the following:

Microsoft SQL Server

Azure SQL Database (Supported database only for SAP BOBI Platform on
Windows)

It's a fully managed SQL Server database engine, based on the latest stable
Enterprise Edition of SQL Server. Azure SQL database handles most of the database
management functions such as upgrading, patching, and monitoring without user
involvement. With Azure SQL Database, you can create a highly available and high-
performance data storage layer for the applications and solutions in Azure. For
more details, check Azure SQL Database documentation.

Azure Database for MySQL (Follow the same compatibility guidelines as mentioned
for MySQL AB in SAP PAM)

It's a relational database service powered by the MySQL community edition. Being
a fully managed Database-as-a-Service (DBaaS) offering, it can handle mission-
critical workloads with predictable performance and dynamic scalability. It has
built-in high availability, automatic backups, software patching, automatic failure
detection, and point-in-time restore for up to 35 days, which substantially reduce
operation tasks. For more details, check Azure Database for MySQL
documentation.

SAP HANA

SAP ASE

IBM DB2

Oracle (For version and restriction, check SAP Note 2039619 )

MaxDB

This document illustrates the guidelines to deploy SAP BOBI Platform on Windows with
Azure SQL Database and SAP BOBI Platform on Linux with Azure Database for MySQL.
It's also our recommended approach for running SAP BusinessObjects BI Platform on
Azure.
Sizing
Sizing is the process of determining the hardware requirements to run the application
efficiently. For the SAP BOBI platform, sizing needs to be done by using the SAP sizing
tool called Quick Sizer. The tool provides a SAPS value based on your input, which then
needs to be mapped to certified Azure virtual machine types for SAP. SAP Note 1928533
provides the list of supported SAP products and Azure VM types along with SAPS. For
more information on sizing, check the SAP BI Sizing Guide.

For the storage needs of the SAP BOBI platform, Azure offers different types of managed
disks. For the SAP BOBI installation directory, it's recommended to use Premium managed
disks; for a database that runs on virtual machines, follow the guidance provided
in DBMS deployment for SAP workload.

Azure supports two DBaaS offerings for the SAP BOBI platform data tier: Azure SQL
Database (BI application running on Windows) and Azure Database for MySQL (BI
application running on Linux and Windows). So based on the sizing result, you can
choose the purchasing model that best fits your need.

 Tip

For quick sizing reference, consider 800 SAPS = 1 vCPU while mapping the SAPS
result of SAP BOBI Platform database tier to Azure Database-as-a-Service (Azure
SQL Database or Azure Database for MySQL).
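For example, under this rule of thumb, a Quick Sizer result of 8,000 SAPS for the
database tier maps to roughly 10 vCores (8,000 / 800), which you would then match to
the closest available compute size in the chosen purchasing model.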

Sizing models for Azure SQL database


Azure SQL Database offers the following three purchasing models:

vCore-based

It lets you choose the number of vCores, the amount of memory, and the amount and
speed of storage. The vCore-based purchasing model also allows you to use Azure
Hybrid Benefit for SQL Server to gain cost savings. This model is suited for
customers who value flexibility, control, and transparency.

The vCore model offers three service tier options: General Purpose, Business Critical,
and Hyperscale. The service tier defines the storage architecture, space, I/O limits,
and business continuity options related to availability and disaster recovery.
Following are high-level details on each service tier option:
1. General Purpose service tier is best suited for Business workloads. It offers
budget-oriented, balanced, and scalable compute and storage options. For
more information, refer Resource options and limits.
2. Business Critical service tier offers business applications the highest resilience
to failures by using several isolated replicas, and provides the highest I/O
performance per database replica. For more information, refer Resource
options and limits.
3. Hyperscale service tier is best for business workloads with highly scalable
storage and read-scale requirements. It offers higher resilience to failures by
allowing configuration of more than one isolated database replica. For more
information, refer Resource options and limits.

DTU-based

The DTU-based purchasing model offers a blend of compute, memory, and I/O
resources in three service tiers, to support light and heavy database workloads.
Compute sizes within each tier provide a different mix of these resources, to which
you can add additional storage resources. It's best suited for customers who want
simple, preconfigured resource options.

Service tiers in the DTU-based purchasing model are differentiated by a range of
compute sizes with a fixed amount of included storage, a fixed retention period for
backups, and a fixed price.

Serverless

The serverless model automatically scales compute based on workload demand,
and bills for the amount of compute used per second. The serverless compute tier
automatically pauses databases during inactive periods, when only storage is billed,
and automatically resumes databases when activity returns. For more information,
refer to Resource options and limits.

It's more suitable for intermittent, unpredictable usage with low average compute
utilization over time. So this model can be used for nonproduction SAP BOBI
deployment.

7 Note

For SAP BOBI, it's convenient to use the vCore-based model and choose either the
General Purpose or Business Critical service tier based on the business need.

Sizing models for Azure database for MySQL


Azure Database for MySQL comes with three different pricing tiers. They're
differentiated by the amount of compute in vCores, the memory per vCore, and the
storage technology used to store the data. Following are high-level details on the options;
for more details on the different attributes, refer to Pricing tiers for Azure Database for MySQL.

Basic

It's used for the target workloads that require light compute and I/O performance.

General Purpose

It's suited for most business workloads that require balanced compute and
memory with scalable I/O throughput.

Memory Optimized

For high-performance database workloads that require in-memory performance
for faster transaction processing and higher concurrency.

7 Note

For SAP BOBI, it is convenient to use the General Purpose or Memory Optimized
pricing tier based on the business workload.

Azure resources

Choosing regions
An Azure region is one or a collection of datacenters that contains the infrastructure to
run and host different Azure services. This infrastructure includes a large number of
nodes that function as compute or storage nodes, or run network functionality. Not all
regions offer the same services.

SAP BI Platform contains different components that might require specific VM types,
storage like Azure Files or Azure NetApp Files, or Database-as-a-Service (DBaaS) for its
data tier, which might not be available in certain regions. You can find the exact
information on VM types, Azure storage types, and other Azure services on the Products
available by region site. If you're already running your SAP systems on Azure, you
probably have your region identified. In that case, you first need to verify that the
necessary services are available in those regions to decide the architecture of the SAP BI
Platform.
Virtual machine scale sets with flexible orchestration
Virtual machine scale sets with flexible orchestration provide a logical grouping of
platform-managed virtual machines. You have the option to create a scale set within a
region or span it across availability zones. When you create a flexible scale set within a
region with platformFaultDomainCount > 1 (FD>1), the VMs deployed in the scale set are
distributed across the specified number of fault domains in the same region. Creating a
flexible scale set across availability zones with platformFaultDomainCount = 1 (FD=1)
distributes the VMs across the specified zones, and the scale set also distributes VMs
across different fault domains within each zone on a best-effort basis.

For SAP workloads, only flexible scale sets with FD=1 are supported. The advantage of
using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional
availability zone deployment, is that the VMs deployed with the scale set are
distributed across different fault domains within the zone in a best-effort manner. To
learn more about SAP workload deployment with scale sets, see the flexible virtual
machine scale set deployment guide.

Availability zones
Availability Zones are physically separate locations within an Azure region. Each
Availability Zone is made of one or more datacenters equipped with independent
power, cooling, and networking.

To achieve high availability on each tier of the SAP BI Platform, you can distribute VMs
across availability zones by implementing a high availability framework, which can provide
the best SLA in Azure. For virtual machine SLAs in Azure, check the latest version of
Virtual Machine SLAs.

For the data tier, the Azure Database-as-a-Service (DBaaS) offerings provide a high
availability framework by default. You just need to select the region, and the service's
inherent high availability, redundancy, and resiliency capabilities mitigate database
downtime from planned and unplanned outages, without requiring you to configure any
additional components. For more details on the SLAs for the supported DBaaS offerings
on Azure, check High availability in Azure Database for MySQL and High availability for
Azure SQL Database.

Availability sets
An availability set is a logical grouping capability for isolating virtual machine (VM)
resources from each other when they're deployed. Azure makes sure that the VMs you
place within an availability set run across multiple physical servers, compute racks,
storage units, and network switches. If a hardware or software failure happens, only a
subset of your VMs is affected and your overall solution stays operational. When virtual
machines are placed in availability sets, the Azure Fabric Controller distributes the VMs
over different fault and upgrade domains to prevent all VMs from being inaccessible
because of infrastructure maintenance or a failure within one fault domain.

SAP BI Platform contains many different components, and while designing the
architecture you have to make sure that each of these components is resilient to any
disruption. This can be achieved by placing the Azure virtual machines of each component
within availability sets. Keep in mind that when you mix VMs of different VM families
within one availability set, you may come across problems that prevent you from including
a certain VM type in that availability set. So use separate availability sets for the web
application and BI application tiers of the SAP BI Platform, as highlighted in the
Architecture overview.

Also, the number of update and fault domains that can be used by an Azure availability
set within an Azure scale unit is finite. So if you keep adding VMs to a single availability
set, two or more VMs will eventually end up in the same fault or update domain. For more
information, see the Azure Availability Sets section of the Azure virtual machines
planning and implementation for SAP document.

To understand the concept of Azure availability sets and the way availability sets relate
to Fault and Upgrade Domains, read manage availability article.

) Important

The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, it is advised to use a flexible
scale set with FD=1 over a standard availability zone deployment.

Virtual machines
Azure Virtual Machine is a service offering that enables you to deploy custom images to
Azure as Infrastructure-as-a-Service (IaaS) instances. It simplifies maintaining and
operating applications by providing on-demand compute and storage to host, scale,
and manage web application and connected applications.

Azure offers a variety of virtual machines for all your application needs. But for SAP
workloads, Azure has narrowed the selection to different VM families that are suitable for
SAP workloads, and SAP HANA workloads more specifically. For more insight, check What
SAP software is supported for Azure deployments.

Based on the SAP BI Platform sizing, you need to map your requirement to Azure virtual
machine types that are supported in Azure for SAP products. SAP Note 1928533 is a good
starting point that lists the supported Azure VM types for SAP products on Windows and
Linux. Also keep in mind that, beyond the selection of purely supported VM types, you
need to check whether those VM types are available in your specific region. You can check
the availability of VM types on the Products available by region page. For choosing the
pricing model, you can refer to Azure virtual machines for SAP workload.

Storage
Azure Storage is an Azure-managed cloud service that provides storage that is highly
available, secure, durable, scalable, and redundant. Some of the storage types have
limited use for SAP scenarios, but several Azure storage types are well suited or
optimized for specific SAP workload scenarios. For more information, refer to the Azure
Storage types for SAP workload guide, which highlights the different storage options that
are suited for SAP.

Azure Storage has different storage types available for customers, and the details can be
found in the article What disk types are available in Azure?. The SAP BOBI platform uses
the following Azure storage types to build the application:

Azure-managed disks

It's a block-level storage volume that is managed by Azure. You can use the disks
for SAP BOBI Platform application servers and databases, when installed on Azure
virtual machines. There are different types of Azure Managed Disks available, but
it's recommended to use Premium SSDs for SAP BOBI Platform application and
database.

In the example below, Premium SSDs are used for the BOBI platform installation directory.
For a database installed on a virtual machine, you can use managed disks for the data and
log volumes as per the guidelines. The CMS and audit databases are typically small, and
they don't have the same storage performance requirements as other SAP
OLTP/OLAP databases.

Azure Premium Files or Azure NetApp Files

In the SAP BOBI platform, File Repository Server (FRS) refers to the disk directories
where contents like reports, universes, and connections are stored, which are used
by all application servers of that system. Azure Premium Files or Azure NetApp
Files storage can be used as a shared file system for the SAP BOBI application's FRS. As
these storage offerings aren't available in all regions, refer to the Products available by
region site to find up-to-date information.

If the service is unavailable in your region, you can create an NFS server from which
you can share the file system to the SAP BOBI application. But you'll also need to
consider its high availability.

Networking
SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data.
The system connects to other database servers, from which it fetches all the data and
provides insights to users. Azure provides a network infrastructure that allows the
mapping of all scenarios that can be realized with SAP BI Platform, like connecting to
on-premises systems, systems in different virtual networks, and others. For more
information, check Microsoft Azure networking for SAP workloads.
For the Database-as-a-Service offerings, any newly created database (Azure SQL Database
or Azure Database for MySQL) has a firewall that blocks all external connections. To allow
access to the DBaaS service from the BI platform virtual machines, you need to specify one
or more server-level firewall rules to enable access to your DBaaS server. For more
information, see Firewall rules for Azure Database for MySQL and the Network access
controls section for Azure SQL Database.
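As an illustration, the following sketch creates a server-level firewall rule on an Azure SQL Database logical server for the public IP range used by your BI application servers; all names and addresses are placeholders:

PowerShell

# Allow the BI application servers' outbound public IP range to reach the logical server.
New-AzSqlServerFirewallRule -ResourceGroupName "<rg name>" -ServerName "<sql server name>" `
    -FirewallRuleName "AllowSapBobiAppServers" `
    -StartIpAddress "<start public IP>" -EndIpAddress "<end public IP>"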

Next steps
SAP BusinessObjects BI Platform Deployment on Linux
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform
deployment guide for Windows on
Azure
Article • 06/16/2023

This article describes the strategy to deploy the SAP BusinessObjects Business
Intelligence (SAP BOBI) platform on Azure for Windows. In this example, two virtual
machines (VMs) with Azure Premium SSD managed disks as their installation directory
are configured. Azure SQL Database, a platform as a service (PaaS) offering, is used for
the central management server (CMS) and audit databases. Azure Premium Files, an
SMB protocol, is used as a file store that's shared across both VMs. The default Tomcat
Java web application and business intelligence (BI) platform application are installed
together on both VMs. To load balance the user requests, Azure Application Gateway is
used, which has native TLS/SSL offloading capabilities.

This type of architecture is effective for small deployments or nonproduction
environments. For production or large-scale deployments, you should use separate
hosts for the web applications, and you can have multiple SAP BOBI application hosts,
which allows the server to process more information.
In this example, the following product versions and file system layout are used:

SAP BusinessObjects platform 4.3 SP01 Patch 1
Windows Server 2019
SQL Database (Version: 12.0.2000.8)
Microsoft ODBC driver - msodbcsql.msi (Version: 13.1)

| File system | Description | Size (GB) | Required access | Storage |
| --- | --- | --- | --- | --- |
| F: | The file system for installation of an SAP BOBI instance, the default Tomcat web application, and database drivers (if necessary). | SAP sizing guidelines | Local administrative privileges | Azure Premium SSD managed disks |
| \\azusbobi.file.core.windows.net\frsinput | The mount directory for the shared files across all SAP BOBI hosts, used as the Input Filestore directory. | Business need | Local administrative privileges | Azure Premium Files |
| \\azusbobi.file.core.windows.net\frsoutput | The mount directory for the shared files across all SAP BOBI hosts, used as the Output Filestore directory. | Business need | Local administrative privileges | Azure Premium Files |

Deploy a Windows virtual machine via the Azure portal
In this section, we'll create two VMs with a Windows operating system (OS) image for
the SAP BOBI platform. The high-level steps to create VMs are as follows:
1. Create a resource group.

2. Create a virtual network:

Don't use a single subnet for all Azure services in an SAP BI platform
deployment. Based on SAP BI platform architecture, you might need to create
multiple subnets. In this deployment, we'll create two subnets: a BI
application subnet and an Application Gateway subnet.
Follow SAP Note 2276646 to identify ports for SAP BOBI platform
communication across different components.
SQL Database communicates over port 1433. Outbound traffic over port 1433
should be allowed from your SAP BOBI application servers.
In Azure, Application Gateway must be on a separate subnet. For more
information, see Application Gateway configuration overview.
If you're using Azure NetApp Files for a file store instead of Azure Files, create
a separate subnet for Azure NetApp Files. For more information, see
Guidelines for Azure NetApp Files network planning.

3. Select the suitable availability options for your preferred system configuration
within an Azure region: spanning across zones, residing within a single zone, or
operating in a region without zones.

4. Create virtual machine 1 (azuswinboap1):

You can either use a custom image or choose an image from Azure
Marketplace. Based on your need, see Deploy a VM from Azure Marketplace
for SAP or Deploy a VM with a custom image for SAP.

5. Create virtual machine 2 (azuswinboap2).

6. Add one Premium SSD disk. It will be used as an SAP BOBI installation directory.
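
If you prefer to script these steps, the following Azure CLI sketch creates one of the BI
VMs with an attached Premium SSD data disk. The resource group, network names, VM
size, and zone are illustrative assumptions; repeat the command with adjusted values
for azuswinboap2.

Bash

# Illustrative values; choose the VM size, image, and zone per your sizing
# and availability strategy.
az vm create \
  --resource-group bobi-rg \
  --name azuswinboap1 \
  --image Win2019Datacenter \
  --size Standard_D4s_v5 \
  --vnet-name bobi-vnet \
  --subnet bi-app-subnet \
  --zone 1 \
  --admin-username boadmin \
  --data-disk-sizes-gb 128 \
  --storage-sku Premium_LRS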

Provision Azure Premium Files


Before you continue with the setup for Azure Files, familiarize yourself with the Azure
Files documentation.

Azure Files offers standard file shares hosted on HDD-based hardware and premium file
shares hosted on SSD-based hardware. For an SAP BusinessObjects file store, use Azure
Premium Files.

Azure premium file shares are available with local and zone redundancy in a subset of
regions. To find out if premium file shares are currently available in your region, see
Products available by region . For information about regions that support zone-
redundant storage (ZRS), see Azure Storage redundancy.

Deploy an Azure Files storage account and file shares


Azure file shares are deployed into storage accounts, which are top-level objects that
represent a shared pool of storage. This pool of storage can be used to deploy multiple
file shares. Azure supports multiple types of storage accounts for different storage
scenarios customers might have. For SAP BusinessObjects file storage, you need to
create a FileStorage account. You use it to deploy Azure file shares on Premium SSD-
based hardware.

Note

FileStorage accounts can only be used to store Azure file shares. No other storage
resources, such as blobs, containers, queues, or tables, can be deployed in a
FileStorage account.

The storage account will be accessed via a private endpoint and deployed in the same
virtual network as the SAP BOBI platform. With this setup, the traffic from your SAP
system never leaves the virtual network security boundaries. SAP systems often contain
sensitive and business-critical data, so staying within the boundaries of the virtual
network is an important security consideration for many customers.

If you need to access the storage account from a different virtual network, you can use
Azure Virtual Network peering.

Azure files storage account

1. To create a storage account via the Azure portal, select Create a resource >
Storage > Storage account.

2. On the Basics tab, complete all required fields to create a storage account:

a. Select Subscription > Resource group > Region.

b. Enter the Storage account name. For example, enter azusbobi. This name must
be globally unique, but otherwise you can provide any name you want.

c. Select Premium as the performance tier, and select FileStorage as the account
kind.
d. For Replication label, choose a redundancy level. Select Locally redundant
storage (LRS).

For Premium FileStorage, ZRS and LRS are the only options available. Based on
your VM deployment strategy (flexible scale set, availability zone or availability
set), choose the appropriate redundancy level. For more information, see Azure
Storage redundancy.

e. Select Next.

3. On the Networking tab, select private endpoint as the connectivity method. For
more information, see Azure Files networking considerations.

a. Select Add in the private endpoint section.

b. Select Subscription > Resource group > Location.

c. Enter the Name of the private endpoint. For example, enter azusbobi-pe.

d. Select file in storage sub-resource.

e. In the Networking section, select the Virtual network and Subnet on which the
SAP BusinessObjects BI application is deployed.

f. Accept the default (yes) for Integrate with private DNS zone.

g. Select your private DNS zone from the dropdown list.

h. Select OK to go back to the Networking tab in Create storage account.

4. On the Data protection tab, configure the soft-delete policy for Azure file shares in
your storage account. By default, soft-delete functionality is turned off. To learn
more about soft delete, see Prevent accidental deletion of Azure file shares.

5. On the Advanced tab, select different security options.

The Secure transfer required field indicates whether the storage account requires
encryption in transit for communication to the storage account. If you require SMB
2.1 support, you must disable this field. For the SAP BOBI platform, keep it default
(enabled).

6. Continue and create the storage account.

For details on how to create a storage account, see Create a FileStorage storage
account.
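
For a scripted alternative, a minimal Azure CLI sketch for the FileStorage account might
look like the following; the resource group and region are assumptions, and the
account name must be globally unique. The private endpoint and DNS integration
would still be configured separately, as described above.

Bash

# Premium file shares require the FileStorage account kind with a premium SKU.
az storage account create \
  --name azusbobi \
  --resource-group bobi-rg \
  --location eastus2 \
  --sku Premium_LRS \
  --kind FileStorage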
Create Azure file shares
The next step is to create Azure file shares in the storage account. Azure Files uses a
provisioned model for premium file shares: you proactively specify your storage
requirements to Azure Files, rather than being billed based on what you use. To
understand more about this model, see Provisioned model. In this example, we create
two file shares, frsinput (256 GB) and frsoutput (256 GB), for the SAP BOBI file store.

1. Go to the storage account azusbobi > File shares.


2. Select New file share.
3. Enter the Name of the file share. For example, enter frsinput or frsoutput.
4. Insert the required file share size in Provisioned capacity. For example, enter 256
GB.
5. Choose SMB as Protocol.
6. Select Create.
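
Alternatively, both premium SMB shares can be created with the Azure CLI; the
resource group below is an assumption.

Bash

# Creates the two premium SMB shares with a 256-GiB provisioned quota each.
az storage share-rm create --resource-group bobi-rg --storage-account azusbobi --name frsinput --quota 256 --enabled-protocols SMB
az storage share-rm create --resource-group bobi-rg --storage-account azusbobi --name frsoutput --quota 256 --enabled-protocols SMB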

Configure a data disk on a Windows virtual machine
The steps in this section use the following prefix:

[A]: The step applies to all hosts.

Initialize a new data disk


The SAP BusinessObjects BI application requires a partition on which its binaries can be
installed. You can install an SAP BOBI application on the OS partition (C:), but you must
make sure to have enough space for the deployment and the OS. We recommend that
you have at least 2 GB available for temporary files and web applications. It's also
advisable to keep the SAP BOBI installation binaries on a separate partition.

In this example, an SAP BOBI application is installed on a separate partition (F:). Initialize
the Premium SSD disk that you attached during the VM provisioning:

1. [A] If no data disk is attached to the VM (azuswinboap1 and azuswinboap2), follow
the steps in Add a data disk to attach a new managed data disk.
2. [A] After the managed disk is attached to the VM, initialize the disk by following
the steps in Initialize a new data disk.

Mount Azure Premium Files


To use Azure Files as a file store, you must mount it, which means you assign it a drive
letter or mount point path.

[A] To mount the Azure file share, follow the steps in Mount the Azure file share.

To mount an Azure file share on a Windows server, the SMB protocol requires TCP port
445 to be open. Connections will fail if port 445 is blocked. You can check if your firewall
or ISP is blocking port 445 by using the Test-NetConnection cmdlet. See Port 445 is
blocked.

Configure a CMS database: Azure SQL


This section provides details on how to provision Azure SQL by using the Azure portal. It
also provides instructions on how to create the CMS and the audit database for an SAP
BOBI platform and a user account to access the databases.

The guidelines are applicable only if you're using SQL Database. For other databases,
see SAP or database-specific documentation for instructions.

Create a SQL Database server


SQL Database offers different deployment options: single database, elastic pool, and
database server. For an SAP BOBI platform, we need two databases, CMS and audit.
Instead of creating two single databases, you can create a SQL Database server that can
manage the group of single databases and elastic pools. Follow these steps to create a
SQL Database server:

1. Browse to the Select SQL deployment option page.

2. Under SQL databases, change Resource type to Database server. Select Create.

3. On the Basics tab, fill in all the required fields to Create SQL Database Server:

a. Under Project details, select the Subscription and Resource group.

b. Enter a Server name. For example, enter azussqlbodb. The server name must be
globally unique, but otherwise, you can provide any name you want.

c. Select the Location.

d. Enter the Server admin login. For example, enter boadmin. Then enter a
Password.
4. On the Networking tab, change Allow Azure services and resources to access this
server to No under Firewall rules.

5. On Additional settings, keep the default settings.

6. Continue and create SQL Database Server.
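
As a scripted alternative, a minimal Azure CLI sketch for the server might look like the
following; the resource group, region, and password are placeholders.

Bash

# The server name must be globally unique; store the password securely
# (for example, in Azure Key Vault).
az sql server create \
  --name azussqlbodb \
  --resource-group bobi-rg \
  --location eastus2 \
  --admin-user boadmin \
  --admin-password '<strong-password>'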

In the next step, create the CMS and the audit databases in the SQL Database server
(azussqlbodb.database.windows.net).

Create the CMS and the audit database


After you provision the SQL Database server, browse to the resource azussqlbodb. Then
follow these steps to create CMS and audit databases.

1. On the azussqlbodb overview page, select Create database.

2. On the Basics tab, fill in all the required fields:

a. Enter the Database name. For example, enter bocms or boaudit.

b. On the Compute + storage option, select Configure database. Choose the
appropriate model based on your sizing result. For insight on the options, see
Sizing models for Azure SQL Database.

3. On the Networking tab, select private endpoint for the connectivity method. The
private endpoint will be used to access SQL Database within the configured virtual
network.

a. Select Add private endpoint.

b. Select Subscription > Resource group > Location.

c. Enter the Name of the private endpoint. For example, enter azusbodb-pe.

d. In Target sub-resource, select SqlServer.

e. In the Networking section, select the Virtual network and Subnet on which the
SAP BusinessObjects BI application is deployed.

f. Accept default (yes) for Integrate with private DNS zone.

g. Select your private DNS zone from the dropdown list.

h. Select OK to go back to the Networking tab in Create SQL database.


4. On the Additional settings tab, change the Collation setting to
SQL_Latin1_General_CP850_BIN2.

5. Continue and create the CMS database.

Similarly, you can create the audit database. For example, enter boaudit.
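
The same databases can be created with the Azure CLI; the service objective below is
only an example, so choose the compute model from your sizing result.

Bash

# The CMS repository requires the SQL_Latin1_General_CP850_BIN2 collation.
az sql db create \
  --resource-group bobi-rg \
  --server azussqlbodb \
  --name bocms \
  --collation SQL_Latin1_General_CP850_BIN2 \
  --service-objective S3

Repeat the command with --name boaudit for the audit database.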

Download and install an ODBC driver


SAP BOBI application servers require a database client/driver to access the CMS or
audit database. A Microsoft ODBC driver is used to access CMS and audit databases
running on SQL Database. This section provides instructions on how to download and
set up an ODBC driver on Windows.

1. See the CMS + Audit repository support by OS section in the Product Availability
Matrix (PAM) for SAP BusinessObjects BI platform to find out the database
connectors that are compatible with SQL Database.
2. Download the ODBC driver version from the link. In this example, we're
downloading ODBC driver 13.1.
3. Install the ODBC driver on all BI servers (azuswinboap1 and azuswinboap2).
4. After you install the driver in azuswinboap1, go to Start > Windows
Administrative Tools > ODBC Data Sources (64-bit).
5. Go to the System DSN tab.
6. Select Add to create a connection to the CMS database.
7. Select ODBC Driver 13 for SQL Server, and select Finish.
8. Enter the information of your CMS database like the following, and select Next:

Name: The name of the database created in the section "Create the CMS and
the audit database." For example, enter bocms or boaudit.
Description: A description that describes the data source. For example, enter
CMS database or Audit database.
Server: The name of the server created in the section "Create a SQL Database
server." For example, enter azussqlbodb.database.windows.net.

9. Select With SQL Server authentication using a login ID and password entered by
user to authenticate to the Azure SQL server. Enter the user credential that was
created at the time of the SQL Database server creation. For example, enter
boadmin. Select Next.
10. Change the default database to bocms, and keep everything else as default. Select
Next.
11. Select the Use strong encryption for data checkbox, and keep everything else as
default. Select Finish.
12. The data source to the CMS database has been created. Now you can select Test
Data Source to validate the connection to the CMS database from the BI
application. It should complete successfully. If it fails, troubleshoot the connectivity
issue.

Note

SQL Database communicates over port 1433. Outbound traffic over port 1433
should be allowed from your SAP BOBI application servers.

Repeat the preceding steps to create a connection for the audit database on the server
azuswinboap1. Similarly, install and configure both ODBC data sources (bocms and
boaudit) on the remaining BI application servers (azuswinboap2).
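
Independently of the DSN test, you can verify connectivity and credentials from a BI
server by using sqlcmd, assuming the SQL command-line tools are installed. The server,
database, and login match the examples in this article; the password is a placeholder.

Bash

sqlcmd -S azussqlbodb.database.windows.net -d bocms -U boadmin -P '<password>' -Q "select @@version;"

If this query fails, check the private endpoint DNS resolution and that outbound port
1433 is open.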

Server preparation
Follow the latest guide by SAP to prepare servers for the installation of the BI platform.
For the most up-to-date information, see the "Preparation" section in the SAP Business
Intelligence Platform Installation Guide for Windows .

Installation
To install the BI platform on a Windows host, sign in with a user that has local
administrative privileges.

Go to the media of the SAP BusinessObjects BI platform, and run setup.exe .

Follow the instructions in the SAP Business Intelligence Platform Installation Guide for
Windows that are specific to your version. Here are a few points to note while you
install the SAP BOBI platform on Windows:

On the Configure Destination Folder screen, provide the destination folder where
you want to install the BI platform. For example, enter F:\SAP BusinessObjects.

On the Configure Product Registration screen, you can either use a temporary
license key for SAP BusinessObjects Solutions from SAP Note 1288121 or
generate a license key in SAP Service Marketplace.

On the Select Install Type screen, select Full installation on the first server
(azuswinboap1). For the other server (azuswinboap2), select Custom / Expand,
which expands the existing SAP BOBI setup.
On the Select Default or Existing Database screen, select configure an existing
database, which prompts you to select the CMS and the audit database. Select
Microsoft SQL Server using ODBC for the CMS Database type and the Audit
Database type.

You can also select No auditing database if you don't want to configure auditing
during installation.

Select the appropriate options on the Select Java Web Application Server screen
based on your SAP BOBI architecture. In this example, we've selected option 1,
which installs a Tomcat server on the same SAP BOBI platform.

Enter CMS database information in Configure CMS Repository Database - SQL
Server (ODBC). The following image shows example input for CMS database
information for a Windows installation.

(Optional) Enter audit database information in Configure Auditing Database - SQL
Server (ODBC). The following image shows example input for audit database
information for a Windows installation.
Follow the instructions, and enter the required inputs to finish the installation.

For a multi-instance deployment, run the installation setup on the second host
(azuswinboap2). In the Select Install Type screen, select Custom / Expand, which
expands the existing SAP BOBI setup. For more information, see the SAP blog SAP
BusinessObjects Business Intelligence platform setup with Azure SQL Database .

Important

The database engine version numbers for SQL Server and SQL Database aren't
comparable with each other. They're internal build numbers for these separate
products. The database engine for SQL Database is based on the same code base
as the SQL Server database engine. Most importantly, the database engine in SQL
Database always has the newest SQL Database engine bits. Version 12 of SQL
Database is newer than version 15 of SQL Server.

To find out the current SQL Database version, you can either check in the settings of the
Central Management Console (CMC) or run the following query by using sqlcmd or SQL
Server Management Studio. The alignment of SQL versions to default compatibility can
be found in the database compatibility level article.
SQL

1> select @@version as version;
2> go
version
------------------------------------------------------------------------------
Microsoft SQL Azure (RTM) - 12.0.2000.8
   Feb 20 2021 17:51:58
   Copyright (C) 2019 Microsoft Corporation

(1 rows affected)

1> select name, compatibility_level from sys.databases;
2> go
name                                     compatibility_level
---------------------------------------- -------------------
master                                   150
bocms                                    150
boaudit                                  150

(3 rows affected)

Post installation
After a multi-instance installation of the SAP BOBI platform, more post-configuration
steps need to be performed to support application high availability.

Configure a cluster name


In a multi-instance deployment of the SAP BOBI platform, you run several CMS servers
together in a cluster. A cluster consists of two or more CMS servers working together
against a common CMS system database. If a node running a CMS fails, another CMS
node continues to service BI platform requests. By default in an SAP BOBI platform, a
cluster name reflects the hostname of the first CMS that you install.

To configure the cluster name on Windows, follow the instructions in the SAP Business
Intelligence Platform Administrator Guide . After you configure the cluster name,
follow SAP Note 1660440 to set the default system entry on the CMC or BI Launchpad
sign-in page.

Configure the input and output filestore location to Azure Premium Files
Filestore refers to the disk directories where the actual SAP BusinessObjects BI files are
located. The default location of the file repository server for the SAP BOBI platform is
located in the local installation directory. In a multi-instance deployment, it's important
to set up a filestore on shared storage like Azure Premium Files or Azure NetApp Files so
that it can be accessed from all storage tier servers.

1. If not created, follow the instructions provided in the preceding section, "Provision
Azure Premium Files," to create and mount Azure Premium Files.

Tip

Based on whether the virtual machines are deployed zonally or regionally, choose
the storage redundancy for Azure Premium Files (ZRS or LRS) accordingly.

2. Follow SAP Note 2512660 to change the path of the file repository (Input and
Output).

Tomcat clustering: Session replication


Tomcat supports clustering of two or more application servers for session replication
and failover. If SAP BOBI platform sessions are serialized, a user session can fail over
seamlessly to another instance of Tomcat, even when an application server fails. For
example, a user might be connected to a web server that fails while the user is
navigating a folder hierarchy in an SAP BI application. On a correctly configured cluster,
the user can continue navigating the folder hierarchy without being redirected to a sign-
in page.

SAP Note 2808640 provides steps to configure Tomcat clustering by using multicast,
but multicast isn't supported in Azure. To make a Tomcat cluster work in Azure, you
must use StaticMembershipInterceptor (SAP Note 2764907 ). To set up a Tomcat
cluster in Azure, see Tomcat clustering using static membership for the SAP
BusinessObjects BI platform on the SAP blog.

Load balance a web tier of an SAP BI platform


In an SAP BOBI multi-instance deployment, Java web application servers (web tier) run
on two or more hosts. To distribute the user load evenly across web servers, you can
use a load balancer between end users and web servers. You can use Azure Load
Balancer or Application Gateway to manage traffic to your web application servers. The
offerings are explained in the following sections:

Load Balancer is a high-performance, low-latency, layer 4 (TCP, UDP) load balancer
that distributes traffic among healthy VMs. A load balancer health probe monitors
a given port on each VM and only distributes traffic to operational VMs. You can
choose either a public load balancer or an internal load balancer depending on
whether you want the SAP BI platform accessible from the internet or not. It's zone
redundant, which ensures high availability across availability zones.

In the following figure, see the "Internal Load Balancer" section where the web
application server runs on port 8080 (default Tomcat HTTP port), which will be
monitored by a health probe. Any incoming request that comes from users will get
redirected to the web application servers (azuswinboap1 or azuswinboap2) in the
back-end pool. Load Balancer doesn't support TLS/SSL termination, which is also
known as TLS/SSL offloading. If you're using Load Balancer to distribute traffic
across web servers, we recommend using Standard Load Balancer.

Note

When VMs without public IP addresses are placed in the back-end pool of
internal (no public IP address) Standard Load Balancer, there will be no
outbound internet connectivity, unless additional configuration is performed
to allow routing to public endpoints. For information on how to achieve
outbound connectivity, see Public endpoint connectivity for virtual machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
Application Gateway provides an application delivery controller as a service, which
is used to help applications direct user traffic to one or more web application
servers. It offers various layer 7 load-balancing capabilities like TLS/SSL offloading,
Web Application Firewall, and cookie-based session affinity for your applications.

In an SAP BI platform, Application Gateway directs application web traffic to the
specified resources in a back-end pool. In this case, it's either azuswinboap1 or
azuswinboap2. You assign a listener to the port, create rules, and add resources to
a back-end pool. In the following figure, Application Gateway with a private front-
end IP address (10.31.3.25) acts as an entry point for users, handles incoming
TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL, and passes the
request (HTTP - TCP/8080) to the servers in the back-end pool. With the built-in
TLS/SSL termination feature, you need to maintain only one TLS/SSL certificate on
the application gateway, which simplifies operations.

To configure Application Gateway for an SAP BOBI web server, see Load balancing
SAP BOBI web servers by using Application Gateway on the SAP blog.
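
As a rough illustration of the setup, the following Azure CLI sketch creates a v2
application gateway in its own subnet that routes traffic to the Tomcat port 8080 on
the two BI hosts. Names and back-end IP addresses are assumptions, and the TLS
listener and certificate configuration are omitted for brevity.

Bash

# Minimal sketch: HTTP front end only; add an HTTPS listener and certificate
# for production use.
az network application-gateway create \
  --name bobi-appgw \
  --resource-group bobi-rg \
  --sku Standard_v2 \
  --capacity 2 \
  --vnet-name bobi-vnet \
  --subnet appgw-subnet \
  --frontend-port 80 \
  --http-settings-port 8080 \
  --http-settings-protocol Http \
  --servers 10.31.1.4 10.31.1.5 \
  --priority 100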

Note

Use Application Gateway to load balance the traffic to the web server because
it provides features like SSL offloading, centralized SSL management to
reduce encryption and decryption overhead on the server, round-robin
algorithms to distribute traffic, Web Application Firewall capabilities, and high
availability.

SAP BusinessObjects BI platform reliability on Azure
The SAP BusinessObjects BI platform includes different tiers, which are optimized for
specific tasks and operations. When a component from any one tier becomes
unavailable, the SAP BOBI application will either become inaccessible or certain
functionality of the application won't work. You need to make sure that each tier is
designed to be reliable to keep the application operational without any business
disruption.

This guide explores how features native to Azure in combination with an SAP BOBI
platform configuration improve the availability of an SAP deployment. This section
focuses on the following options for SAP BOBI platform reliability on Azure:

Backup and restore: This process creates periodic copies of data and applications
to separate locations. If the original data or applications are lost or damaged, the
copies can be used to restore or recover to the previous state.
High availability: A high-availability platform has at least two of everything within
an Azure region to keep the application operational if one of the servers becomes
unavailable.
Disaster recovery (DR): This process restores your application functionality if there
are any catastrophic losses. For example, an entire Azure region might become
unavailable because of a natural disaster.

Implementation of this solution varies based on the nature of the system set up in
Azure. You need to tailor your backup and restore, high-availability, and DR solutions
based on your business requirements.

Backup and restore


Backup and restore is a process of creating periodic copies of data and applications to a
separate location so that they can be restored or recovered to a previous state if the
original data or applications are lost or damaged. It's also an essential component of
any business DR strategy. These backups enable application and database restore to a
point in time within the configured retention period.

To develop a comprehensive backup and restore strategy for an SAP BOBI platform,
identify the components that lead to system downtime or disruption in the application.
In an SAP BOBI platform, backup of the following components is vital to protect the
application:

SAP BOBI installation directory (Managed Premium Disks)
Filestore (Azure Premium Files or Azure NetApp Files for distributed installation)
CMS and audit database (SQL Database, Azure Database for MySQL, or a database
on Azure Virtual Machines)

The following section describes how to implement a backup and restore strategy for
each component on an SAP BOBI platform.

Backup and restore for an SAP BOBI installation directory


In Azure, the simplest way to back up VMs and all the attached disks is by using Azure
Backup. It provides an independent and isolated backup to guard against unintended
destruction of the data on your VMs. Backups are stored in a Recovery Services vault
with built-in management of recovery points. Configuration and scaling are simple.
Backups are optimized and can be restored easily when needed.
As part of the backup process, a snapshot is taken. The data is transferred to the
Recovery Services vault with no effect on production workloads. The snapshot provides
a different level of consistency as described in Snapshot consistency. Backup also offers
side-by-side support for backup of managed disks by using Azure disk backup in
addition to an Azure VM backup solution. It's useful when you need consistent backups
of VMs once a day and more frequent backups of OS disks, or a specific data disk, that
are crash consistent. For more information, see About Azure VM backup, Azure disk
backup, and FAQs: Back up Azure VMs.
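
For example, enabling Azure Backup for one of the BI VMs with the Azure CLI might
look like the following; the vault and policy names are assumptions and must exist in
the same region as the VM.

Bash

# Protects the VM with an existing Recovery Services vault and backup policy.
az backup protection enable-for-vm \
  --resource-group bobi-rg \
  --vault-name bobi-rsv \
  --vm azuswinboap1 \
  --policy-name DefaultPolicy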

Backup and restore for filestore


Based on your deployment, filestore of an SAP BOBI platform can be on Azure NetApp
Files or Azure Files. Choose from the following options for backup and restore based on
the storage you use for filestore:

Azure NetApp Files: For Azure NetApp Files, you can create on-demand snapshots
and schedule an automatic snapshot by using snapshot policies. Snapshot copies
provide a point-in-time copy of your Azure NetApp Files volume. For more
information, see Manage snapshots by using Azure NetApp Files.
Azure Files: Azure Files backup is integrated with a native instance of Backup,
which centralizes the backup and restore function along with VM backup and
simplifies operation work. For more information, see Azure file share backup and
FAQs: Back up Azure Files.

If you've created a separate NFS server, make sure you implement a backup and
restore strategy for it as well.

Backup and restore for the CMS and audit database


For an SAP BOBI platform running on Windows VMs, the CMS and audit database can
run on any of the supported databases as described in the support matrix of the SAP
BOBI platform planning and implementation guide on Azure. So it's important that you
adopt the backup and restore strategy based on the database you used for CMS and
audit data storage.

SQL Database uses SQL Server technology to create full backups every week,
differential backups every 12 to 24 hours, and transaction log backups every 5 to
10 minutes. The frequency of transaction log backups is based on the compute
size and the amount of database activity.

Users can choose an option to configure backup storage redundancy between LRS,
ZRS, or GRS blobs. Storage redundancy mechanisms store multiple copies of your
data to protect it from planned and unplanned events, which includes transient
hardware failure, network or power outages, or massive natural disasters. By
default, SQL Database stores backup in GRS blobs that are replicated to a paired
region. It can be changed based on the business requirement to either LRS or ZRS
blobs. For more up-to-date information on SQL Database backup scheduling,
retention, and storage consumption, see Automated backups: Azure SQL Database
and Azure SQL Managed Instance.

Azure Database for MySQL automatically creates server backups and stores in
user-configured LRS or GRS. Azure Database for MySQL takes backups of the data
files and the transaction log. Depending on the supported maximum storage size,
it either takes full and differential backups (4-TB max storage servers) or snapshot
backups (up to 16-TB max storage servers). These backups allow you to restore a
server at any point in time within your configured backup retention period. The
default backup retention period is 7 days, which you can optionally configure up to
35 days. All backups are encrypted by using AES 256-bit encryption. These backup
files aren't user exposed and can't be exported. These backups can only be used
for restore operations in Azure Database for MySQL. You can use mysqldump to
copy a database. For more information, see Backup and restore in Azure Database
for MySQL.

For a database installed on an Azure VM, you can use standard backup tools or
Backup for supported databases. Also, if the Azure services and tools don't meet
your requirements, you can use supported third-party backup tools that provide an
agent for backup and recovery of all SAP BOBI platform components.

High availability
High availability refers to a set of technologies that can minimize IT disruptions by
providing business continuity of applications or services through redundant, fault-
tolerant, or failover-protected components inside the same datacenter. In our case, the
datacenters are within one Azure region. The article High-availability architecture and
scenarios for SAP provides insight on different high-availability techniques and
recommendations offered on Azure for SAP applications, which complement the
instructions in this section.

Based on the sizing result of the SAP BOBI platform, you need to design the landscape
and determine the distribution of BI components across Azure VMs and subnets. The
level of redundancy in the distributed architecture depends on the business-required
recovery time objective (RTO) and recovery point objective (RPO). The SAP BOBI
platform includes different tiers, and components on each tier should be designed to
achieve redundancy. Then if one component fails, there's little to no disruption to your
SAP BOBI application. For example:

Redundant application servers like BI application servers and web servers


Unique components like CMS database, filestore, and load balancer

The following section describes how to achieve high availability on each component of
an SAP BOBI platform.

High availability for application servers


BI and web application servers don't need a specific high-availability solution, whether
they're installed separately or together. You can achieve high availability through
redundancy, that is, by configuring multiple instances of BI and web servers in various
Azure VMs. You can deploy the VMs in flexible scale sets, availability sets, or availability
zones based on the business-required RTO. For deployment across availability zones,
make sure all other components in the SAP BOBI platform are designed to be zone
redundant too.

Currently, not all Azure regions offer availability zones, so you need to adopt the
deployment strategy based on your region. The Azure regions that offer zones are listed
in Azure availability zones.

Important

The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, we advise using a flexible
scale set with FD=1 rather than a standard availability zone deployment.

High availability for the CMS database


If you're using an Azure database as the solution for your CMS and audit database, a
locally redundant high-availability framework is provided by default. Select a region, and
the service's inherent high-availability, redundancy, and resiliency capabilities apply
without requiring you to configure any more components. If the deployment strategy for an SAP
BOBI platform is across an availability zone, make sure you achieve zone redundancy for
your CMS and audit database. For more information on high availability for supported
database offerings in Azure, see High availability for Azure SQL Database and High
availability in Azure Database for MySQL.

For other database management system (DBMS) deployment for a CMS database, see
DBMS deployment guides for SAP workload for insight on a different DBMS deployment
and its approach to achieving high availability.

High availability for filestore


Filestore refers to the disk directories where content like reports, universes, and
connections is stored. It's shared across all application servers of the system, so you
must make sure that it's highly available, alongside the other SAP BOBI platform
components.

For an SAP BOBI platform running on Windows, you can choose either Azure Premium
Files or Azure NetApp Files for the filestore; both are designed to be highly available
and highly durable. Azure Premium Files supports ZRS, which can be useful for
cross-zone deployment of an SAP BOBI platform. For more information, see the
Redundancy section for Azure Files.

Because the file share service isn't available in all regions, make sure you see the list of
products available by region to find up-to-date information. If the service isn't
available in your region, you can create an NFS server from which you can share the file
system to an SAP BOBI application. But you'll also need to consider its high availability.

High availability for the load balancer


To distribute traffic across a web server, you can use Load Balancer or Application
Gateway. The redundancy for either of the load balancers can be achieved based on the
SKU you choose for deployment:

Load Balancer: Redundancy can be achieved by configuring the Standard Load
Balancer front end as zone redundant. For more information, see Standard Load
Balancer and availability zones.
Application Gateway: High availability can be achieved based on the type of tier
selected during deployment:
The v1 SKU supports high-availability scenarios when you've deployed two or
more instances. Azure distributes these instances across update and fault
domains to ensure that instances don't all fail at the same time. With this SKU,
redundancy can be achieved within the zone.
The v2 SKU automatically ensures that new instances are spread across fault
domains and update domains. If you choose zone redundancy, the newest
instances are also spread across availability zones to offer zonal failure
resiliency. For more information, see Autoscaling and zone-redundant
Application Gateway v2.

Reference high-availability architecture for the SAP BusinessObjects BI platform
The following reference architecture describes the setup of an SAP BOBI platform across
availability zones running on a Windows server. The architecture showcases the use of
different Azure services like Application Gateway, Azure Premium Files (filestore), and
SQL Database (CMS and audit database). The SAP BOBI platform offers built-in zone
redundancy, which reduces the complexity of managing different high-availability
solutions.

In the following figure, the incoming traffic (HTTPS - TCP/443) is load balanced by using
Application Gateway v2 SKU, which spans multiple availability zones. The application
gateway distributes the user request across web servers, which are distributed across
availability zones. The web server forwards the request to management and processing
server instances that are deployed in separate VMs across availability zones. Azure
premium files with ZRS are attached via private link to management and storage tier
VMs to access the contents like reports, universe, and connections. The application
accesses the CMS and audit database running on a zone-redundant instance of SQL
Database, which replicates databases across multiple physical locations within an Azure
region.
The preceding architecture provides insight on how an SAP BOBI deployment on Azure
can be done. But it doesn't cover all possible configuration options for an SAP BOBI
platform on Azure. You can tailor your deployment based on your business
requirements by choosing different products or services for components like Load
Balancer, File Repository Server, and DBMS.

If availability zones aren't available in your selected region, you can deploy Azure VMs in
availability sets. Azure makes sure the VMs you place within an availability set run across
multiple physical servers, compute racks, storage units, and network switches. If
hardware or software failure occurs, only a subset of your VMs is affected and the
overall solution stays operational.

Disaster recovery
This section explains the strategy to provide DR protection for an SAP BOBI platform. It
complements the Disaster recovery for SAP document, which represents the primary
resources for an overall SAP DR approach. For the SAP BOBI platform, see SAP Note
2056228 , which describes the following methods to implement a DR environment
safely:

Fully or selectively use Lifecycle Management or federation to promote or
distribute the content from the primary system.
Periodically copy over the CMS and FRS contents.

In this guide, we'll talk about the second option to implement a DR environment. We
won't cover an exhaustive list of all possible configuration options for DR. We'll cover a
solution that features native Azure services in combination with SAP BOBI platform
configuration.

Important

Availability of each component in the SAP BOBI platform should be factored
into the secondary region. The entire DR strategy must be thoroughly tested.
If your SAP BI platform is configured with a flexible scale set with FD=1, you
need to use PowerShell to set up Azure Site Recovery for disaster recovery.
Currently, it's the only method available to configure disaster recovery for
VMs deployed in a scale set.

Reference DR architecture for an SAP BusinessObjects BI platform
This reference architecture is running a multi-instance deployment of the SAP BOBI
platform with redundant application servers. For DR, you should fail over all the
components of the SAP BOBI platform to a secondary region. In the following figure,
Azure Premium Files is used as the filestore, SQL Database is used as the CMS and audit
repository, and Application Gateway is used to load balance traffic. The strategy to
achieve DR protection for each component is different, which is described in the
following section.
Load Balancer
Load Balancer is used to distribute traffic across web application servers of an SAP BOBI
platform. On Azure, you can use Load Balancer or Application Gateway to load balance
the traffic across web servers. To achieve DR for the load balancer services, you need to
implement another load balancer or application gateway on a secondary region. To keep
the same URL after DR failover, change the entry in DNS and point to the load-
balancing service that runs on the secondary region.

Virtual machines that run web and BI application servers


Azure Site Recovery can be used to replicate VMs running web and BI application
servers on the secondary region. It replicates the servers and all the attached managed
disks to the secondary region so that when disasters and outages occur, you can easily
fail over to your replicated environment and continue working. To start replicating all the
SAP application VMs to the Azure DR datacenter, follow the guidance in Replicate a
virtual machine to Azure.

Filestore
Filestore is a disk directory where the actual files like reports and BI documents are
stored. It's important that all the files in the filestore are in sync to the DR region. Based
on the type of file share service you use for the SAP BOBI platform running on Windows,
the necessary DR strategy needs to be adopted to sync the content. For example:

Azure Premium Files only supports LRS and ZRS. For an Azure Premium Files DR
strategy, you can use AzCopy or Azure PowerShell to copy your files to another
storage account in a different region, as sketched after this list. For more
information, see Disaster recovery and storage account failover.

Azure NetApp Files provides NFS and SMB volumes, so any file-based copy tool
can be used to replicate data between Azure regions. For more information on
how to copy Azure NetApp Files volume in another region, see FAQs about Azure
NetApp Files.

You can use Azure NetApp Files Cross-Region Replication, currently in preview ,
which uses NetApp SnapMirror technology. With this technology, only changed
blocks are sent over the network in a compressed, efficient format. This proprietary
technology minimizes the amount of data required to replicate across the regions,
which saves data transfer costs. It also shortens the replication time so that you
can achieve a smaller RPO. For more information, see Requirements and
considerations for using cross-region replication.
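
For the Azure Premium Files option above, a scheduled AzCopy job along the following
lines could keep the DR share in sync; the target account and the SAS tokens are
placeholders, and the schedule should match your RPO.

Bash

# Copies the share contents recursively to a storage account in the DR region.
azcopy copy \
  'https://azusbobi.file.core.windows.net/frsinput?<source-SAS>' \
  'https://<dr-account>.file.core.windows.net/frsinput?<target-SAS>' \
  --recursive

Repeat for the frsoutput share.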

CMS database
The CMS and audit database in the DR region must be a copy of the databases running
in the primary region. Based on the database type, it's important to copy the database
to a DR region based on business-required RTO and RPO. This section describes
different options available for each database solution in Azure that's supported for an
SAP BOBI application running on Windows.

Azure SQL Database

For a SQL Database DR strategy, two options are available to copy the database to the
secondary region. Both recovery options offer different levels of RTO and RPO. For more
information on the RTO and RPO for each recovery option, see Recover a database to an
existing server.

Option 1: Geo-redundant database backup restore

By default, SQL Database stores data in GRS blobs that are replicated to a paired region.
For a SQL database, the backup storage redundancy can be configured at the time of
CMS and audit database creation, or it can be updated for an existing database. The
changes made to an existing database apply to future backups only. You can restore a
database on any SQL database in any Azure region from the most recent geo-replicated
backups. Geo-restore uses a geo-replicated backup as its source. There's a delay
between when a backup is taken and when it's geo-replicated to an Azure blob in a
different region. As a result, the restored database can be up to one hour behind the
original database.

Important

Geo-restore is available for SQL databases configured with geo-redundant backup
storage.

Option 2: Geo-replication or an auto-failover group

Geo-replication is a SQL Database feature that allows you to create readable secondary
databases of individual databases on a server in the same or different region. If geo-
replication is enabled for the CMS and audit database, the application can initiate
failover to a secondary database in a different Azure region. Geo-replication is enabled
for individual databases, but to enable transparent and coordinated failover of multiple
databases (CMS and audit) for an SAP BOBI application, it's advisable to use an auto-
failover group. It provides the group semantics on top of active geo-replication, which
means the entire SQL server (all databases) is replicated to another region instead of
individual databases. Check the capabilities table that compares geo-replication with
failover groups.

Auto-failover groups provide read/write and read-only listener endpoints that remain
unchanged during failover. The read/write endpoint can be maintained as a listener in
the ODBC connection entry for the CMS and audit database. So whether you use
manual or automatic failover activation, failover switches all secondary databases in the
group to primary. After the database failover is completed, the DNS record is
automatically updated to redirect the endpoints to the new region. The application is
automatically connected to the CMS database as the read/write endpoint is maintained
as a listener in the ODBC connection.
In the following image, an auto-failover group for the SQL server (azussqlbodb) running
on the East US 2 region is replicated to the East US secondary region (DR site). The
read/write listener endpoint is maintained as a listener in an ODBC connection for the BI
application server running on Windows. After failover, the endpoint will remain the
same. No manual intervention is required to connect the BI application to the SQL
database on the secondary region.

This option provides a lower RTO and RPO than option 1. For more information about
this option, see Use auto-failover groups to enable transparent and coordinated failover
of multiple databases.
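
A sketch of creating such a failover group with the Azure CLI follows; the partner server
name is a placeholder, and the partner server must already exist in the secondary
region.

Bash

# Groups the CMS and audit databases so they fail over together.
az sql failover-group create \
  --name azussqlbodb-fog \
  --resource-group bobi-rg \
  --server azussqlbodb \
  --partner-server azussqlbodb-dr \
  --failover-policy Automatic \
  --add-db bocms boaudit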

Azure Database for MySQL

Azure Database for MySQL provides options to recover a database if there's a disaster.
Choose the appropriate option that works for your business:

Enable cross-region read replicas to enhance your business continuity and DR
planning. You can replicate from a source server up to five replicas. Read replicas
are updated asynchronously by using the Azure Database for MySQL binary log
replication technology. Replicas are new servers that you manage similar to regular
Azure Database for MySQL servers. To learn more about read replicas, available
regions, restrictions, and how to fail over, see Read replicas in Azure Database for
MySQL.

Use the Azure Database for MySQL geo-restore feature that restores the server by
using geo-redundant backups. These backups are accessible even when the region
on which your server is hosted is offline. You can restore from these backups to
any other region and bring your server back online.

Important
Geo-restore is only possible if you provisioned the server with geo-redundant
backup storage. Changing the backup redundancy options after server
creation isn't supported. For more information, see Backup redundancy.

The following table lists the recommendations for DR for each tier used in this example.

| SAP BOBI platform tier | Recommendation |
| --- | --- |
| Azure Application Gateway or Azure Load Balancer | Parallel setup of Application Gateway on a secondary region |
| Web application servers | Replicate by using Azure Site Recovery |
| BI application servers | Replicate by using Site Recovery |
| Azure Premium Files | AzCopy or Azure PowerShell |
| Azure NetApp Files | File-based copy tool to replicate data to a secondary region, or Azure NetApp Files Cross-Region Replication (preview) |
| Azure SQL Database | Geo-replication/auto-failover groups or geo-restore |
| Azure Database for MySQL | Cross-region read replicas or restore backup from geo-redundant backups |

Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform
deployment guide for Linux on Azure
Article • 06/15/2023

This article describes the strategy to deploy SAP BusinessObjects BI (BOBI) platform on
Azure for Linux. In this example, you configure two virtual machines with premium solid-
state drive (SSD) managed disks as the install directory. You use Azure Database for
MySQL for your CMS database, and you share Azure NetApp Files for your file
repository server across both servers. On both virtual machines, you install the default
Tomcat Java web application and BI platform application together. To load-balance user
requests, you use Azure Application Gateway with native TLS/SSL offloading capabilities.

This type of architecture is effective for small deployments or non-production
environments. For large deployments or production environments, you can have
separate hosts for your web application. You can also have multiple BOBI application
hosts, allowing the server to process more information.
Here's the product version and file system layout for this example:

SAP BusinessObjects platform 4.3
SUSE Linux Enterprise Server 12 SP5
Azure Database for MySQL (Version: 8.0.15)
MySQL C API Connector - libmysqlclient (Version: 6.1.11)
| File system | Description | Size (GB) | Owner | Group | Storage |
| --- | --- | --- | --- | --- | --- |
| /usr/sap | The file system for installation of the SAP BOBI instance, the default Tomcat web application, and the database drivers (if necessary). | SAP sizing guidelines | bl1adm | sapsys | Managed premium disk - SSD |
| /usr/sap/frsinput | The mount directory for the shared files across all BOBI hosts, used as the input file repository directory. | Business need | bl1adm | sapsys | Azure NetApp Files |
| /usr/sap/frsoutput | The mount directory for the shared files across all BOBI hosts, used as the output file repository directory. | Business need | bl1adm | sapsys | Azure NetApp Files |

Important

While the setup of the SAP BusinessObjects platform is explained using Azure
NetApp Files, you could use NFS on Azure Files as the input and output file
repository.

Deploy Linux virtual machine via Azure portal


In this section, you create two virtual machines with the Linux operating system image
for the SAP BOBI platform. The high-level steps to create the virtual machines are as
follows:

1. Create a resource group.

2. Create a virtual network.

Don't use a single subnet for all Azure services in the SAP BI platform
deployment. Based on SAP BI platform architecture, you need to create
multiple subnets. In this deployment, you create three subnets: one each for
the application, the file repository store, and Application Gateway.
In Azure, Application Gateway and Azure NetApp Files must always be on a
separate subnet. For more information, see Azure Application Gateway and
Guidelines for Azure NetApp Files network planning.

3. Select the suitable availability options for your preferred system configuration
within an Azure region: spanning across zones, residing within a single zone, or
operating in a region without zones.

4. Create virtual machine 1, called (azusbosl1).

You can either use a custom image or choose an image from Azure
Marketplace. For more information, see Deploying a VM from the Azure
Marketplace for SAP or Deploying a VM with a custom image for SAP .

5. Create virtual machine 2, called (azusbosl2).

6. Add one premium SSD disk. You'll use it as your SAP BOBI Installation directory.

Provision Azure NetApp Files


Before you continue with the setup for Azure NetApp Files, familiarize yourself with the
Azure NetApp Files documentation.

Azure NetApp Files is available in several Azure regions . Use Azure NetApp Files
availability by Azure region to check whether your selected Azure region offers Azure
NetApp Files.

Deploy Azure NetApp Files resources


The following instructions assume that you've already deployed your Azure virtual
network. The Azure NetApp Files resources, and the VMs where the Azure NetApp Files
resources will be mounted, must be deployed in the same Azure virtual network or in
peered Azure virtual networks.

1. Create an Azure NetApp Files account in your selected Azure region.

2. Set up an Azure NetApp Files capacity pool. The SAP BI platform architecture
presented in this article uses a single Azure NetApp Files capacity pool at the
Premium service level. For SAP BI File Repository Server on Azure, we recommend
using an Azure NetApp Files Premium or Ultra service Level.

3. Delegate a subnet to Azure NetApp Files.


4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.

You can deploy the volumes as NFSv3 and NFSv4.1, because both protocols are
supported for the SAP BOBI platform. Deploy the volumes in their respective Azure
NetApp Files subnets. The IP addresses of the Azure NetApp Files volumes are
assigned automatically.

Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the
same Azure virtual network or in peered Azure virtual networks. For example, azusbobi-
frsinput and azusbobi-frsoutput are the volume names, and nfs://10.31.2.4/azusbobi-
frsinput and nfs://10.31.2.4/azusbobi-frsoutput are the file paths for the Azure NetApp
Files volumes.

Volume azusbobi-frsinput (nfs://10.31.2.4/azusbobi-frsinput)


Volume azusbobi-frsoutput (nfs://10.31.2.4/azusbobi-frsoutput)
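
If you prefer the Azure CLI, a sketch for creating one of the volumes follows; the
account, capacity pool, and subnet names are assumptions, and the quota (usage
threshold) is in GiB.

Bash

# Creates an NFSv4.1 volume in the delegated Azure NetApp Files subnet.
az netappfiles volume create \
  --resource-group bobi-rg \
  --account-name bobi-anf \
  --pool-name bobi-pool \
  --name azusbobi-frsinput \
  --location eastus2 \
  --service-level Premium \
  --usage-threshold 100 \
  --file-path "azusbobi-frsinput" \
  --vnet bobi-vnet \
  --subnet anf-subnet \
  --protocol-types NFSv4.1

Repeat with adjusted names for the azusbobi-frsoutput volume.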

Important considerations
As you're creating your Azure NetApp Files for SAP BOBI platform file repository server,
be aware of the following considerations:

The minimum capacity pool is 4 tebibytes (TiB). The capacity pool size can be
increased in 1 TiB increments.
The minimum volume size is 100 gibibytes (GiB).
Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes
will be mounted must be in the same Azure virtual network, or in peered virtual
networks in the same region. Azure NetApp Files access over virtual network
peering in the same region is supported. Azure NetApp Files access over global
peering isn't currently supported.
The selected virtual network must have a subnet that is delegated to Azure NetApp
Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service
levels for Azure NetApp Files. While sizing the SAP Azure NetApp Files volumes,
make sure that the resulting throughput meets the application requirements.
With the Azure NetApp Files export policy, you can control the allowed clients and
the access type (for example, read-write or read-only).
The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't
deployed in all availability zones in an Azure region. Be aware of the potential
latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP BI platform applications.

Configure file systems on Linux servers


The steps in this section use the following prefix:

[A]: The step applies to all hosts.

Format and mount the SAP file system


1. [A] List all attached disks.

Bash

sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
└─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 32G 0 disk
└─sdb1 8:17 0 32G 0 part /mnt
sdc 8:32 0 128G 0 disk
sr0 11:0 1 628K 0 rom
# Premium SSD of 128 GB is attached to the virtual machine; its device name is sdc

2. [A] Format the block device for /usr/sap.

Bash

sudo mkfs.xfs /dev/sdc

3. [A] Create the mount directory.

Bash

sudo mkdir -p /usr/sap

4. [A] Get the UUID of the block device.

Bash
sudo blkid

# It displays information about the block devices. Copy the UUID of the formatted block device.

/dev/sdc: UUID="0eb5f6f8-fa77-42a6-b22d-7a9472b4dd1b" TYPE="xfs"

5. [A] Maintain the file system mount entry in /etc/fstab.

Bash

sudo echo "UUID=0eb5f6f8-fa77-42a6-b22d-7a9472b4dd1b /usr/sap xfs


defaults,nofail 0 2" >> /etc/fstab

6. [A] Mount the file system.

Bash

sudo mount -a

sudo df -h

Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 8.0K 7.9G 1% /dev
tmpfs 7.9G 82M 7.8G 2% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda4 29G 1.8G 27G 6% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/dev/sda3 1014M 87M 928M 9% /boot
/dev/sda2 512M 1.1M 511M 1% /boot/efi
/dev/sdb1 32G 49M 30G 1% /mnt
/dev/sdc 128G 29G 100G 23% /usr/sap

Mount the Azure NetApp Files volume


1. [A] Create mount directories.

Bash

sudo mkdir -p /usr/sap/frsinput
sudo mkdir -p /usr/sap/frsoutput

2. [A] Configure the client operating system to support NFSv4.1 mounts (only
applicable if you're using NFSv4.1).
If you're using Azure NetApp Files volumes with NFSv4.1 protocol, run the
following configuration on all VMs where Azure NetApp Files NFSv4.1 volumes
need to be mounted.

In this step, you need to verify NFS domain settings. Make sure that the domain is
configured as the default Azure NetApp Files domain ( defaultv4iddomain.com ), and
that the mapping is set to nobody .

Bash

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match
the default domain configuration on Azure NetApp Files
( defaultv4iddomain.com ). If there's a mismatch, the permissions for files
on Azure NetApp Files volumes that are mounted on the VMs will be
displayed as nobody .

Verify nfs4_disable_idmapping . It should be set to Y . To create the directory
structure where nfs4_disable_idmapping is located, run the mount command. You
won't be able to manually create the directory under /sys/modules, because access
is reserved for the kernel and drivers.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 10.31.2.4:/azusbobi-frsinput /mnt/tmp
umount /mnt/tmp

echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

3. [A] Add mount entries.

If you're using NFSv3:

Bash

sudo echo "10.31.2.4:/azusbobi-frsinput /usr/sap/frsinput nfs


rw,hard,rsize=65536,wsize=65536,vers=3" >> /etc/fstab
sudo echo "10.31.2.4:/azusbobi-frsoutput /usr/sap/frsoutput nfs
rw,hard,rsize=65536,wsize=65536,vers=3" >> /etc/fstab

If you're using NFSv4.1:

Bash

sudo echo "10.31.2.4:/azusbobi-frsinput /usr/sap/frsinput nfs


rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys" >> /etc/fstab
sudo echo "10.31.2.4:/azusbobi-frsoutput /usr/sap/frsoutput nfs
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys" >> /etc/fstab

4. [A] Mount NFS volumes.

Bash

sudo mount -a

sudo df -h

Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 8.0K 7.9G 1% /dev
tmpfs 7.9G 82M 7.8G 2% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda4 29G 1.8G 27G 6% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/dev/sda3 1014M 87M 928M 9% /boot
/dev/sda2 512M 1.1M 511M 1% /boot/efi
/dev/sdb1 32G 49M 30G 1% /mnt
/dev/sdc 128G 29G 100G 23% /usr/sap
10.31.2.4:/azusbobi-frsinput 101T 18G 100T 1% /usr/sap/frsinput
10.31.2.4:/azusbobi-frsoutput 100T 512K 100T 1% /usr/sap/frsoutput

Configure Azure Database for MySQL


This section provides details on how to provision Azure Database for MySQL by using
the Azure portal. It also provides instructions on how to create the CMS and audit
databases for the SAP BOBI platform, and a user account to access the database.
The guidelines are applicable only if you're using Azure Database for MySQL. For other
databases, refer to the SAP or database-specific documentation for instructions.

Create a database
Sign in to the Azure portal, and follow the steps in Quickstart: Create an Azure Database
for MySQL server by using the Azure portal. Here are a few points to note while you're
provisioning Azure Database for MySQL:

Select the same region for Azure Database for MySQL as where your SAP BI
platform application servers are running.

Choose a supported database version, based on the Product Availability Matrix
(PAM) for SAP BI specific to your SAP BOBI version.

In compute+storage, select Configure server, and select the appropriate pricing
tier based on your sizing output.

Storage Autogrowth is enabled by default. Keep in mind that storage can only be
scaled up, not down.

By default, Back up Retention Period is seven days. You can optionally configure it
up to 35 days.

Backups of Azure Database for MySQL are locally redundant by default. If you want
server backups in geo-redundant storage, select Geographically Redundant from
Backup Redundancy Options.

Important

Changing the Backup Redundancy Options after server creation isn't supported.

Note

The private link feature is only available for Azure Database for MySQL servers in
the General Purpose or Memory Optimized pricing tiers. Ensure that the database
server is in one of these pricing tiers.
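
If you'd rather script the server provisioning than use the portal, a minimal Azure CLI sketch follows. All names, credentials, and sizing values are hypothetical placeholders; choose the pricing tier, storage size, and retention from the guidance above.

Bash

# A minimal sketch with hypothetical names and sizing (--storage-size is in MB).
az mysql server create --resource-group <rg> --name <mysql-server> --location <region> \
    --admin-user <admin-user> --admin-password '<password>' \
    --sku-name GP_Gen5_2 --version 8.0 \
    --storage-size 512000 --backup-retention 35 --geo-redundant-backup Enabled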

Configure Azure Private Link


In this section, you create a private link that allows SAP BOBI virtual machines to connect
to Azure Database for MySQL through a private endpoint. Azure Private Link brings
Azure services inside your private virtual network.

1. Select the database created in the previous section.


2. Go to Security > Private endpoint connections.
3. In Private endpoint connections, select Private endpoint.
4. Select Subscription > Resource group > Location.
5. Enter the Name of the private endpoint.
6. In the Resource section, specify the following:

Resource type: Microsoft.DBforMySQL/servers


Resource: MySQL database created in the previous section
Target sub-resource: mysqlServer

7. In the Networking section, select the Virtual network and Subnet on which the
SAP BOBI application is deployed.

Note

If you have a network security group (NSG) enabled for the subnet, it will be
disabled for private endpoints on this subnet only. Other resources on the
subnet will still have NSG enforcement.

8. For Integrate with private DNS zone, accept the default (yes).
9. Select your private DNS zone from the dropdown list.
10. Select Review+Create, and create a private endpoint.

For more information, see Private Link for Azure Database for MySQL.
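
The same private endpoint can also be created with the Azure CLI. The sketch below uses the hypothetical names from the earlier sketches; the group ID mysqlServer corresponds to the target sub-resource from step 6.

Bash

# A minimal sketch with hypothetical resource names.
MYSQL_ID=$(az mysql server show --resource-group <rg> --name <mysql-server> --query id -o tsv)

az network private-endpoint create --resource-group <rg> --name <pe-name> \
    --vnet-name <vnet> --subnet <app-subnet> \
    --private-connection-resource-id "$MYSQL_ID" \
    --group-id mysqlServer --connection-name <connection-name>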

Create the CMS and audit databases


1. Download and install MySQL Workbench from MySQL website . Make sure you
install MySQL Workbench on the server that can access Azure Database for MySQL.

2. Connect to the server by using MySQL Workbench. Follow the instructions in Get
connection information. If the connection test is successful, you get a success
message.
3. In the SQL query tab, run the following query to create a schema for the CMS and
audit databases.

SQL

# Here cmsbl1 is the database name of the CMS database. You can provide any
name you want for the CMS database.
CREATE SCHEMA `cmsbl1` DEFAULT CHARACTER SET utf8;

# auditbl1 is the database name of the audit database. You can provide any
name you want for the audit database.
CREATE SCHEMA `auditbl1` DEFAULT CHARACTER SET utf8;

4. Create a user account to connect to the schema.

SQL

# Create a user that can connect from any host, use the '%' wildcard as
a host part
CREATE USER 'cmsadmin'@'%' IDENTIFIED BY 'password';
CREATE USER 'auditadmin'@'%' IDENTIFIED BY 'password';

# Grant all privileges to a user account over a specific database:
GRANT ALL PRIVILEGES ON cmsbl1.* TO 'cmsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON auditbl1.* TO 'auditadmin'@'%' WITH GRANT OPTION;

# Following any updates to the user privileges, be sure to save the changes by issuing FLUSH PRIVILEGES
FLUSH PRIVILEGES;

5. To check the privileges and roles of the MySQL user account:

SQL
USE sys;
SHOW GRANTS FOR 'cmsadmin'@'%';
+------------------------------------------------------------------------+
| Grants for cmsadmin@%                                                  |
+------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `cmsadmin`@`%`                                   |
| GRANT ALL PRIVILEGES ON `cmsbl1`.* TO `cmsadmin`@`%` WITH GRANT OPTION |
+------------------------------------------------------------------------+

USE sys;
SHOW GRANTS FOR 'auditadmin'@'%';
+----------------------------------------------------------------------------+
| Grants for auditadmin@%                                                    |
+----------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `auditadmin`@`%`                                     |
| GRANT ALL PRIVILEGES ON `auditbl1`.* TO `auditadmin`@`%` WITH GRANT OPTION |
+----------------------------------------------------------------------------+

Install MySQL C API connector on a Linux server


For the SAP BOBI application server to access a database, it requires database client
drivers. To access the CMS and audit databases, you must use the MySQL C API
Connector for Linux. An ODBC connection to the CMS database isn't supported. This
section provides instructions on how to set up MySQL C API Connector on Linux.

1. Refer to MySQL drivers and management tools compatible with Azure Database
for MySQL. Check for the MySQL Connector/C (libmysqlclient) driver in the article.

2. To download drivers, see MySQL Product Archives .

3. Select the operating system and download the shared component rpm package of
the MySQL Connector. In this example, the mysql-connector-c-shared-6.1.11 connector
version is used.

4. Install the connector in all SAP BOBI application instances.


Bash

# Install the rpm package
# On SLES:
sudo zypper install <package>.rpm
# On RHEL:
sudo yum install <package>.rpm

5. Check the path of libmysqlclient.so.

Bash

# Find the location of the libmysqlclient.so file
whereis libmysqlclient

# sample output
libmysqlclient: /usr/lib64/libmysqlclient.so

6. Set LD_LIBRARY_PATH to point to the /usr/lib64 directory for the user account that
will be used for installation.

Bash

# This configuration is for the bash shell. If you're using any other
shell for sidadm, set the environment variable accordingly.
vi /home/bl1adm/.bashrc

export LD_LIBRARY_PATH=/usr/lib64

Server preparation
The steps in this section use the following prefix:

[A]: The step applies to all hosts.

1. [A] Based on the flavor of Linux (SLES or RHEL), you need to set kernel parameters
and install required libraries. Refer to the "System requirements" section in
Business Intelligence Platform Installation Guide for Unix .

2. [A] Ensure that the time zone on your machine is set correctly. In the Installation
Guide, see Additional Unix and Linux requirements .

3. [A] Create the user account (bl1adm) and group (sapsys) under which the software's
background processes can run (see the sketch below). Use this account to run the
installation and the software. The account doesn't require root privileges.
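
A minimal sketch of that user and group creation, assuming the bl1adm and sapsys names used in this example; your security standards might dictate different options.

Bash

# Create the sapsys group and the bl1adm user with sapsys as its primary group.
sudo groupadd sapsys
sudo useradd -m -g sapsys -s /bin/bash bl1adm

# Set a password for the new account.
sudo passwd bl1adm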
4. [A] Set the user account (bl1adm) environment to use a supported UTF-8 locale,
and ensure that your console software supports UTF-8 character sets. To ensure
that your operating system uses the correct locale, set the LC_ALL and LANG
environment variables to your preferred locale in your (bl1adm) user environment.

Bash

# This configuration is for the bash shell. If you're using any other
shell for sidadm, set the environment variables accordingly.
vi /home/bl1adm/.bashrc

export LANG=en_US.utf8
export LC_ALL=en_US.utf8

5. [A] Configure user account (bl1adm).

Bash

# Set ulimits for bl1adm to unlimited. ulimit doesn't accept a user name
# argument, so maintain the limits for bl1adm in /etc/security/limits.conf
# instead; pam_limits then applies them at login.
root@azusbosl1:~> echo "bl1adm - fsize unlimited" >> /etc/security/limits.conf
root@azusbosl1:~> echo "bl1adm - nproc unlimited" >> /etc/security/limits.conf

root@azusbosl1:~> su - bl1adm
bl1adm@azusbosl1:~> ulimit -a

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63936
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

6. Download and extract media for SAP BusinessObjects BI platform from SAP Service
Marketplace.

Installation
Check the locale for user account bl1adm on the server:
Bash

bl1adm@azusbosl1:~> locale
LANG=en_US.utf8
LC_ALL=en_US.utf8

Go to the media of the SAP BOBI platform, and run the following command as the
bl1adm user:

Bash

./setup.sh -InstallDir /usr/sap/BL1

Follow the SAP BOBI platform Installation Guide for Unix, specific to your version.
Here are a few points to note while you're installing the SAP BOBI platform:

On Configure Product Registration, you can either use a temporary license key for
SAP BusinessObjects Solutions from SAP Note 1288121 , or you can generate a
license key in SAP Service Marketplace.

On Select Install Type, select Full installation on the first server ( azusbosl1 ). For
the other server ( azusbosl2 ), select Custom / Expand, which will expand the
existing BOBI setup.

On Select Default or Existing Database, select configure an existing database,
which will prompt you to select CMS and audit databases. Select MySQL for these
database types.

You can also select No auditing database, if you don’t want to configure auditing
during installation.

On the Select Java Web Application Server screen, select appropriate options based
on your SAP BOBI architecture. In this example, we selected option 1, which
installs a Tomcat server on the same SAP BOBI platform.

Enter CMS database information in Configure CMS Repository Database - MySQL.
The following example shows input for CMS database information for a Linux
installation. Azure Database for MySQL is used on the default port 3306.

(Optional) Enter audit database information in Configure Audit Repository
Database - MySQL. The following example shows input for audit database
information for a Linux installation.

Follow the instructions and enter required inputs to complete the installation.

For multi-instance deployment, run the installation setup on a second host ( azusbosl2 ).
For Select Install Type, select Custom / Expand, which will expand the existing BOBI
setup.

In Azure Database for MySQL, a gateway redirects the connections to server instances.
After the connection is established, the MySQL client displays the version of MySQL set
in the gateway, not the actual version running on your MySQL server instance. To
determine the version of your MySQL server instance, use the SELECT VERSION();
command at the MySQL prompt. For more details, see Supported Azure Database for
MySQL server versions.

SQL
# Run direct query to the database using MySQL Workbench

select version();

+-----------+
| version() |
+-----------+
| 8.0.15 |
+-----------+

Post-installation
After a multi-instance installation of the SAP BOBI platform, you need to perform
additional post-configuration steps to support application high availability.

Configure the cluster name


In a multi-instance deployment of the SAP BOBI platform, you want to run several CMS
servers together in a cluster. A cluster consists of two or more CMS servers working
together against a common CMS system database. If a node running a CMS fails, another
CMS node continues to service BI platform requests. By default in the SAP BOBI
platform, a cluster name reflects the hostname of the first CMS that you install.

To configure the cluster name on Linux, follow the instructions in the SAP Business
Intelligence Platform Administrator Guide . After configuring the cluster name, follow
SAP Note 1660440 to set the default system entry on the CMC or BI launchpad sign-in
page.

Configure input and output filestore location to Azure NetApp Files

Filestore refers to the disk directories where the actual SAP BusinessObjects files are.
The default location of the file repository server for the SAP BOBI platform is the
local installation directory. In a multi-instance deployment, it's important to set up
the filestore on shared storage, such as Azure NetApp Files. This allows access to the
filestore from all storage tier servers.

1. If you haven't already created NFS volumes, create them in Azure NetApp Files.
(Follow the instructions in the earlier section "Provision Azure NetApp Files.")
2. Mount the NFS volume. (Follow the instructions in the earlier section "Mount the
Azure NetApp Files volume.")

3. Follow SAP Note 2512660 to change the path of file repository (both input and
output).

Session replication in Tomcat clustering


Tomcat supports clustering two or more application servers for session replication and
failover. SAP BOBI platform sessions are serialized, so a user session can fail over
seamlessly to another instance of Tomcat, even when an application server fails.

For example, suppose a user is connected to a web server that fails while the user is
navigating a folder hierarchy in a SAP BI application. With a correctly configured cluster,
the user can continue navigating the folder hierarchy without being redirected to the
sign-in page.

See SAP Note 2808640 for steps to configure Tomcat clustering by using multicast.
Azure, however, doesn't support multicast. So to make the Tomcat cluster work in
Azure, you must use StaticMembershipInterceptor (SAP Note 2764907 ). For more
information, see the blog post Tomcat Clustering using Static Membership for SAP
BusinessObjects BI Platform .

Load-balancing web tier of SAP BI platform


In an SAP BOBI multi-instance deployment, Java web application servers (web tier) are
running on two or more hosts. To distribute the user load evenly across web servers,
you can use a load balancer between end users and web servers. In Azure, you can use
either Azure Load Balancer or Azure Application Gateway to manage traffic to your web
application servers. Details about each offering are explained in the following sections.

Azure Load Balancer

Azure Load Balancer is a high performance, low latency layer 4 (TCP, UDP) load balancer.
It distributes traffic among healthy virtual machines (VMs). A load balancer health probe
monitors a specified port on each VM, and only distributes traffic to an operational VM.
You can either choose a public load balancer or an internal load balancer, depending on
whether or not you want SAP BI platform accessible from the internet. It's zone
redundant, ensuring high-availability across availability zones.

In the following diagram, refer to the Internal Load Balancer section. The web
application server runs on port 8080, the default Tomcat HTTP port, which is
monitored by the health probe. Any incoming request from end users is redirected to
the web application servers ( azusbosl1 or azusbosl2 ). Load Balancer doesn't support
TLS/SSL termination (also known as TLS/SSL offloading). If you're using Load Balancer
to distribute traffic across web servers, use Standard Load Balancer.
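
As an illustration, here's a minimal Azure CLI sketch of such an internal Standard Load Balancer with a TCP health probe on port 8080; the resource names are hypothetical placeholders.

Bash

# A minimal sketch with hypothetical names; the frontend IP is taken from the web-tier subnet.
az network lb create --resource-group <rg> --name <ilb-name> --sku Standard \
    --vnet-name <vnet> --subnet <web-subnet> \
    --frontend-ip-name bobi-fe --backend-pool-name bobi-web-pool

# Health probe on the default Tomcat HTTP port 8080.
az network lb probe create --resource-group <rg> --lb-name <ilb-name> \
    --name tomcat-probe --protocol tcp --port 8080

# Rule that forwards port 8080 to the web application servers in the backend pool.
az network lb rule create --resource-group <rg> --lb-name <ilb-name> --name tomcat-rule \
    --protocol tcp --frontend-port 8080 --backend-port 8080 \
    --frontend-ip-name bobi-fe --backend-pool-name bobi-web-pool --probe-name tomcat-probe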

Note

When VMs without public IP addresses are placed in the pool of internal (no public
IP address) Standard Load Balancer, there will be no outbound internet
connectivity, unless you perform additional configuration to allow routing to public
end points. For more information, see Public endpoint connectivity for Virtual
Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
Azure Application Gateway
Azure Application Gateway provides Application Delivery Controller (ADC) as a service.
This service is used to direct user traffic to one or more web application servers.
It offers various layer 7 load-balancing capabilities, such as TLS/SSL offloading,
web application firewall (WAF), and cookie-based session affinity.

In the SAP BI platform, Application Gateway directs application web traffic to the
specified resources, either azusbosl1 or azusbosl2 . You assign a listener to a port, create rules, and
add resources to a pool. In the following diagram, Application Gateway has a private IP
address (10.31.3.20) that acts as an entry point for users. It also handles incoming
TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL, and passes on the
unencrypted request (HTTP - TCP/8080) to the servers. It simplifies operations to
maintain just one TLS/SSL certificate on Application Gateway.

To configure Application Gateway for a SAP BOBI web server, see the blog post Load
Balancing SAP BOBI Web Servers using Azure Application Gateway .

Note

Azure Application Gateway is preferable for load balancing the traffic to a web server.
It provides helpful features, such as SSL offloading, centralized SSL management to
reduce encryption and decryption overhead on the server, a round-robin algorithm
to distribute traffic, WAF capabilities, and high availability.

SAP BOBI platform reliability on Azure


SAP BOBI platform includes different tiers, which are optimized for specific tasks and
operations. When a component from any one tier becomes unavailable, a SAP BOBI
application either becomes inaccessible or limited in its functionality. Make sure that
each tier is designed to be reliable, to keep the application operational without any
business disruption.

This guide explores how features native to Azure, in combination with the SAP BOBI
platform configuration, improve the availability of SAP deployment. This section
focuses on the following options:

Backup and restore: It's a process of creating periodic copies of data and
applications to a separate location. You can restore or recover to a previous state if
the original data or applications are lost or damaged.
High availability: A highly available platform has at least two of everything within
an Azure region, to keep the application operational if one of the servers becomes
unavailable.

Disaster recovery: It's a process of restoring your application functionality if there's
any catastrophic loss, such as an entire Azure region becoming unavailable
because of a natural disaster.

Implementation of this solution varies based on the nature of the system setup in Azure.
Tailor your backup and restore, high availability, and disaster recovery solutions
according to your business requirements.

Backup and restore


Backup and restore is an essential component of any business disaster recovery strategy.
To develop a comprehensive strategy for the SAP BOBI platform, identify the components
that lead to system downtime or disruption in the application. In the SAP BOBI platform,
backups of the following components are vital to protect the application:

SAP BOBI installation directory (managed premium disks)
File repository server (Azure NetApp Files or Azure Premium Files)
CMS database (Azure Database for MySQL or a database on Azure Virtual Machines)

The following section describes how to implement a backup and restore strategy for
each of these components.

Backup and restore for SAP BOBI installation directory


In Azure, the simplest way to back up application servers and all the attached disks is by
using Azure Backup. It provides independent and isolated backups to guard against
unintended destruction of the data on your VMs. Backups are stored in a recovery
services vault, with built-in management of recovery points. Configuration and scaling
are simple, backups are optimized, and you can easily restore when you need to.

As part of backup process, a snapshot is taken, and the data is transferred to the vault
with no impact on production workloads. For more information, see Snapshot
consistency. You can also choose to back up a subset of the data disks in your VM, by
using the selective disks backup and restore functionality. For more information, see
Azure VM Backup and FAQs - Backup Azure VMs.
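
To illustrate, a minimal Azure CLI sketch that enables Azure Backup for one of the application VMs; the vault and policy names are hypothetical placeholders.

Bash

# A minimal sketch with hypothetical vault and policy names.
az backup vault create --resource-group <rg> --name <rsv-name> --location <region>

# Enable backup for the VM azusbosl1 by using the vault's default policy.
az backup protection enable-for-vm --resource-group <rg> --vault-name <rsv-name> \
    --vm azusbosl1 --policy-name DefaultPolicy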

Backup and restore for file repository server


Based on your SAP BOBI deployment on Linux, you can use Azure NetApp Files as the
filestore of your SAP BOBI platform. Choose from the following options for backup and
restore based on the storage you use for filestore.

Azure NetApp Files: You can create on-demand snapshots, and schedule
automatic snapshots by using snapshot policies. Snapshot copies provide a point-
in-time copy of your volume. For more information, see Manage snapshots by
using Azure NetApp Files.

If you've created a separate NFS server, make sure you also implement a backup
and restore strategy for that server.

Backup and restore for CMS and audit databases


On Linux VMs, the CMS and audit databases can run on any of the supported databases.
For more information, see the support matrix. It's important that you adopt the backup
and restore strategy based on the database used for the CMS and audit data store.

Azure Database for MySQL automatically creates server backups, and stores them
in user-configured, locally redundant or geo-redundant storage. Azure Database
for MySQL takes backups of the data files and the transaction log. Depending on
the supported maximum storage size, it either takes full and differential backups (4
TB max storage servers), or snapshot backups (up to 16 TB max storage servers).
These backups allow you to restore a server to any point in time within your
configured backup retention period. The default backup retention period is seven
days, which you can optionally configure up to 35 days. All backups are
encrypted by using AES 256-bit encryption. These backup files aren't user-exposed
and can't be exported. They can only be used for restore operations in
Azure Database for MySQL. You can use mysqldump to copy a database (see the
sketch after this list). For more information, see Backup and restore in Azure
Database for MySQL.

For a database installed on an Azure virtual machine, you can use standard backup
tools or Azure Backup for supported databases. You can also use supported third-
party backup tools that provide an agent for backup and recovery of all SAP BOBI
platform components.
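
As a sketch of the mysqldump option mentioned in the first bullet, here's a hypothetical example against an Azure Database for MySQL single server (the user@server login format applies to the single-server deployment model):

Bash

# Dump the CMS database to a local file; mysqldump prompts for the cmsadmin password.
mysqldump -h <mysql-server>.mysql.database.azure.com -u cmsadmin@<mysql-server> -p \
    --databases cmsbl1 --single-transaction > cmsbl1_backup.sql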

High availability
High availability refers to a set of technologies that can minimize IT disruptions by
providing business continuity of applications and services. It does so through redundant,
fault-tolerant, or failover-protected components inside the same datacenter. In our case,
the datacenters are within one Azure region. For more information, see High-availability
architecture and scenarios for SAP.

Based on the sizing result of the SAP BOBI platform, you need to design the landscape
and determine the distribution of BI components across Azure Virtual Machines and
subnets. The level of redundancy in the distributed architecture depends on the
recovery time objective (RTO) and recovery point objective (RPO) that you need for your
business. SAP BOBI platform includes different tiers, and components on each tier
should be designed to achieve redundancy. For example:

Redundant application servers, like BI application servers and web servers.
Unique components, like the CMS database, file repository server, and load balancer.

The following sections describe how to achieve high availability on each component of
the SAP BOBI platform.

High availability for application servers


You can achieve high availability for application servers by employing redundancy. To do
this, configure multiple instances of BI and web servers in various Azure VMs.

To reduce the impact of downtime due to planned and unplanned events, it's a good
idea to follow the high availability architecture guidance.

For more information, see Manage the availability of Linux virtual machines.

Important

The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, it's advised to use a flexible
scale set with FD=1 over a standard availability zone deployment.

High availability for a CMS database


If you're using Azure Database for MySQL for your CMS and audit databases, you have a
locally redundant, high availability framework by default. You just need to select the
region; the service provides inherent high availability, redundancy, and resiliency
capabilities without requiring you to configure any additional components. If the
deployment strategy for the SAP BOBI platform is across availability zones, you need
to make sure you achieve zone redundancy for your CMS and audit databases. For more
information, see High availability in Azure Database for MySQL and High availability
for Azure SQL Database.

For other deployments for the CMS database, see the high availability information in the
DBMS deployment guides for SAP Workload.

High availability for filestore


Filestore refers to the disk directories where contents like reports, universes, and
connections are stored. It's shared across all application servers of that system. So you
must make sure that it's highly available, along with other SAP BOBI platform
components.

For SAP BOBI platform running on Linux, you can choose Azure Premium Files or Azure
NetApp Files for file shares that are designed to be highly available and highly durable
in nature. For more information, see Redundancy for Azure Files.

Note that this file share service isn't available in all regions. See Products available by
region to find up-to-date information. If the service isn't available in your region, you
can create an NFS server from which you can share the file system to the SAP BOBI
application. But you'll also need to consider its high availability.

High availability for Load Balancer


To distribute traffic across a web server, you can either use Azure Load Balancer or Azure
Application Gateway. The redundancy for either of these can be achieved based on the
SKU you choose for deployment.

For Azure Load Balancer, redundancy can be achieved by configuring Standard
Load Balancer as zone-redundant. For more information, see Standard Load
Balancer and Availability Zones.

For Application Gateway, high availability can be achieved based on the type of tier
selected during deployment.
v1 SKU supports high-availability scenarios when you've deployed two or more
instances. Azure distributes these instances across update and fault domains to
ensure that instances don't all fail at the same time. You achieve redundancy
within the zone.
v2 SKU automatically ensures that new instances are spread across fault
domains and update domains. If you choose zone redundancy, the newest
instances are also spread across availability zones to offer zonal failure
resiliency (see the sketch after this list). For more details, see Autoscaling
and Zone-redundant Application Gateway v2.
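
For illustration, a minimal Azure CLI sketch of a zone-redundant Application Gateway v2 deployment; the resource names are hypothetical, and note that the v2 SKU requires a public frontend IP even if you mainly use a private frontend.

Bash

# A minimal sketch with hypothetical names; the backend HTTP settings point to Tomcat on port 8080.
az network application-gateway create --resource-group <rg> --name <appgw-name> \
    --location <region> --sku Standard_v2 --capacity 2 --zones 1 2 3 \
    --vnet-name <vnet> --subnet <appgw-subnet> --public-ip-address <appgw-pip> \
    --frontend-port 443 --http-settings-port 8080 --http-settings-protocol Http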

Reference high availability architecture for SAP BOBI platform

The following diagram shows the setup of the SAP BOBI platform running on Linux servers.
The architecture showcases the use of different services, like Azure Application Gateway,
Azure NetApp Files, and Azure Database for MySQL. These services offer built-in
redundancy, which reduces the complexity of managing different high availability
solutions.

Notice that the incoming traffic (HTTPS) is load-balanced by using Azure Application
Gateway v1/v2 SKU, which is highly available when deployed on two or more instances.
Multiple instances of the web server, management servers, and processing servers are
deployed in separate VMs to achieve redundancy. Azure NetApp Files has built-in
redundancy within the datacenter, so your Azure NetApp Files volumes for the file
repository server will be highly available. The CMS database is provisioned on Azure
Database for MySQL, which has inherent high availability. For more information, see
High availability in Azure Database for MySQL.
The preceding architecture provides insight into how an SAP BOBI deployment on Azure
can be done. But it doesn't cover all possible configuration options. You can tailor your
deployment based on your business requirements.

In several Azure regions, you can use availability zones. This means you can take
advantage of an independent supply of power source, cooling, and network. It enables
you to deploy an application across two or three availability zones. If you want to
achieve high availability across availability zones, you can deploy SAP BOBI platform
across these zones, making sure that each component in the application is zone
redundant.

Disaster recovery
This section explains the strategy to provide disaster recovery protection for an SAP BOBI
platform running on Linux. It complements the Disaster Recovery for SAP document,
which represents the primary resource for the overall SAP disaster recovery approach.
For SAP BOBI, refer to SAP Note 2056228 , which describes the following methods to
implement a disaster recovery environment safely.

Fully or selectively using lifecycle management or federation to promote and
distribute the content from the primary system.
Periodically copying over the CMS and file repository server contents.

This guide focuses on the second option. It won't cover all possible configuration
options for disaster recovery, but does cover a solution that features native Azure
services in combination with a SAP BOBI platform configuration.

Important

The availability of each component in the SAP BOBI platform should be
factored into the secondary region, and you must thoroughly test the entire
disaster recovery strategy.
If your SAP BI platform is configured with a flexible scale set with
FD=1, you need to use PowerShell to set up Azure Site Recovery for
disaster recovery. Currently, it's the only method available to configure
disaster recovery for VMs deployed in a scale set.

Reference disaster recovery architecture for SAP BOBI platform
This reference architecture is running a multi-instance deployment of the SAP BOBI
platform, with redundant application servers. For disaster recovery, you should fail over
all the components of the SAP BOBI platform to a secondary region. In the following
diagram, Azure NetApp Files is used as the filestore, Azure Database for MySQL as the
CMS and audit repository, and Azure Application Gateway as the load balancer. The
strategy to achieve disaster recovery protection for each component is different, and
these differences are described in the following sections.

Load balancer
A load balancer is used to distribute traffic across web application servers of the SAP
BOBI platform. On Azure, you can either use Azure Load Balancer or Azure Application
Gateway for this purpose. To achieve disaster recovery for the load balancer services,
you need to implement another Azure Load Balancer or Azure Application Gateway on
the secondary region. To keep the same URL after a disaster recovery failover, you need
to change the entry in the DNS, pointing to the load-balancing service running on the
secondary region.

VMs running web and BI application servers


Use Azure Site Recovery to replicate the VMs running web and BI application servers to
the secondary region. It replicates the servers and all their attached managed disks to
the secondary region, so that when disasters and outages occur, you can easily fail over
to your replicated environment and continue working. To start replicating all the SAP
application VMs to the Azure disaster recovery datacenter, follow the guidance in
Replicate a virtual machine to Azure.

File repository servers


Filestore is a disk directory where the actual files, like reports and BI documents, are
stored. It's important that all the files in the filestore are in sync to a disaster recovery
region. Based on the type of file share service you use for SAP BOBI platform running on
Linux, the appropriate disaster recovery strategy needs to be adopted to sync the
content.

Azure NetApp Files provides NFS and SMB volumes, so you can use any file-based
copy tool to replicate data between Azure regions. For more information on how
to copy a volume in another region, see FAQs About Azure NetApp Files.

You can use Azure NetApp Files cross-region replication, currently in preview .
Only changed blocks are sent over the network in a compressed, efficient format.
This minimizes the amount of data required to replicate across the regions, saving
data transfer costs. It also shortens the replication time, so you can achieve a
smaller RPO. For more information, see Requirements and considerations for using
cross-region replication.

Azure Premium Files only supports locally redundant storage (LRS) and zone-redundant
storage (ZRS). For the disaster recovery strategy, you can use AzCopy or Azure
PowerShell to copy your files to another storage account in a different region (see
the sketch after the following note). For more information, see Disaster recovery
and storage account failover.

Important

SMB Protocol for Azure Files is generally available, but NFS Protocol support
for Azure Files is currently in preview. For more information, see NFS 4.1
support for Azure Files is now in preview .
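
As a sketch of the AzCopy option from the preceding bullet, here's a hypothetical server-side copy between file shares in two regions; the account and share names are placeholders, and both URLs need valid SAS tokens.

Bash

# Copy the whole share from the primary-region account to the secondary-region account.
azcopy copy \
    "https://<primary-account>.file.core.windows.net/<share>?<SAS>" \
    "https://<secondary-account>.file.core.windows.net/<share>?<SAS>" \
    --recursive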

CMS database
The CMS and audit databases in the disaster recovery region must be a copy of the
databases running in the primary region. Depending on the database type, it's important
to copy the database to the disaster recovery region in line with the RTO and RPO that
your business requires.

Azure Database for MySQL

Azure Database for MySQL provides multiple options to recover a database if there's a
disaster. Choose an appropriate option that works for your business.

Enable cross-region read replicas to enhance your business continuity and disaster
recovery planning (see the sketch after the following note). You can replicate from
the source server to up to five replicas. Read replicas are updated asynchronously by
using MySQL's binary log replication technology. Replicas are new servers that you
manage similarly to regular servers in Azure Database for MySQL. For more information,
see Read replicas in Azure Database for MySQL.

Use the geo-restore feature to restore the server by using geo-redundant backups.
These backups are accessible even when the region on which your server is hosted
is offline. You can restore from these backups to any other region, and bring your
server back online.

Note

Geo-restore is only possible if you provisioned the server with geo-redundant
backup storage. Changing the Backup Redundancy Options after server
creation isn't supported. For more information, see Backup redundancy.
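
A minimal Azure CLI sketch of the read-replica option mentioned in the first bullet, assuming the hypothetical server names used earlier; the replica is placed in the secondary region.

Bash

# Resolve the source server's resource ID first.
SOURCE_ID=$(az mysql server show --resource-group <rg> --name <mysql-server> --query id -o tsv)

# Create a cross-region read replica in the secondary region.
az mysql server replica create --resource-group <rg> --name <mysql-server>-dr \
    --source-server "$SOURCE_ID" --location <secondary-region>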

The following table shows the recommendation for disaster recovery of each tier used in
this example.

| SAP BOBI platform tiers | Recommendation |
| --- | --- |
| Azure Application Gateway | Parallel setup of Application Gateway on a secondary region. |
| Web application servers | Replicate by using Azure Site Recovery. |
| BI application servers | Replicate by using Site Recovery. |
| Azure NetApp Files | File-based copy tool to replicate data to a secondary region, or by using cross-region replication. |
| Azure Database for MySQL | Cross-region read replicas, or restore backup from geo-redundant backups. |

Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Azure Storage types for SAP workload
Article • 07/13/2023

Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and prices. Some of
the storage types are of limited or no use for SAP scenarios, whereas several Azure storage types are
well suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage
types are certified for usage with SAP HANA. In this document, we go through the different types
of storage and describe their capabilities and usability with SAP workloads and SAP components.

A remark about the units used throughout this article: the public cloud vendors moved to use GiB
(gibibyte) or TiB (tebibyte) as size units, instead of gigabyte or terabyte. Therefore, all Azure
documentation and pricing use those units. Throughout the document, we reference the size
units of MiB, GiB, and TiB exclusively. You might need to plan with MB, GB, and TB. So, be aware of
some small differences in the calculations if you need to size for a 400 MiB/sec throughput, instead of a 250
MiB/sec throughput.

Microsoft Azure Storage resiliency


Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and
Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs in three copies on three
different storage nodes. Failing over to another replica, and seeding a new replica if there's a storage
node failure, is transparent. As a result of this redundancy, it's NOT required to use any kind of storage
redundancy layer across multiple Azure disks. This is called locally redundant storage (LRS). LRS is the
default for these types of storage in Azure. Azure NetApp Files provides sufficient redundancy to achieve
the same SLAs as other native Azure storage.

There are several more redundancy methods, which are all described in the article Azure Storage replication
that applies to some of the different storage types Azure has to offer.

7 Note

When using Azure storage for storing database data and redo log files, LRS is the only supported resiliency
level at this point in time

Also keep in mind that different Azure storage types influence the single VM availability SLAs as released in
SLA for Virtual Machines .

Azure managed disks


Managed disks are a resource type in Azure Resource Manager that can be used instead of VHDs that are
stored in Azure Storage Accounts. Managed Disks automatically align with the availability set of the
virtual machine they're attached to. With such an alignment, you experience an improvement of the
availability of your virtual machine and the services that are running in the virtual machine. For more
information, read the overview article.

Note
We require that new deployments of VMs that use Azure block storage for their disks (all Azure
storage except Azure NetApp Files and Azure Files) use Azure managed disks for the base
VHD/OS disks and the data disks that store SAP database files. This is independent of whether you
deploy the VMs through an availability set, across Availability Zones, or independent of the sets and
zones. Disks that are used for the purpose of storing backups aren't necessarily required to be managed disks.

Storage scenarios with SAP workloads


Persisted storage is needed in SAP workloads in various components of the stack that you deploy in Azure.
At minimum, these scenarios include:

Persisting the base VHD of your VM that holds the operating system and other software you install on
that disk. This disk/VHD is the root of your VM. Any changes made to it need to be persisted, so that
the next time you stop and restart the VM, all the changes made before still exist. This is especially
important in cases where the VM is getting deployed by Azure onto another host than it was running on originally
Persisted data disks. These disks are VHDs you attach to store application data in. This application
data could be data and log/redo files of a database, backup files, or software installations. That is, any
disk beyond your base VHD that holds the operating system
File shares or shared disks that contain your global transport directory for NetWeaver or S/4HANA.
Content of those shares is either consumed by software running in multiple VMs or is used to build
high-availability failover cluster scenarios
The /sapmnt directory or common file shares for EDI processes or similar. Content of those shares is
either consumed by software running in multiple VMs or is used to build high-availability failover
cluster scenarios

In the next few sections, the different Azure storage types and their usability for the four SAP workload
scenarios get discussed. A general categorization of how the different Azure storage types should be used
is documented in the article What disk types are available in Azure?. The recommendations for using the
different Azure storage types for SAP workload aren't majorly different.

For support restrictions on Azure storage types for SAP NetWeaver/application layer of S/4HANA, read the
SAP support note 2015553 . For SAP HANA certified and supported Azure storage types, read the article
SAP HANA Azure virtual machine storage configurations.

The sections describing the different Azure storage types will give you more background about the
restrictions and possibilities using the SAP supported storage.

Storage choices when using DBMS replication


Our reference architectures foresee the usage of DBMS functionality like SQL Server Always On, HANA
System Replication, Db2 HADR, or Oracle Data Guard. In case you're using these technologies between two
or multiple Azure virtual machines, the storage types chosen for each of the VMs are required to be the
same. That means the storage configuration between the active node and the replica node in a DBMS HA
configuration needs to be the same.

Storage recommendations for SAP storage scenarios


Before going into the details, we present the summary and recommendations at the beginning of the
document; the details for the particular types of Azure storage follow this section. Summarizing the
storage recommendations for the SAP storage scenarios in a table, it looks like:

| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible | Not possible | Not possible |
| Global transport directory | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Highly Recommended |
| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Recommended | Highly Recommended |
| DBMS data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended² | Not supported |
| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended¹ | Recommended | Recommended | Recommended² | Not supported |
| DBMS data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended² | Not supported |
| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended² | Not supported |
| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended³ |
| DBMS data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended¹ | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes

² Using ANF requires /hana/data and /hana/log to be on ANF

³ So far tested on SLES only

The characteristics you can expect from the different storage types are:

| Usage scenario | Standard HDD | Standard SSD | Premium Storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Throughput/IOPS SLA | No | No | Yes | Yes | Yes | Yes | Yes |
| Latency reads | High | Medium to high | Low | Submillisecond | Submillisecond | Submillisecond | Low |
| Latency writes | High | Medium to high | Low (submillisecond¹) | Submillisecond | Submillisecond | Submillisecond | Low |
| HANA supported | No | No | Yes¹ | Yes | Yes | Yes | No |
| Disk snapshots possible | Yes | Yes | Yes | No | No | Yes | No |
| Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | No³ | No |
| Aligned with Availability Zones | Yes | Yes | Yes | Yes | Yes | In public preview | No |
| Synchronous zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes |
| Asynchronous zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | In preview | No |
| Geo redundancy | Not for managed disks | Not for managed disks | No | No | No | Possible | No |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes

² Costs depend on provisioned IOPS and throughput

³ Creation of different ANF capacity pools doesn't guarantee deployment of capacity pools onto different storage units

Important

Check out the Azure NetApp Files section of this document to find specifics around proximity
placement of NFS volumes and VMs when less than 1 millisecond latencies are required.

Azure premium storage


Azure premium SSD storage got introduced with the goal to provide:

Low I/O latency
SLAs for IOPS and throughput
Less variability in I/O latency

This type of storage targets DBMS workloads and storage traffic that requires low single-digit millisecond
latency, with SLAs on IOPS and throughput. The cost basis for Azure premium storage isn't the actual data
volume stored in such disks, but the size category of such a disk, independent of the amount of data
that is stored within the disk. You also can create disks on premium storage that don't directly map
to the size categories shown in the article Premium SSD. Conclusions out of this article are:

The storage is organized in ranges. For example, disks in the range of 513 GiB to 1024 GiB capacity
share the same capabilities and the same monthly costs
The IOPS per GiB don't track linearly across the size categories. Smaller disks below 32 GiB have
higher IOPS rates per GiB. For disks beyond 32 GiB to 1024 GiB, the IOPS rate per GiB is between 4-5
IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB goes below 1
The I/O throughput for this storage isn't linear with the size of the disk category. For smaller disks, like
the category between 65 GiB and 128 GiB capacity, the throughput is around 780 KB per GiB. Whereas
for the extreme large disks like a 32,767 GiB disk, the throughput is around 28 KB per GiB
The IOPS and throughput SLAs can't be changed without changing the capacity of the disk

The capability matrix for SAP workload looks like:

| Capability | Comment | Notes/Links |
| --- | --- | --- |
| OS base VHD | Suitable | All systems |
| Data disk | Suitable | All systems - specially for SAP HANA |
| SAP global transport directory | Yes | Supported |
| SAP sapmnt | Suitable | All systems |
| Backup storage | Suitable | For short term storage of backups |
| Shares/shared disk | Not available | Needs Azure Premium Files or third party |
| Resiliency | LRS | No GRS or ZRS available for disks |
| Latency | Low to medium | - |
| IOPS SLA | Yes | - |
| IOPS linear to capacity | Semi linear in brackets | Managed Disk pricing |
| Maximum IOPS per disk | 20,000 dependent on disk size | Also consider VM limits |
| Throughput SLA | Yes | - |
| Throughput linear to capacity | Semi linear in brackets | Managed Disk pricing |
| HANA certified | Yes | Specially for SAP HANA |
| Azure Write Accelerator support | Yes | M/Mv2 VM families (see the text after this table) |
| Disk bursting | Yes | - |
| Disk snapshots possible | Yes | - |
| Azure Backup VM snapshots possible | Yes | - |
| Costs | Medium | - |

Azure premium storage doesn't fulfill SAP HANA storage latency KPIs with the common caching types
offered with Azure premium storage. In order to fulfill the storage latency KPIs for SAP HANA log writes,
you need to use Azure Write Accelerator caching as described in the article Enable Write Accelerator. Azure
Write Accelerator benefits all other DBMS systems for their transaction log writes and redo log writes.
Therefore, it's recommended to use it across all the SAP DBMS deployments. For SAP HANA, the usage of
Azure Write Accelerator for /hana/log with Azure premium storage is mandatory.

Summary: Azure premium storage is one of the Azure storage types recommended for SAP workload. This
recommendation applies to non-production and production systems. Azure premium storage is suited to
handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against
Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you
need to either overprovision storage capacity, or use functionality like Windows Storage Spaces or logical
volume managers in Linux to build stripe sets that give you the desired capacity on the one side, and the
necessary IOPS or throughput at best cost efficiency on the other.
Azure burst functionality for premium storage
For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The
exact way how disk bursting works is described in the article Disk bursting. When you read the article, you
understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the
nominal IOPS and throughput of the disks (for details on the nominal throughput see Managed Disk
pricing ). You're going to accrue the delta of IOPS and throughput between your current usage and the
nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
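
To make the accrual mechanics concrete, here's a small hypothetical calculation. The P10 figures (500 nominal IOPS, burstable up to 3,500 IOPS) come from the Managed Disk pricing page; the workload numbers are made up for illustration.

Bash

# Hypothetical example: P10 disk, nominal 500 IOPS, burstable up to 3,500 IOPS.
nominal=500; burst=3500; actual=100; idle_seconds=$((10*60))

# Credits accrue at (nominal - actual) IOPS while the workload stays below nominal.
credits=$(( (nominal - actual) * idle_seconds ))   # 240,000 I/O credits after 10 minutes

# A full burst drains credits at (burst - nominal) IOPS.
burst_seconds=$(( credits / (burst - nominal) ))   # 80 seconds of full burst

echo "Accrued ${credits} credits; full burst sustainable for ${burst_seconds}s (capped at 30 minutes)"

The same accrual logic applies to throughput (MBps) as well as IOPS.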

The ideal cases where this burst functionality can be planned in is likely going to be the volumes or disks
that contain data files for the different DBMS. The I/O workload expected against those volumes, especially
with small to mid-ranged systems is expected to look like:

Low to moderate read workload, since data ideally is cached in memory, or, as with SAP HANA, should
be completely in memory
Bursts of write triggered by database checkpoints or savepoints that are issued regularly
Backup workload that reads in a continuous stream in cases where backups aren't executed via
storage snapshots
For SAP HANA, load of the data into memory after an instance restart

Especially on smaller DBMS systems where your workload is handling a few hundred transactions per
seconds only, such a burst functionality can make sense as well for the disks or volumes that store the
transaction or redo log. Expected workload against such a disk or volumes looks like:

Regular writes to the disk that are dependent on the workload and the nature of workload since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
Read bursts when performing transaction log or redo log backups

Azure Premium SSD v2

Azure Premium SSD v2 storage is a newer version of premium storage that was introduced with the goal of providing:

Submillisecond I/O latency for smaller read and write I/O sizes
SLAs for IOPS and throughput
Capacity paid by the provisioned GB
A default set of IOPS and storage throughput per disk
The possibility to add more IOPS and throughput to each disk and pay separately for these extra provisioned resources
SAP HANA certification without the help of other functionality, like Azure Write Accelerator or other caches

This type of storage targets DBMS workloads, storage traffic that requires submillisecond latency, and SLAs on IOPS and throughput. Premium SSD v2 disks are delivered with a default set of 3,000 IOPS and 125 MBps throughput, plus the possibility to add more IOPS and throughput to individual disks. The pricing of the storage is structured in a way that adding more throughput or IOPS doesn't influence the price majorly. Nevertheless, we leave it up to you to decide how the storage configuration for Premium SSD v2 should look. For a starting point, read SAP HANA Azure virtual machine Premium SSD v2 storage configurations.
For the regions where this newer block storage type is available and its current restrictions, read the document Premium SSD v2.
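
The independent provisioning of capacity, IOPS, and throughput can be seen in how such a disk is created. The following Azure CLI sketch uses placeholder resource names and illustrative values; verify region availability and current limits in the Premium SSD v2 documentation first.

```bash
# A sketch (placeholder names/values): create a Premium SSD v2 disk where
# IOPS and throughput are provisioned independently of capacity.
# Premium SSD v2 disks are zonal resources, hence the --zone parameter.
az disk create \
  --resource-group rg-hana \
  --name hana-data-disk-1 \
  --size-gb 512 \
  --sku PremiumV2_LRS \
  --zone 1 \
  --disk-iops-read-write 12000 \
  --disk-mbps-read-write 600
```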

The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Not supported No system

Data disk Suitable All systems

SAP global transport directory Yes All systems

SAP sapmnt Suitable All systems

Backup storage Suitable For short term storage of backups

Shares/shared disk Not available Needs Azure Premium Files or Azure NetApp Files

Resiliency LRS No GRS or ZRS available for disks

Latency submillisecond -

IOPS SLA Yes -

IOPS linear to capacity Semi linear Managed Disk pricing

Maximum IOPS per disk 80,000 dependent on disk size Also consider VM limits

Throughput SLA Yes -

Throughput linear to capacity Semi linear Managed Disk pricing

HANA certified Yes -

Azure Write Accelerator support No -

Disk bursting No -

Disk snapshots possible No -

Azure Backup VM snapshots possible No -

Costs Medium -

In contrast to Azure premium storage, Azure Premium SSD v2 fulfills the SAP HANA storage latency KPIs. As a result, you DON'T need to use Azure Write Accelerator caching as described in the article Enable Write Accelerator.

Summary: Azure Premium SSD v2 is the block storage that offers the best price/performance ratio for SAP workloads. Azure Premium SSD v2 is suited to handle database workloads. The submillisecond latency makes it ideal storage for demanding DBMS workloads. However, it's a newer storage type that was released in November 2022, so there still might be some limitations that are going to go away over time.

Azure Ultra disk

Azure ultra disks deliver high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS VMs. Some benefits of ultra disks include the ability to dynamically change the IOPS and throughput of the disk, along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are suited for data-intensive workloads such as SAP DBMS workload. Ultra disks can only be used as data disks and can't be used as the base VHD disk that stores the operating system. We recommend the usage of Azure premium storage as the base VHD disk.

As you create an ultra disk, you can define three dimensions:

The capacity of the disk. The range is from 4 GiB to 65,536 GiB
Provisioned IOPS for the disk. Different maximum values apply depending on the capacity of the disk. Read the article Ultra disk for more details
Provisioned storage bandwidth. Different maximum bandwidth applies depending on the capacity of the disk. Read the article Ultra disk for more details

The cost of a single disk is determined by these three dimensions, which you define for each particular disk separately.
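
Because IOPS and throughput can be changed while the disk stays attached, adapting an Ultra disk to a changed workload pattern can be done online. A minimal Azure CLI sketch follows, with placeholder names and illustrative values:

```bash
# A sketch (placeholder names/values): adjust the provisioned IOPS and
# throughput of an existing Ultra disk without detaching it or restarting
# the VM that uses it.
az disk update \
  --resource-group rg-hana \
  --name hana-log-ultra \
  --disk-iops-read-write 20000 \
  --disk-mbps-read-write 750
```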

The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Doesn't work -

Data disk Suitable All systems

SAP global transport directory Yes Supported

SAP sapmnt Suitable All systems

Backup storage Suitable For short term storage of backups

Shares/shared disk Not available Needs third party

Resiliency LRS No GRS or ZRS available for disks

Latency Very low -

IOPS SLA Yes -

IOPS linear to capacity Semi linear in brackets Managed Disk pricing

Maximum IOPS per disk 1,200 to 160,000 dependent on disk capacity

Throughput SLA Yes -

Throughput linear to capacity Semi linear in brackets Managed Disk pricing

HANA certified Yes -

Azure Write Accelerator support No -

Disk bursting No -

Disk snapshots possible No -

Azure Backup VM snapshots possible No -

Costs Higher than Premium storage -


Summary: Azure ultra disks are a suitable storage with submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk doesn't support storage snapshots. In contrast to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where the I/O workload fluctuates a lot and you want to adapt the deployed storage throughput or IOPS to the storage workload patterns instead of sizing for the maximum usage of bandwidth and IOPS.

Azure NetApp Files (ANF)

Azure NetApp Files is the result of the cooperation between Microsoft and NetApp with the goal of providing high-performing, Azure-native NFS and SMB shares. The emphasis is to provide high bandwidth and low latency storage that enables DBMS deployment scenarios, and over time enable typical operational functionality of the NetApp storage through Azure as well. NFS/SMB shares are offered in three different service levels that differ in storage throughput and in price. The service levels are documented in the article Service levels for Azure NetApp Files. For the different types of SAP workload, the following service levels are highly recommended:

SAP DBMS workload: Performance, ideally Ultra
SAPMNT share: Performance, ideally Ultra
Global transport directory: Performance, ideally Ultra

Note

The minimum provisioning size is a 4 TiB unit that is called a capacity pool. You then create volumes out of this capacity pool. The smallest volume you can build is 100 GiB. You can expand a capacity pool in TiB steps. For pricing, check the article Azure NetApp Files Pricing.

ANF storage is currently supported for several SAP workload scenarios:

Providing SMB or NFS shares for SAP's global transport directory
The share sapmnt in high availability scenarios as documented in:
  High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB) for SAP applications
  High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications
  Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications
SAP HANA deployments using NFS v4.1 shares for /hana/data and /hana/log volumes and/or NFS v4.1 or NFS v3 volumes for /hana/shared volumes as documented in the article SAP HANA Azure virtual machine storage configurations
IBM Db2 in SUSE or Red Hat Linux guest OS
Oracle deployments in Oracle Linux guest OS using dNFS for Oracle data and redo log volumes. Some more details can be found in the article Azure Virtual Machines Oracle DBMS deployment for SAP workload
SAP ASE in SUSE or Red Hat Linux guest OS

Note

So far, no DBMS workloads are supported on SMB based on Azure NetApp Files.

As with Azure premium storage, a fixed or linear throughput size per GB can be a problem when you're required to adhere to certain minimum throughput numbers, as is the case for SAP HANA. With ANF, this problem can be more pronounced than with Azure premium disk. Using Azure premium disk, you can take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient and have higher throughput at lower capacity. This kind of striping doesn't work for NFS or SMB shares hosted on ANF. This restriction can result in the deployment of overcapacity, as the following examples and the calculation sketch after them show:

To achieve, for example, a throughput of 250 MiB/sec on an NFS volume hosted on ANF, you need to deploy 1.95 TiB capacity of the Ultra service level.
To achieve 400 MiB/sec, you would need to deploy 3.125 TiB capacity. You may need this over-provisioning of capacity to achieve the throughput you require of the volume. This over-provisioning of capacity impacts the pricing of smaller HANA instances.
Using NFS on top of ANF for the SAP /sapmnt directory, you usually get far with the minimum capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However, customer experience showed that the related throughput of 12.8 MiB/sec (using the Ultra service level) may not be enough and may have a negative impact on the stability of the SAP system. In such cases, customers could avoid issues by increasing the size of the /sapmnt volume, so that more throughput is provided to that volume.
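
A small sketch of the capacity math behind the first example, assuming the documented Ultra service level throughput of roughly 128 MiB/s per TiB of provisioned capacity:

```bash
# Capacity needed on the ANF Ultra service level (~128 MiB/s per TiB)
# to reach a target volume throughput of 250 MiB/s.
required_mibps=250
ultra_mibps_per_tib=128
awk -v t="$required_mibps" -v r="$ultra_mibps_per_tib" \
    'BEGIN { printf "Provision at least %.2f TiB\n", t / r }'
# Prints: Provision at least 1.95 TiB
```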

The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Doesn't work -

Data disk Suitable SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL

SAP global transport directory Yes SMB and NFS

SAP sapmnt Suitable All systems SMB (Windows only) or NFS (Linux only)

Backup storage Suitable -

Shares/shared disk Yes SMB 3.0, NFS v3, and NFS v4.1

Resiliency LRS and GRS GRS available

Latency Very low -

IOPS SLA Yes -

IOPS linear to capacity strictly linear Dependent on Service Level

Throughput SLA Yes -

Throughput linear to capacity linear Dependent on Service Level

HANA certified Yes -

Disk snapshots possible Yes -

Azure Backup VM snapshots possible No -

Costs Higher than Premium storage -

Other built-in functionality of ANF storage:

Capability to perform snapshots of volumes
Cloning of ANF volumes from snapshots
Restore of volumes from snapshots (snap-revert)
Application consistent snapshot backup for SAP HANA and Oracle

Important

Specifically for database deployments, you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below.

Important

Even for non-DBMS usage, you should use the preview functionality that allows you to create the NFS share in the same Azure Availability Zone as the VM(s) that mount the NFS shares. This functionality is documented in the article Manage availability zone volume placement for Azure NetApp Files. The motivation for this type of Availability Zone alignment is the reduction of the risk surface that comes with having the NFS shares in yet another Availability Zone where you don't run VMs.

You can get the closest proximity between VM and NFS share by using Application Volume Groups. The advantage of Application Volume Groups, besides allocating the best proximity and with that creating the lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. The disadvantage of this method is that you need to go through a pinning process again, a process that ends up restricting your VM deployment to a single datacenter instead of an Availability Zone, as with the first method introduced. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted.
The current process of manual pinning without using Application Volume Groups, which so far are available for SAP HANA only. This process uses the same manual pinning as Application Volume Groups. It's the method that was used for the last three years and has the same flexibility restrictions as the process with Application Volume Groups.

As a preference for allocating ANF-based NFS volumes for database-specific usage, you should first attempt to allocate the NFS volume in the same zone as your VM, especially for non-HANA databases. Only if latency proves to be insufficient should you go through a manual pinning process. For smaller HANA workloads or non-production HANA workloads, you should follow the zonal allocation method as well. Only in cases where performance and latency aren't sufficient should you use Application Volume Groups.

Summary: Azure NetApp Files is a HANA certified low latency storage that allows you to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide throughput and IOPS in a linear manner per GiB capacity of the volume. The ANF storage enables deploying SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or the SAP global transport directory. ANF storage also comes with functionality that is otherwise available as native NetApp functionality.

Azure Premium Files


Azure Premium Files is a shared storage that offers SMB and NFS for a moderate price and sufficient latency to handle shares of the SAP application layer. On top, Azure Premium Files offers synchronous zonal replication of the shares, with an automatism such that if one replica fails, another replica in another zone can take over. In contrast to Azure NetApp Files, there are no performance tiers, and there's no need for a capacity pool. Charging is based on the real provisioned capacity of the different shares. Azure Premium Files hasn't been tested as DBMS storage for SAP workload at all. Instead, the usage scenario for SAP workload focuses on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for the usage for /hana/shared.

Note

So far, no SAP DBMS workloads are supported on shared volumes based on Azure Premium Files.

SAP scenarios supported on Azure Premium Files are:

Providing SMB or NFS shares for SAP's global transport directory
The share sapmnt in high availability scenarios as documented in:
  High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files
  High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with NFS on Azure Files
  High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications
  High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server

Azure Premium Files starts with a larger amount of IOPS at the minimum share size of 100 GB compared to Azure NetApp Files. This higher bar of IOPS can avoid capacity overprovisioning to achieve certain IOPS and throughput values. For IOPS and storage throughput, read the section Azure file share scale targets in Azure Files scalability and performance targets.

The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Doesn't work -

Data disk Not supported for SAP workloads -

SAP global transport directory Yes SMB and NFS

SAP sapmnt Suitable All systems SMB (Windows only) or NFS (Linux only)

Backup storage Suitable -

Shares/shared disk Yes SMB 3.0, NFS v4.1



Resiliency LRS and ZRS No GRS available for Azure Premium Files

Latency low -

IOPS SLA Yes -

IOPS linear to capacity strictly linear -

Throughput SLA Yes -

Throughput linear to capacity strictly linear -

HANA certified No -

Disk snapshots possible No -

Azure Backup VM snapshots possible No -

Costs low -

Summary: Azure Premium Files is a low latency storage that allows you to deploy NFS and SMB volumes or shares. Azure Premium Files provides an excellent price/performance ratio for SAP application layer shares. It also provides synchronous zonal replication for these shares. So far, we don't support this storage type for SAP DBMS workload, though it can be used for /hana/shared volumes.

Azure standard SSD storage


Compared to Azure standard HDD storage, Azure standard SSD storage delivers better availability,
consistency, reliability, and latency. It's optimized for workloads that need consistent performance at lower
IOPS levels. This storage is the minimum storage used for non-production SAP systems that have low IOPS
and throughput demands. The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Restricted suitable Non-production systems

Data disk Restricted suitable Some non-production systems with low IOPS and latency demands

SAP global transport directory No Not supported

SAP sapmnt Restricted suitable Non-production systems

Backup storage Suitable -

Shares/shared disk Not available Needs third party

Resiliency LRS, GRS No ZRS available for disks

Latency high Too high for SAP Global Transport directory, or production systems

IOPS SLA No -

Maximum IOPS per disk 500 Independent of the size of disk

Throughput SLA No -

HANA certified No -

Disk snapshots possible Yes -

Azure Backup VM snapshots possible Yes -

Costs Low -

Summary: Azure standard SSD storage is the minimum recommendation for non-production VMs for the base VHD and for eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This Azure storage type isn't supported anymore for hosting the SAP Global Transport Directory.

Azure standard HDD storage


Azure standard HDD storage was the only storage type when the Azure infrastructure got certified for SAP NetWeaver workload in the year 2014. In 2014, the Azure virtual machines were small and low in storage throughput, so this storage type was able to just keep up with the demands. The storage is ideal for latency insensitive workloads, which you hardly experience in the SAP space. With the increasing throughput of Azure VMs and the increased workload these VMs are producing, this storage type isn't considered for usage with SAP scenarios anymore. The capability matrix for SAP workload looks like:

Capability Comment Notes/Links

OS base VHD Not suitable -

Data disk Not suitable -

SAP global transport directory No Not supported

SAP sapmnt No Not supported

Backup storage Suitable -

Shares/shared disk Not available Needs Azure Files or third party

Resiliency LRS, GRS No ZRS available for disks

Latency high Too high for DBMS usage, SAP Global Transport directory, or sapmnt/saploc

IOPS SLA No -

Maximum IOPS per disk 500 Independent of the size of disk

Throughput SLA No -

HANA certified No -

Disk snapshots possible Yes -



Azure Backup VM snapshots possible Yes -

Costs Low -

Summary: Standard HDD is an Azure storage type that should only be used to store SAP backups. It should only be used as the base VHD for rather inactive systems, like retired systems used for looking up data here and there. But no active development, QA, or production VMs should be based on that storage, nor should database files be hosted on that storage.

Azure VM limits in storage traffic

In contrast to on-premises scenarios, the individual VM type you select plays a vital role in the storage bandwidth you can achieve. For the different storage types, you need to consider:

| Storage type | Linux | Windows | Comments |
| --- | --- | --- | --- |
| Standard HDD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs |
| Standard SSD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs |
| Premium Storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Premium SSD v2 | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Ultra disk storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration |
| Azure NetApp Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth! |
| Azure Premium Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth! |

As limitations, you need to note that:

The smaller the VM, the fewer disks you can attach. This restriction doesn't apply to ANF; since you mount NFS or SMB shares, you don't encounter a limit on the number of shared volumes that can be attached
VMs have I/O throughput and IOPS limits that easily could be exceeded with premium storage disks and Ultra disks
With ANF and Azure Premium Files, the traffic to the shared volumes consumes the VM's network bandwidth and not its storage bandwidth
With large NFS volumes in the double digit TiB capacity space, the throughput accessing such a volume out of a single VM is going to plateau based on limits of Linux for a single session interacting with the shared volume.

As you up-size Azure VMs in the lifecycle of an SAP system, you should evaluate the IOPS and storage
throughput limits of the new and larger VM type. In some cases, it also could make sense to adjust the
storage configuration to the new capabilities of the Azure VM.
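
One hedged way to check those VM-level limits programmatically is the Azure CLI; the region and SKU name below are examples, and the capability names are those returned by the resource SKUs API:

```bash
# A sketch: list the uncached disk IOPS and throughput limits of a VM SKU
# before finalizing a storage layout. Region and SKU name are examples.
az vm list-skus --location westeurope --size Standard_M64s --all \
  --query "[0].capabilities[?name=='UncachedDiskIOPS' || name=='UncachedDiskBytesPerSecond']"
```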
Striping or not striping
Creating a stripe set out of multiple Azure disks into one larger volume allows you to accumulate the IOPS and throughput of the individual disks into one volume. It's used for Azure standard storage and Azure premium storage only. Azure Ultra disk, where you can configure the throughput and IOPS independent of the capacity of a disk, doesn't require the usage of stripe sets. Shared volumes based on NFS or SMB can't be striped. Due to the non-linear nature of Azure premium storage throughput and IOPS, you can provision smaller capacity with the same IOPS and throughput as large single Azure premium storage disks. That's the method to achieve higher throughput or IOPS at lower cost using Azure premium storage. For example, striping across two P15 premium storage disks gets you to a throughput of:

250 MiB/sec. Such a volume is going to have 512 GiB capacity. If you want a single disk that gives you 250 MiB throughput per second, you would need to pick a P40 disk with 2 TiB capacity.
400 MiB/sec by striping four P10 premium storage disks with an overall capacity of 512 GiB. If you would like a single disk with a minimum of 400 MiB throughput per second, you would need to pick a P60 premium storage disk with 8 TiB. Because the cost of premium storage is nearly linear with the capacity, you can sense the cost savings by using striping.

Some rules need to be followed on striping:

No in-VM storage redundancy should be configured, since Azure storage keeps the data redundant already
The disks the stripe set is applied to need to be of the same size
With Premium SSD v2 and Ultra disk, the capacity, provisioned IOPS, and provisioned throughput need to be the same for each disk

Striping across multiple smaller disks is the best way to achieve a good price/performance ratio using Azure premium storage. It's understood that striping can add some extra deployment and management overhead.

For specific stripe size recommendations, read the documentation for the different DBMS, like SAP HANA
Azure virtual machine storage configurations.

Next steps
Read the articles:

Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA Azure virtual machine storage configurations
SAP HANA Azure virtual machine storage configurations
Article • 03/19/2024

Azure provides different types of storage that are suitable for Azure VMs that are running SAP HANA. The SAP HANA certified Azure storage types that can be considered for SAP HANA deployments are:

Azure premium SSD or premium storage v1/v2
Ultra disk
Azure NetApp Files

To learn about these disk types, see the article Azure Storage types for SAP workload
and Select a disk type

Azure offers two deployment methods for VHDs on Azure Standard and premium
storage v1/v2. We expect you to take advantage of Azure managed disk for Azure
block storage deployments.

For a list of storage types and their SLAs in IOPS and storage throughput, review the
Azure documentation for managed disks .

Important

Independent of the Azure storage type chosen, the file system that is used on that storage needs to be supported by SAP for the specific operating system and DBMS. SAP support note #2972496 lists the supported file systems for different operating systems and databases, including SAP HANA. This applies to all volumes SAP HANA might access for reading and writing, for whatever task. Specifically, when using NFS on Azure for SAP HANA, additional restrictions on NFS versions apply, as stated later in this article.

The minimum SAP HANA certified conditions for the different storage types are:

Azure premium storage v1 - /hana/log is required to be supported by Azure Write Accelerator. The /hana/data volume could be placed on premium storage v1 without Azure Write Accelerator or on Ultra disk. Azure premium storage v2, or Azure premium SSD v2, doesn't support the usage of Azure Write Accelerator
Azure Ultra disk at least for the /hana/log volume. The /hana/data volume can be placed on either premium storage v1/v2 without Azure Write Accelerator or, in order to get faster restart times, on Ultra disk
NFS v4.1 volumes on top of Azure NetApp Files for /hana/log and /hana/data. The volume for /hana/shared can use the NFS v3 or NFS v4.1 protocol

Based on experience gained with customers, we changed the support for combining different storage types between /hana/data and /hana/log. It's supported to combine the usage of the different Azure block storage types that are certified for HANA and NFS shares based on Azure NetApp Files. For example, it's possible to put /hana/data onto premium storage v1 or v2, and /hana/log can be placed on Ultra disk storage in order to get the required low latency. If you use a volume based on ANF for /hana/data, the /hana/log volume can be placed on one of the HANA certified Azure block storage types as well. Using NFS on top of ANF for one of the volumes (like /hana/data) and Azure premium storage v1/v2 or Ultra disk for the other volume (like /hana/log) is supported.

In the on-premises world, you rarely had to care about the I/O subsystems and their capabilities. The reason was that the appliance vendor needed to make sure that the minimum storage requirements are met for SAP HANA. As you build the Azure infrastructure yourself, you should be aware of some of these SAP issued requirements. Some of the minimum throughput characteristics that SAP recommends are listed here (a test sketch follows the list):

Read/write on /hana/log of 250 MB/sec with 1 MB I/O sizes
Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes
Write activity of at least 250 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes
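
A hedged way to get a first feel for whether a volume can sustain such sequential throughput is a generic I/O tool like fio; note that fio isn't an SAP tool, and SAP's HWCCT/HCMT remain the authoritative KPI tests. The file path, size, and runtime below are placeholders:

```bash
# A sketch: probe sequential 1 MB writes against the log volume path.
# This only approximates one of the KPIs above; use SAP HCMT for the
# official certification measurements.
fio --name=hana-log-probe --filename=/hana/log/fio_testfile --size=4G \
    --rw=write --bs=1M --direct=1 --numjobs=1 --runtime=60 --time_based \
    --group_reporting
```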

Low storage latency is critical for DBMS systems, even though DBMS like SAP HANA keep data in-memory. The critical path in storage is usually around the transaction log writes of the DBMS systems. But also operations like writing savepoints or loading data in-memory after crash recovery can be critical. Therefore, it's mandatory to use Azure premium storage v1/v2, Ultra disk, or ANF for the /hana/data and /hana/log volumes.

Some guiding principles in selecting your storage configuration for HANA can be listed like:

Decide on the type of storage based on Azure Storage types for SAP workload and Select a disk type
Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on a VM. Overall VM storage throughput is documented in the article Memory optimized virtual machine sizes
When deciding on the storage configuration, try to stay below the overall throughput of the VM with your /hana/data volume configuration. When writing savepoints, SAP HANA can be aggressive issuing I/Os. It's easily possible to push up to the throughput limits of your /hana/data volume when writing a savepoint. If the disk(s) that build the /hana/data volume have a higher throughput than your VM allows, you could run into situations where the throughput utilized by the savepoint writing interferes with the throughput demands of the redo log writes. A situation that can impact the application throughput
If you're considering using HANA System Replication, the storage used for /hana/data on each replica must be the same, and the storage type used for /hana/log on each replica must be the same. For example, using Azure premium storage v1 for /hana/data with one VM and Azure Ultra disk for /hana/data in another VM running a replica of the same HANA System Replication configuration isn't supported

Important

The suggestions for the storage configurations in this or subsequent documents are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you're not utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required, and the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.

Stripe sets versus SAP HANA data volume partitioning
Using Azure premium storage v1, you may hit the best price/performance ratio when you stripe the /hana/data and/or /hana/log volume across multiple Azure disks, instead of deploying larger disk volumes that provide more of the IOPS or throughput needed. Creating a single volume across multiple Azure disks can be accomplished with the LVM and MDADM volume managers, which are part of Linux. The method of striping disks is decades old and well known. As beneficial as those striped volumes are to get to the IOPS or throughput capabilities you may need, striping adds complexities around managing those volumes, especially in cases when the volumes need to get extended in capacity. At least for /hana/data, SAP introduced an alternative method that achieves the same goal as striping across multiple Azure disks. Since SAP HANA 2.0 SPS03, the HANA indexserver is able to stripe its I/O activity across multiple HANA data files, which are located on different Azure disks. The advantage is that you don't have to take care of creating and managing a striped volume across different Azure disks. The SAP HANA functionality of data volume partitioning is described in detail in:

The HANA Administrator's Guide
Blog about SAP HANA – Partitioning Data Volumes
SAP Note #2400005
SAP Note #2700123

Reading through the details, it's apparent that applying this functionality takes away the complexities of volume manager based stripe sets. You also realize that the HANA data volume partitioning doesn't only work for Azure block storage, like Azure premium storage v1/v2. You can use this functionality as well to stripe across NFS shares in case these shares have IOPS or throughput limitations.
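
As an illustration only: the references above describe adding an additional base path for data volumes in global.ini and then creating a new partition. A hedged sketch of that second step via hdbsql follows; the instance number, user, and credentials are placeholders, and the SAP notes and guide above remain the authoritative procedure:

```bash
# A hedged sketch (SAP HANA 2.0 SPS03 or later): after an additional path
# has been added to basepath_datavolumes in global.ini, create a new data
# volume partition. Placeholders: instance 00, user SYSTEM.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION"
```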

Linux I/O Scheduler mode

Linux has several different I/O scheduling modes. The common recommendation from Linux vendors and SAP is to reconfigure the I/O scheduler mode for disk volumes from the mq-deadline or kyber mode to the noop (non-multiqueue) or none (multiqueue) mode, if not done yet by the SLES saptune profiles. Details are referenced in:

SAP Note #1984787
SAP Note #2578899
Issue with noop setting in SLES 12 SP4

On Red Hat, leave the settings as established by the specific tune profiles for the
different SAP applications.
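
A minimal sketch (device name sdc is an example; run as root) for inspecting and changing the scheduler of a single device; on SLES, the saptune profiles typically take care of this:

```bash
# Show the available schedulers; the active one is shown in brackets.
cat /sys/block/sdc/queue/scheduler
# Switch the device to the 'none' scheduler (multiqueue kernels).
echo none > /sys/block/sdc/queue/scheduler
```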

Stripe sizes when using logical volume managers

If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. These sizes differ between /hana/data and /hana/log. Recommendation: As stripe sizes, the recommendation is to use:

256 KB for /hana/data
64 KB for /hana/log
Note

The stripe size for /hana/data got changed from earlier recommendations calling for 64 KB or 128 KB to 256 KB based on customer experiences with more recent Linux versions. The size of 256 KB provides slightly better performance. We also changed the recommendation for stripe sizes of /hana/log from 32 KB to 64 KB in order to get enough throughput with larger I/O sizes.

Note

You don't need to configure any redundancy level using RAID volumes, since Azure block storage keeps three images of a VHD. The usage of a stripe set with Azure premium disks is purely to configure volumes that provide sufficient IOPS and/or I/O throughput.

Accumulating multiple Azure disks underneath a stripe set is accumulative from an IOPS and storage throughput side. So, if you put a stripe set across 3 x P30 Azure premium storage v1 disks, it should give you three times the IOPS and three times the storage throughput of a single Azure premium storage v1 P30 disk.

Important

In case you're using LVM or mdadm as the volume manager to create stripe sets across multiple Azure premium disks, the three SAP HANA file systems /data, /log, and /shared must not be put in a default or root volume group. It's highly recommended to follow the Linux vendors' guidance, which is typically to create individual volume groups for /data, /log, and /shared.
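
Putting these recommendations together, a minimal LVM sketch follows. The device names /dev/sdc through /dev/sdf are assumptions; the same pattern with a 64 KB stripe size applies to /hana/log:

```bash
# A sketch: dedicated volume group and striped logical volume for
# /hana/data across four Azure premium disks, 256 KB stripe size.
pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg_hana_data /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate --extents 100%FREE --stripes 4 --stripesize 256k \
         --name lv_hana_data vg_hana_data
mkfs.xfs /dev/vg_hana_data/lv_hana_data
```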

Considerations for the HANA shared file system
When sizing the HANA file systems, most attention is given to the HANA data and log file systems. However, /hana/shared also plays an important role in operating a stable HANA system, as it hosts essential components like the HANA binaries.
If undersized, /hana/shared could become I/O saturated due to excessive read/write operations - for instance, while writing a large dump, during intensive tracing, or if a backup is written to the /hana/shared file system. Latency could also increase.

If the HANA system is in an HA configuration, slow responses from the shared file system, that is /hana/shared, could cause cluster resource timeouts. These timeouts may lead to unnecessary failovers, because the HANA resource agents might incorrectly assume that the database is not available.

The SAP guidelines for recommended /hana/shared sizes look like:

| Volume | Recommended Size |
| --- | --- |
| /hana/shared scale-up | Min(1 TB, 1 x RAM) |
| /hana/shared scale-out | 1 x RAM of worker node per four worker nodes |
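
For example, under these guidelines a scale-up VM with 512 GiB RAM would get a /hana/shared of 512 GiB (Min(1 TB, 512 GiB)), while a VM with 4 TiB RAM would be capped at 1 TB. For a scale-out system with eight worker nodes of 2 TiB RAM each, /hana/shared would be sized at 2 x 2 TiB = 4 TiB (1 x RAM per group of four workers).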

Consult the following SAP notes for more details:

3288971 - FAQ: SUSE HAE/RedHat HAA Pacemaker Cluster Resource Manager in SAP HANA System Replication Environments
1999930 - FAQ: SAP HANA I/O Analysis

As a best practice, size /hana/shared to avoid performance bottlenecks. Remember that a well-sized /hana/shared file system contributes to the stability and reliability of your SAP HANA system, especially in HA scenarios.

Azure Premium Storage v1 configurations for HANA
For detailed HANA storage configuration recommendations using Azure premium
storage v1, read the document SAP HANA Azure virtual machine Premium SSD storage
configurations.

Azure Premium SSD v2 configurations for HANA
For detailed HANA storage configuration recommendations using Azure Premium SSD v2 storage, read the document SAP HANA Azure virtual machine Premium SSD v2 storage configurations.

Azure Ultra disk storage configuration for SAP HANA
For detailed HANA storage configuration recommendations using Azure Ultra Disk, read the document SAP HANA Azure virtual machine Ultra Disk storage configurations.

NFS v4.1 volumes on Azure NetApp Files

For details on ANF for HANA, read the document NFS v4.1 volumes on Azure NetApp Files for SAP HANA.

Next steps
For more information, see:

SAP HANA Azure virtual machine Premium SSD storage configurations.
SAP HANA Azure virtual machine Ultra Disk storage configurations.
NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
SAP HANA High Availability guide for Azure virtual machines.
SAP HANA Azure virtual machine Premium SSD storage configurations
Article • 04/01/2024

This document is about HANA storage configurations for Azure premium storage, or premium SSD, as it was introduced years back as low latency storage for DBMS and other applications that need low latency storage. For general considerations around stripe sizes when using LVM, HANA data volume partitioning, or other considerations that are independent of the particular storage type, check these two documents:

SAP HANA Azure virtual machine storage configurations
Azure Storage types for SAP workload

Important

The suggestions for the storage configurations in this document are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required, and the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.

Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines
Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively, in combination with Azure premium storage. As the name states, the purpose of the functionality is to improve the I/O latency of writes against Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the /hana/log volume only. Therefore, /hana/data and /hana/log are separate volumes, with Azure Write Accelerator supporting the /hana/log volume only.

Important

When using Azure premium storage, the usage of Azure Write Accelerator for the /hana/log volume is mandatory. Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator doesn't work in combination with other Azure VM families, like Esv3 or Edsv4.

The caching recommendations for Azure premium disks below assume the following I/O characteristics for SAP HANA:

There's hardly any read workload against the HANA data files. Exceptions are large sized I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data files can be HANA database backups. As a result, read caching mostly doesn't make sense, since in most cases all data file volumes need to be read completely.
Writing against the data files is experienced in bursts driven by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and doesn't hold up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be a rather exceptional situation.
There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing transaction log backups, crash recovery, or in the restart phase of a HANA instance.
The main load against the SAP HANA redo log file is writes. Dependent on the nature of workload, you can have I/Os as small as 4 KB or, in other cases, I/O sizes of 1 MB or more. Write latency against the SAP HANA redo log is performance critical.
All writes need to be persisted on disk in a reliable fashion

Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like this (a CLI sketch follows the list):

/hana/data - None or read caching
/hana/log - None. Enable Write Accelerator for M- and Mv2-Series VMs; the option in the Azure portal is "None + Write Accelerator."
/hana/shared - read caching
OS disk - don't change the default caching that is set by Azure at creation time of the VM
Azure burst functionality for premium storage
For Azure premium storage disks smaller than or equal to 512 GiB in capacity, burst functionality is offered. The exact way disk bursting works is described in the article Disk bursting. The article explains the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput, see Managed Disk pricing). You accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.

The ideal candidates for planning in this burst functionality are likely the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like:

Low to moderate read workload, since data ideally is cached in memory, or, as with SAP HANA, should be completely in memory
Bursts of writes triggered by database checkpoints or savepoints that are issued on a regular basis
Backup workload that reads in a continuous stream in cases where backups aren't executed via storage snapshots
For SAP HANA, load of the data into memory after an instance restart

Especially on smaller DBMS systems, where your workload is handling only a few hundred transactions per second, such burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. The expected workload against such a disk or volume looks like:

Regular writes to the disk that are dependent on the workload and its nature, since every commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding
indexes
Read bursts when performing transaction log or redo log backups

Production recommended storage solution based on Azure premium storage

Important

SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the /hana/log volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are expected to be configured with Azure Write Accelerator for the /hana/log volume.

Note

In scenarios that involve Azure premium storage, we're implementing burst capabilities into the configuration. As you use storage test tools of whatever shape or form, keep in mind the way Azure premium disk bursting works. Running the storage tests delivered through the SAP HWCCT or HCMT tool, we don't expect that all tests will pass the criteria, since some of the tests will exceed the bursting credits you can accumulate, especially when all the tests run sequentially without a break.

Note

With M32ts and M32ls VMs, it can happen that disk throughput is lower than expected using HCMT/HWCCT disk tests, even with disk bursting or with sufficiently provisioned I/O throughput of the underlying disks. The root cause of the observed behavior was that the HCMT/HWCCT storage test files were completely cached in the read cache of the premium storage data disks. This cache is located on the compute host that hosts the virtual machine and can cache the test files of HCMT/HWCCT completely. In such a case, the quotas listed in the column Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) in the article M-series are relevant. Specifically for M32ts and M32ls, the throughput quota against the read cache is only 400 MB/sec. As a result of the test files being completely cached, it's possible that despite disk bursting or higher provisioned I/O throughput, the tests can fall slightly short of the 400 MB/sec maximum throughput. As an alternative, you can test without read cache enabled on the Azure premium storage data disks.

Note

For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the SAP documentation for IAAS.

Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:

Configuration for the SAP /hana/data volume:

| VM SKU | RAM | Max. VM I/O Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
| M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 |
| M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000 |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 4 x P60² | 2,000 MBps | no bursting | 64,000 | no bursting |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 4 x P60² | 2,000 MBps | no bursting | 64,000 | no bursting |

¹ VM type not available by default. Please contact your Microsoft account team

² Maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more premium storage v1 capacity.
For the /hana/log volume, the configuration would look like:

| VM SKU | RAM | Max. VM I/O Throughput | /hana/log volume | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |

¹ VM type not available by default. Please contact your Microsoft account team

For the other volumes, the configuration would look like:

| VM SKU | RAM | Max. VM I/O Throughput | /hana/shared² | /root volume | /usr/sap |
| --- | --- | --- | --- | --- | --- |
| M32ts | 192 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
| M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
| M64ls | 512 GiB | 1,000 MBps | 1 x P20 | 1 x P6 | 1 x P6 |
| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |

¹ VM type not available by default. Please contact your Microsoft account team
² Review carefully the considerations for sizing /hana/shared

Check whether the storage throughput for the different suggested volumes meets the workload
that you want to run. If the workload requires higher volumes for /hana/data and /hana/log,
you need to increase the number of Azure premium storage VHDs. Sizing a volume with more
VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual
machine type.

Azure Write Accelerator only works with Azure managed disks . So at least the Azure premium
storage disks forming the /hana/log volume need to be deployed as managed disks. More
detailed instructions and restrictions of Azure Write Accelerator can be found in the article Write
Accelerator.

You may want to use Azure Ultra disk storage instead of Azure premium storage only for the /hana/log volume, to be compliant with the SAP HANA certification KPIs when using E-series VMs. Though, many customers use premium storage SSD disks for the /hana/log volume for non-production purposes, or even for smaller production workloads, since the write latency experienced with premium storage for the critical redo log writes meets the workload requirements. The configurations for the /hana/data volume on Azure premium storage could look like:
| VM SKU | RAM | Max. VM I/O Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E20(d)s_v5 | 160 GiB | 750 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E32ds_v5 | 256 GiB | 865 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E48(d)s_v5 | 384 GiB | 1,315 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64s_v3 | 432 GiB | 1,200 MB/s | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64(d)s_v5 | 512 GiB | 1,735 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E96(d)s_v5 | 672 GiB | 2,600 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |

For the other volumes, including /hana/log on Ultra disk, the configuration could look like:

| VM SKU | RAM | Max. VM I/O Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared¹ | /root volume | /usr/sap |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E20(d)s_v5 | 160 GiB | 750 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E32ds_v4 | 256 GiB | 768 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E32(d)s_v5 | 256 GiB | 865 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E48(d)s_v5 | 384 GiB | 1,315 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E64s_v3 | 432 GiB | 1,200 MBps | 220 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E64(d)s_v5 | 512 GiB | 1,735 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E96(d)s_v5 | 672 GiB | 2,600 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |

¹ Review carefully the considerations for sizing /hana/shared

Cost conscious solution with Azure premium storage
So far, the Azure premium storage solution described in this document in the section Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines was meant for SAP HANA production-supported scenarios. One of the characteristics of production-supportable configurations is the separation of the volumes for SAP HANA data and redo log into two different volumes. The reason for such a separation is that the workload characteristics on the volumes are different, and that with the suggested production configurations, different types of caching or even different types of Azure block storage could be necessary. For non-production scenarios, some of the considerations taken for production systems may not apply to lower-end non-production systems. As a result, the HANA data and log volume could be combined, though with some drawbacks, like possibly not meeting certain throughput or latency KPIs that are required for production systems. Another aspect to reduce costs in such environments can be the usage of Azure Standard SSD storage. Keep in mind that choosing Standard SSD or Standard HDD Azure storage has an impact on your single VM SLAs, as documented in the article SLA for Virtual Machines.

A less costly alternative for such configurations could look like:

| VM SKU | RAM | Max. VM I/O throughput | /hana/data and /hana/log striped with LVM or MDADM | /hana/shared³ | /root volume | /usr/sap | Comments |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DS14v2 | 112 GiB | 768 MBps | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| E16v3 | 128 GiB | 384 MBps | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | VM type not HANA certified; won't achieve less than 1 ms storage latency¹ |
| M32ts | 192 GiB | 500 MBps | 3 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 5,000² |
| E20ds_v4 | 160 GiB | 480 MBps | 4 x P6 | 1 x E15 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| E32v3 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | VM type not HANA certified; won't achieve less than 1 ms storage latency¹ |
| E32ds_v4 | 256 GiB | 768 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| M32ls | 256 GiB | 500 MBps | 4 x P10 | 1 x E15 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 5,000² |
| E48ds_v4 | 384 GiB | 1,152 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| E64v3 | 432 GiB | 1,200 MBps | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| E64ds_v4 | 504 GiB | 1,200 MBps | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Won't achieve less than 1 ms storage latency¹ |
| M64ls | 512 GiB | 1,000 MBps | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 5,000² |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M128ms, M128(d)ms_v2 | 3,800 GiB | 2,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 10,000² |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 5 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 7 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for the combined data and log volume will limit the IOPS rate to 20,000² |

¹ Azure Write Accelerator is only available for M-series VMs and can't be used with the DSv2, Ev3, and Ev4 VM families listed here. As a result of using Azure premium storage, the I/O latency won't be less than 1 ms.

² The VM family supports Azure Write Accelerator, but there's a potential that the IOPS limit of Write Accelerator could limit the disk configuration's IOPS capabilities.

³ Review carefully the considerations for sizing /hana/shared.

When combining the data and log volume for SAP HANA, the disks building the striped volume
shouldn't have read cache or read/write cache enabled.
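
As an illustration only, a minimal Azure CLI sketch for attaching such a combined data/log disk with host caching disabled and Azure Write Accelerator enabled on an M-series VM could look like the following; the resource group, VM, and disk names are placeholders, and the exact CLI syntax should be verified against the current Azure CLI documentation:

# attach the premium disk that will carry the combined /hana/data and /hana/log
# volume, with host caching disabled ("None") as recommended above
az vm disk attach --resource-group my-rg --vm-name my-hana-vm \
    --name hana-datalog-disk0 --caching None --lun 0

# enable Azure Write Accelerator for the disk at LUN 0 (M-series VMs only)
az vm update --resource-group my-rg --name my-hana-vm --write-accelerator 0=true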
There are VM types listed that aren't certified with SAP and as such aren't listed in the so-called
SAP HANA hardware directory . Feedback from customers was that those non-listed VM types
were used successfully for some non-production tasks.

Next steps
For more information, see:

SAP HANA High Availability guide for Azure virtual machines.


SAP HANA Azure virtual machine Premium SSD v2
storage configurations
Article • 04/01/2024

This document is about HANA storage configurations for Azure Premium SSD v2. Azure Premium SSD v2 is a
newer storage type that was developed to provide more flexible block storage with submillisecond latency for general purpose
and DBMS workloads. Premium SSD v2 simplifies the way you build storage architectures and lets you tailor
and adapt the storage capabilities to your workload. Premium SSD v2 allows you to configure and pay for
capacity, IOPS, and throughput independent of each other.

For general considerations around stripe sizes when using LVM, HANA data volume partitioning or other
considerations that are independent of the particular storage type, check these two documents:

SAP HANA Azure virtual machine storage configurations


Azure Storage types for SAP workload

) Important

The suggestions for the storage configurations in this document are meant as directions to start with.
Running workload and analyzing storage utilization patterns, you might realize that you're not utilizing all
the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary,
your workload might need more storage throughput than suggested with these configurations. As a result,
you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage
capacity required, storage latency needed, storage throughput and IOPS required and least expensive
configuration, Azure offers enough different storage types with different capabilities and different price
points to find and adjust to the right compromise for you and your HANA workload.

Major differences of Premium SSD v2 to premium storage and Ultra disk

The major differences of Premium SSD v2 compared to the existing NetWeaver and HANA certified storage types can be listed
like:

- With Premium SSD v2, you pay for the exact deployed capacity, unlike with premium disk and Ultra disk, where brackets of sizes determine the costs of capacity
- Every Premium SSD v2 storage disk comes with 3,000 IOPS and 125 MBps of throughput that is included in the capacity pricing
- Extra IOPS and throughput on top of the defaults that come with each disk can be provisioned at any point in time and are charged separately
- Changes to the provisioned IOPS and throughput can be executed once every 6 hours
- Latency of Premium SSD v2 is lower than premium storage, but higher than Ultra disk. It is submillisecond, though, so that it passes the SAP HANA KPIs without the help of any other functionality, like Azure Write Accelerator
- Like with Ultra disk, you can use Premium SSD v2 for /hana/data and /hana/log volumes without the need of any accelerators or other caches
- Like Ultra disk, Azure Premium SSD v2 doesn't offer caching options as premium storage does
- With Premium SSD v2, the same storage configuration applies to the HANA certified Ev4, Ev5, and M-series VMs that offer the same memory
- Unlike premium storage, there's no disk bursting for Premium SSD v2

Not having Azure Write Accelerator support or support by other caches makes the configuration of Premium
SSD v2 for the different VM families easier and more unified, and avoids variations that need to be considered in
deployment automation. Not having bursting capabilities makes the throughput and IOPS delivered more
deterministic and reliable. Since Premium SSD v2 is a newer storage type, there are still some restrictions related
to its features and capabilities. To read up on these limitations and differences between the different storage types,
start with reading the document Azure managed disk types.
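
As a sketch of how the independent provisioning of capacity, IOPS, and throughput surfaces in practice, a Premium SSD v2 disk could be created and later adjusted with the Azure CLI as follows; the resource group, disk name, zone, and chosen values are placeholder assumptions, not a recommendation:

# create a Premium SSD v2 disk with the default 3,000 IOPS and extra throughput
az disk create --resource-group my-rg --name hana-data-disk \
    --sku PremiumV2_LRS --zone 1 --size-gb 304 \
    --disk-iops-read-write 3000 --disk-mbps-read-write 425

# later, raise only the performance (allowed once every 6 hours); capacity unchanged
az disk update --resource-group my-rg --name hana-data-disk \
    --disk-iops-read-write 6000 --disk-mbps-read-write 600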

Production recommended storage solution based on Azure Premium SSD v2

7 Note

The configurations suggested below keep the HANA minimum KPIs, as listed in SAP HANA Azure virtual
machine storage configurations, in mind. Our tests so far gave no indications that, with the values listed,
SAP HCMT tests would fail in throughput or latency. That stated, not all possible variations and
combinations around stripe sets stretched across multiple disks or different stripe sizes were tested. Tests
conducted with striped volumes across multiple disks were done with the stripe sizes documented in SAP
HANA Azure virtual machine storage configurations.

7 Note

For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the SAP
documentation for IAAS .

When you look up the price list for Azure managed disks, it becomes apparent that the cost scheme
introduced with Premium SSD v2 gives you two general paths to pursue:

- You try to simplify your storage architecture by using a single disk for /hana/data and /hana/log and pay for more IOPS and throughput as needed to achieve the levels we recommend below. With the awareness that a single disk has a throughput level of 1,200 MBps and 80,000 IOPS.
- You want to benefit from the 3,000 IOPS and 125 MBps that come for free with each disk. To do so, you would build multiple smaller disks that sum up to the capacity you need and then build a striped volume with a logical volume manager across these multiple disks (see the sketch after this section). Striping across multiple disks would give you the possibility to reduce the IOPS and throughput cost factors, but would result in some more effort in automating deployments and operating such solutions.

Since we don't want to define which direction you should go, we're leaving the decision to you on whether to
take the single disk approach or the multiple disk approach. Though keep in mind that the single disk
approach can hit its limitations with the 1,200 MBps throughput. There might be a point where you need to
stretch /hana/data across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing
storage throughput are going to grow over time, and that HANA savepoints are critical and demand high
throughput for the /hana/data volume.
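
As a minimal sketch of the multiple disk approach, assuming four Premium SSD v2 data disks are already attached to the VM as /dev/sdc through /dev/sdf and a 256 KiB stripe size is used (check the stripe size guidance referenced above for your scenario; all device and volume names are placeholders):

# create physical volumes and a volume group across the four disks
sudo pvcreate /dev/sd{c,d,e,f}
sudo vgcreate vg-hana-data /dev/sd{c,d,e,f}

# create one logical volume striped across all four disks
sudo lvcreate --stripes 4 --stripesize 256k --extents 100%FREE \
    --name hana-data vg-hana-data

# create a file system and mount it as /hana/data
sudo mkfs.xfs /dev/vg-hana-data/hana-data
sudo mkdir -p /hana/data
sudo mount /dev/vg-hana-data/hana-data /hana/data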

) Important

You have the possibility to define the sector size of Azure Premium SSD v2 as 512 Bytes or 4096 Bytes.
Default sector size is 4096 Bytes. Tests conducted with HCMT did not reveal any significant differences in
performance and throughput between the different sector sizes. This sector size is different than stripe sizes
that you need to define when using a logical volume manager.

Recommendation: The recommended starting configurations with Azure Premium SSD v2 for production
scenarios look like:

Configuration for SAP /hana/data volume:

| VM SKU | RAM | Max. VM I/O throughput | Max. VM IOPS | /hana/data capacity | /hana/data throughput | /hana/data IOPS |
| --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
| E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
| E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 304 GB | 425 MBps | 3,000 |
| E32(d)s_v5 | 256 GiB | 865 MBps | 51,200 | 304 GB | 425 MBps | 3,000 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 GB | 425 MBps | 3,000 |
| E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 464 GB | 425 MBps | 3,000 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 608 GB | 425 MBps | 3,000 |
| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 608 GB | 425 MBps | 3,000 |
| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 800 GB | 425 MBps | 3,000 |
| M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000 |
| M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000 |
| M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000 |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 30,000 | 1,056 GB | 425 MBps | 3,000 |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 1,232 GB | 600 MBps | 5,000 |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1,232 GB | 600 MBps | 5,000 |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2,144 GB | 600 MBps | 5,000 |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 2,464 GB | 800 MBps | 12,000 |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000 |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000 |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 3,424 GB | 1,000 MBps | 15,000 |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 4,672 GB | 800 MBps | 12,000 |
| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4,672 GB | 800 MBps | 12,000 |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4,912 GB | 800 MBps | 12,000 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3,424 GB | 1,000 MBps | 15,000 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps | 17,000 |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 9,120 GB | 1,250 MBps | 20,000 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,300 MBps | 25,000 |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 80,000 | 19,200 GB | 2,000 MBps² | 40,000 |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 80,000 | 28,400 GB | 2,000 MBps² | 60,000 |

¹ VM type not available by default. Please contact your Microsoft account team.

² Maximum throughput provided by the VM and the throughput requirements of the SAP HANA workload, especially
savepoint activity, can force you to deploy significantly more throughput and IOPS.

For the /hana/log volume, the configuration would look like:

| VM SKU | RAM | Max. VM I/O throughput | Max. VM IOPS | /hana/log capacity | /hana/log throughput | /hana/log IOPS | /hana/shared² capacity (using default IOPS and throughput) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
| E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
| E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
| E32(d)s_v5 | 256 GiB | 865 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB |
| E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB |
| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 504 GB |
| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 512 GB |
| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 512 GB | 275 MBps | 3,000 | 672 GB |
| M32ts | 192 GiB | 500 MBps | 20,000 | 96 GB | 275 MBps | 3,000 | 192 GB |
| M32ls | 256 GiB | 500 MBps | 20,000 | 128 GB | 275 MBps | 3,000 | 256 GB |
| M64ls | 512 GiB | 1,000 MBps | 40,000 | 256 GB | 275 MBps | 3,000 | 512 GB |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 20,000 | 512 GB | 275 MBps | 3,000 | 875 GB |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 512 GB | 350 MBps | 4,500 | 1,024 GB |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |

¹ VM type not available by default. Please contact your Microsoft account team.

² Review carefully the considerations for sizing /hana/shared.

Check whether the storage throughput for the different suggested volumes meets the workload that you want
to run. If the workload requires higher volumes for /hana/data and /hana/log, you need to increase IOPS
and/or throughput on the individual disks you're using.

A few examples on how combining multiple Premium SSD v2 disks with a stripe set could impact the
requirement to provision more IOPS or throughput for /hana/data is displayed in this table:

| VM SKU | RAM | Number of disks | Individual disk capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS provisioned for volume | Proposed throughput | Default throughput provisioned | Extra throughput provisioned |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E32(d)s_v5 | 256 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps |
| E32(d)s_v5 | 256 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps |
| E32(d)s_v5 | 256 GiB | 4 | 76 GB | 3,000 | 12,000 | 0 | 425 MBps | 500 MBps | 0 MBps |
| E96(d)s_v5 | 672 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps |
| E96(d)s_v5 | 672 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps |
| E96(d)s_v5 | 672 GiB | 4 | 76 GB | 3,000 | 12,000 | 0 | 425 MBps | 500 MBps | 0 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 2,464 GB | 12,000 | 3,000 | 9,000 | 800 MBps | 125 MBps | 675 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 1,232 GB | 12,000 | 6,000 | 6,000 | 800 MBps | 250 MBps | 550 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 4 | 616 GB | 12,000 | 12,000 | 0 | 800 MBps | 500 MBps | 300 MBps |
| M416ms_v2 | 11,400 GiB | 1 | 13,680 GB | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps |
| M416ms_v2 | 11,400 GiB | 2 | 6,840 GB | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
| M416ms_v2 | 11,400 GiB | 4 | 3,420 GB | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
| M832ixs¹ | 14,902 GiB | 2 | 7,451 GB | 40,000 | 6,000 | 34,000 | 2,000 MBps | 250 MBps | 1,750 MBps |
| M832ixs¹ | 14,902 GiB | 4 | 3,726 GB | 40,000 | 12,000 | 28,000 | 2,000 MBps | 500 MBps | 1,500 MBps |
| M832ixs¹ | 14,902 GiB | 8 | 1,863 GB | 40,000 | 24,000 | 16,000 | 2,000 MBps | 1,000 MBps | 1,000 MBps |

¹ VM type not available by default. Please contact your Microsoft account team.

For /hana/log, a similar approach of using two disks could look like:

| VM SKU | RAM | Number of disks | Individual disk capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS provisioned for volume | Proposed throughput | Default throughput provisioned | Extra throughput provisioned |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E32(d)s_v5 | 256 GiB | 1 | 128 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps |
| E32(d)s_v5 | 256 GiB | 2 | 64 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
| E96(d)s_v5 | 672 GiB | 1 | 512 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps |
| E96(d)s_v5 | 672 GiB | 2 | 256 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 512 GB | 4,000 | 3,000 | 1,000 | 300 MBps | 125 MBps | 175 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 256 GB | 4,000 | 6,000 | 0 | 300 MBps | 250 MBps | 50 MBps |
| M416ms_v2 | 11,400 GiB | 1 | 512 GB | 5,000 | 3,000 | 2,000 | 400 MBps | 125 MBps | 275 MBps |
| M416ms_v2 | 11,400 GiB | 2 | 256 GB | 5,000 | 6,000 | 0 | 400 MBps | 250 MBps | 150 MBps |
| M832ixs¹ | 14,902 GiB | 1 | 512 GB | 9,000 | 3,000 | 6,000 | 600 MBps | 125 MBps | 475 MBps |
| M832ixs¹ | 14,902 GiB | 2 | 256 GB | 9,000 | 6,000 | 3,000 | 600 MBps | 250 MBps | 350 MBps |

¹ VM type not available by default. Please contact your Microsoft account team.

These tables, combined with the prices of IOPS and throughput, should give you an idea of how striping across
multiple Premium SSD v2 disks could reduce the costs for the particular storage configuration you're looking at.
Based on these calculations, you can decide whether to move ahead with a single disk approach for /hana/data
and/or /hana/log.

Next steps
For more information, see:

SAP HANA High Availability guide for Azure virtual machines.


SAP HANA Azure virtual machine Ultra Disk
storage configurations
Article • 11/21/2023

This document is about HANA storage configurations for Azure Ultra disk storage, which was introduced as
ultra low latency storage for DBMS and other applications that need it. For
general considerations around stripe sizes when using LVM, HANA data volume partitioning, or other
considerations that are independent of the particular storage type, check these two documents:

SAP HANA Azure virtual machine storage configurations


Azure Storage types for SAP workload

Azure Ultra disk storage configuration for SAP HANA


Another Azure storage type is called Azure Ultra disk. The significant difference between the Azure storage
offered so far and Ultra disk is that the disk capabilities aren't bound to the disk size anymore. As a
customer, you can define these capabilities for Ultra disk:

- Size of a disk ranging from 4 GiB to 65,536 GiB
- IOPS range from 100 IOPS to 160,000 IOPS (maximum depends on VM types as well)
- Storage throughput from 300 MB/sec to 2,000 MB/sec

Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput
requirements, instead of using logical volume managers like LVM or MDADM on top of Azure premium storage
to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration
mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the
performance critical /hana/data and /hana/log volumes and cover the other volumes with Azure
premium storage.

Other advantages of Ultra disk can be the better read latency in comparison to premium storage. The
faster read latency can have advantages when you want to reduce the HANA startup times and the
subsequent load of the data into memory. Advantages of Ultra disk storage also can be felt when HANA
is writing savepoints.

7 Note

Ultra disk might not be present in all the Azure regions. For detailed information where Ultra disk is
available and which VM families are supported, check the article What disk types are available in
Azure?.

) Important

You have the possibility to define the sector size of Ultra disk as 512 Bytes or 4096 Bytes. Default
sector size is 4096 Bytes. Tests conducted with HCMT did not reveal any significant differences in
performance and throughput between the different sector sizes. This sector size is different than
stripe sizes that you need to define when using a logical volume manager.
Production recommended storage solution with pure Ultra disk configuration

In this configuration, you keep the /hana/data and /hana/log volumes separate. The suggested values
are derived from the KPIs that SAP uses to certify VM types for SAP HANA and from storage configurations
recommended in the SAP TDI Storage Whitepaper .

The recommendations often exceed the SAP minimum requirements as stated earlier in this
article. The listed recommendations are a compromise between the size recommendations by SAP and
the maximum storage throughput the different VM types provide.

7 Note

Azure Ultra disk enforces a minimum of 2 IOPS per gigabyte of disk capacity

| VM SKU | RAM | Max. VM I/O throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MBps | 1,800 |
| E32ds_v4 | 256 GiB | 768 MBps | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
| E64ds_v4 | 504 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
| E64s_v3 | 432 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800 |
| M32ts | 192 GiB | 500 MBps | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 |
| M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
| M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
| M32(d)ms_v2 | 875 GiB | 500 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500 |
| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500 |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 9,500 GB | 1,250 MBps | 20,000 | 512 GB | 400 MBps | 4,000 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
| M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 19,200 GB | 2,000 MBps² | 40,000 | 512 GB | 600 MBps | 9,000 |
| M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 28,400 GB | 2,000 MBps² | 60,000 | 512 GB | 600 MBps | 9,000 |

¹ VM type not available by default. Please contact your Microsoft account team.

² Maximum throughput provided by the VM and the throughput requirements of the SAP HANA workload,
especially savepoint activity, can force you to deploy significantly more throughput and IOPS.

The values listed are intended to be a starting point and need to be evaluated against the real
demands. The advantage of Azure Ultra disk is that the values for IOPS and throughput can be
adapted without the need to shut down the VM or halt the workload applied to the system.
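
For example, an Ultra disk's IOPS and throughput could be created and later adjusted online with the Azure CLI as in the following sketch; the resource group, disk name, zone, and values are placeholders, and the VM needs Ultra disk compatibility enabled:

# create a 512 GiB Ultra disk for /hana/log with values from the table above
az disk create --resource-group my-rg --name hana-log-ultra \
    --sku UltraSSD_LRS --zone 1 --size-gb 512 \
    --disk-iops-read-write 2500 --disk-mbps-read-write 250

# adjust IOPS and throughput later without stopping the VM or the workload
az disk update --resource-group my-rg --name hana-log-ultra \
    --disk-iops-read-write 4000 --disk-mbps-read-write 400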

7 Note
So far, storage snapshots with Ultra disk storage aren't available. This blocks the usage of VM
snapshots with Azure Backup Services

Next steps
For more information, see:

SAP HANA High Availability guide for Azure virtual machines.


NFS v4.1 volumes on Azure NetApp Files
for SAP HANA
Article • 04/01/2024

Azure NetApp Files provides native NFS shares that can be used for /hana/shared,
/hana/data, and /hana/log volumes. Using ANF-based NFS shares for the /hana/data
and /hana/log volumes requires the usage of the v4.1 NFS protocol. The NFS protocol
v3 isn't supported for the usage of /hana/data and /hana/log volumes when basing the
shares on ANF.

) Important

The NFS v3 protocol implemented on Azure NetApp Files is not supported to be


used for /hana/data and /hana/log. The usage of the NFS 4.1 is mandatory for
/hana/data and /hana/log volumes from a functional point of view. Whereas for
the /hana/shared volume the NFS v3 or the NFS v4.1 protocol can be used from a
functional point of view.

Important considerations
When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware
of the following important considerations:

The minimum capacity pool is 4 TiB

The minimum volume size is 100 GiB

ANF-based NFS shares and the virtual machines that mount those shares must be
in the same Azure Virtual Network or in peered virtual networks in the same region

The selected virtual network must have a subnet, delegated to Azure NetApp Files.
For SAP workload, it is highly recommended to configure a /25 range for the
subnet delegated to ANF.

It's important to have the virtual machines deployed in sufficient proximity to the
Azure NetApp storage for lower latency as, for example, demanded by SAP HANA
for redo log writes.
Azure NetApp Files meanwhile has functionality to deploy NFS volumes into
specific Azure Availability Zones. Such zonal proximity is going to be sufficient
in the majority of cases to achieve a latency of less than 1 millisecond. The
functionality is in public preview and described in the article Manage availability
zone volume placement for Azure NetApp Files. This functionality doesn't require
any interactive process with Microsoft to achieve proximity between your VM
and the NFS volumes you allocate.
To achieve most optimal proximity, the functionality of application volume
groups is available. This functionality isn't only looking for most optimal
proximity, but for most optimal placement of the NFS volumes, so that HANA
data and redo log volumes are handled by different controllers. The
disadvantage is that this method needs some interactive process with Microsoft
to pin your VMs.

Make sure the latency from the database server to the ANF volume is measured
and below 1 millisecond

The throughput of an Azure NetApp volume is a function of the volume quota and
service level, as documented in Service levels for Azure NetApp Files. When sizing
the HANA Azure NetApp volumes, make sure the resulting throughput meets the
HANA system requirements. Alternatively, consider using a manual QoS capacity
pool, where volume capacity and throughput can be configured and scaled
independently (SAP HANA specific examples are in this document)

Try to "consolidate" volumes to achieve more performance in a larger volume; for
example, use one volume for /sapmnt, /usr/sap/trans, … if possible

Azure NetApp Files offers export policy: you can control the allowed clients, the
access type (Read&Write, Read Only, etc.).

The User ID for sidadm and the Group ID for sapsys on the virtual machines must
match the configuration in Azure NetApp Files.

Implement Linux OS parameters mentioned in SAP note 3024346

) Important

For SAP HANA workloads, low latency is critical. Work with your Microsoft
representative to ensure that the virtual machines and the Azure NetApp Files
volumes are deployed in close proximity.

) Important

If there's a mismatch between the User ID for sidadm and the Group ID for sapsys
between the virtual machine and the Azure NetApp configuration, the permissions
for files on Azure NetApp volumes mounted to the VM would be displayed as
nobody . Make sure to specify the correct User ID for sidadm and the Group ID for

sapsys , when onboarding a new system to Azure NetApp Files.

NCONNECT mount option


Nconnect is a mount option for NFS volumes hosted on ANF that allows the NFS client
to open multiple sessions against a single NFS volume. Using nconnect with a value
larger than 1 also triggers the NFS client to use more than one RPC session on the client
side (in the guest OS) to handle the traffic between the guest OS and the mounted NFS
volumes. The usage of multiple sessions handling traffic of one NFS volume, and also the
usage of multiple RPC sessions, can address performance and throughput scenarios like:

Mounting of multiple ANF hosted NFS volumes with different service levels in one
VM
The maximum write throughput for a volume and a single Linux session is between
1.2 and 1.4 GB/s. Having multiple sessions against one ANF hosted NFS volume
can increase the throughput

For Linux OS releases that support nconnect as a mount option and some important
configuration considerations of nconnect, especially with different NFS server endpoints,
read the document Linux NFS mount options best practices for Azure NetApp Files.
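
As an illustration, an /etc/fstab entry for an ANF-hosted NFS v4.1 volume using nconnect could look like the following sketch; the IP address, export path, and nconnect value are placeholders to be replaced with the values for your environment and the mount options documented in the article referenced above:

# /etc/fstab entry: ANF NFS v4.1 volume for /hana/data with 4 client sessions
10.23.1.5:/HN1-data-mnt00001 /hana/data nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys,nconnect=4 0 0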

Sizing for HANA database on Azure NetApp


Files
The throughput of an Azure NetApp volume is a function of the volume size and service
level, as documented in Service levels for Azure NetApp Files.

Important to understand is the performance relationship between size and service level, and
that there are physical limits for a storage endpoint of the service. Each storage endpoint is
dynamically injected into the Azure NetApp Files delegated subnet upon volume
creation and receives an IP address. Azure NetApp Files volumes can – depending on
available capacity and deployment logic – share a storage endpoint.

The table below demonstrates that it could make sense to create a large "Standard"
volume to store backups, and that it doesn't make sense to create an "Ultra" volume
larger than 12 TB, because the maximum physical bandwidth capacity of a single volume
would be exceeded.
If you require more than the maximum write throughput for your /hana/data volume
than a single Linux session can provide, you could also use SAP HANA data volume
partitioning as an alternative. SAP HANA data volume partitioning stripes the I/O activity
during data reload or HANA savepoints across multiple HANA data files that are located
on multiple NFS shares. For more details on HANA data volume striping read these
articles:

The HANA Administrator's Guide


Blog about SAP HANA – Partitioning Data Volumes
SAP Note #2400005
SAP Note #2700123

| Volume size | Throughput Standard | Throughput Premium | Throughput Ultra |
| --- | --- | --- | --- |
| 1 TB | 16 MB/sec | 64 MB/sec | 128 MB/sec |
| 2 TB | 32 MB/sec | 128 MB/sec | 256 MB/sec |
| 4 TB | 64 MB/sec | 256 MB/sec | 512 MB/sec |
| 10 TB | 160 MB/sec | 640 MB/sec | 1,280 MB/sec |
| 15 TB | 240 MB/sec | 960 MB/sec | 1,400 MB/sec¹ |
| 20 TB | 320 MB/sec | 1,280 MB/sec | 1,400 MB/sec¹ |
| 40 TB | 640 MB/sec | 1,400 MB/sec¹ | 1,400 MB/sec¹ |

¹: write or single session read throughput limits (in case the NFS mount option nconnect
isn't used)

It's important to understand that the data is written to the same SSDs in the storage
backend. The performance quota from the capacity pool was created to be able to
manage the environment. The storage KPIs are equal for all HANA database sizes. In
almost all cases, this assumption doesn't reflect the reality and the customer
expectation. The size of HANA systems doesn't necessarily mean that a small system
requires low storage throughput and a large system requires high storage throughput.
But generally we can expect higher throughput requirements for larger HANA database
instances. As a result of SAP's sizing rules for the underlying hardware, such larger HANA
instances also provide more CPU resources and higher parallelism in tasks like loading
data after an instance restart. As a result, the volume sizes should be adapted to the
customer expectations and requirements, and not only driven by pure capacity
requirements.

As you design the infrastructure for SAP in Azure, you should be aware of some
minimum storage throughput requirements (for production systems) by SAP. These
requirements translate into minimum throughput characteristics of:

| Volume type and I/O type | Minimum KPI demanded by SAP | Premium service level | Ultra service level |
| --- | --- | --- | --- |
| Log Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Read | 400 MB/sec | 6.3 TB | 3.2 TB |

Since all three KPIs are demanded, the /hana/data volume needs to be sized toward the
larger capacity to fulfill the minimum read requirements. If you're using manual QoS
capacity pools, the size and throughput of the volumes can be defined independently.
Since both capacity and throughput are taken from the same capacity pool, the pool's
service level and size must be large enough to deliver the total performance (see
example here)

For HANA systems that don't require high bandwidth, the ANF volume throughput
can be lowered by either using a smaller volume size or, using manual QoS, by adjusting the
throughput directly. And in case a HANA system requires more throughput, the volume
could be adapted by resizing the capacity online. No KPIs are defined for backup
volumes. However, the backup volume throughput is essential for a well performing
environment. Log and data volume performance must be designed to the customer
expectations.
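
As a sketch of how capacity and throughput can be decoupled with a manual QoS capacity pool, the Azure CLI commands could look like the following; the account, pool, volume, and network names, the region, and the chosen sizes are placeholder assumptions:

# create a manual QoS capacity pool (pool size expressed in TiB)
az netappfiles pool create --resource-group my-rg --location westeurope \
    --account-name my-anf-account --pool-name hana-pool \
    --service-level Premium --size 4 --qos-type Manual

# create a 1 TiB log volume and assign it 250 MiB/s of the pool's throughput budget
az netappfiles volume create --resource-group my-rg --location westeurope \
    --account-name my-anf-account --pool-name hana-pool --name HN1-log \
    --vnet my-vnet --subnet anf-subnet --file-path HN1-log \
    --protocol-types NFSv4.1 --usage-threshold 1024 --throughput-mibps 250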

) Important

Independent of the capacity you deploy on a single NFS volume, the throughput is
expected to plateau in the range of 1.2-1.4 GB/sec bandwidth utilized by a
consumer in a single session. This has to do with the underlying architecture of the
ANF offer and related Linux session limits around NFS. The performance and
throughput numbers as documented in the article Performance benchmark test
results for Azure NetApp Files were conducted against one shared NFS volume
with multiple client VMs, and as a result with multiple sessions. That scenario is
different from the scenario we measure in SAP, where we measure throughput from a
single VM against an NFS volume hosted on ANF.
To meet the SAP minimum throughput requirements for data and log, and according to
the guidelines for /hana/shared, the recommended sizes would look like:

| Volume | Size Premium Storage tier | Size Ultra Storage tier | Supported NFS protocol |
| --- | --- | --- | --- |
| /hana/log | 4 TiB | 2 TiB | v4.1 |
| /hana/data | 6.3 TiB | 3.2 TiB | v4.1 |
| /hana/shared scale-up | Min(1 TB, 1 x RAM) | Min(1 TB, 1 x RAM) | v3 or v4.1 |
| /hana/shared scale-out | 1 x RAM of worker node per four worker nodes | 1 x RAM of worker node per four worker nodes | v3 or v4.1 |
| /hana/logbackup | 3 x RAM | 3 x RAM | v3 or v4.1 |
| /hana/backup | 2 x RAM | 2 x RAM | v3 or v4.1 |

For all volumes, NFS v4.1 is highly recommended.


Review carefully the considerations for sizing /hana/shared, as appropriately sized
/hana/shared volume contributes to system's stability.

The sizes for the backup volumes are estimations. Exact requirements need to be
defined based on workload and operation processes. For backups, you could
consolidate many volumes for different SAP HANA instances to one (or two) larger
volumes, which could have a lower service level of ANF.

7 Note

The Azure NetApp Files sizing recommendations stated in this document target
the minimum requirements SAP expresses towards their infrastructure
providers. In real customer deployments and workload scenarios, that may not be
enough. Use these recommendations as a starting point and adapt, based on the
requirements of your specific workload.

Therefore, you could consider deploying similar throughput for the ANF volumes as
listed for Ultra disk storage already. Also consider the sizes listed for the volumes of the
different VM SKUs in the Ultra disk tables.
 Tip

You can resize Azure NetApp Files volumes dynamically, without the need to
unmount the volumes, stop the virtual machines, or stop SAP HANA. That allows
flexibility to meet your application's expected and unforeseen throughput
demands.

Documentation on how to deploy an SAP HANA scale-out configuration with standby
node using ANF-based NFS v4.1 volumes is published in SAP HANA scale-out with
standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server.

Linux Kernel Settings


To successfully deploy SAP HANA on ANF, Linux kernel settings need to be implemented
according to SAP note 3024346 .

For systems using High Availability (HA) with Pacemaker and Azure Load Balancer, the
following settings need to be implemented in the file /etc/sysctl.d/91-NetApp-HANA.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1

Systems running without Pacemaker and Azure Load Balancer should implement these
settings in /etc/sysctl.d/91-NetApp-HANA.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

Deployment with zonal proximity


To get zonal proximity of your NFS volumes and VMs, you can follow the instructions
as described in Manage availability zone volume placement for Azure NetApp Files. With
this method, the VMs and the NFS volumes are going to be in the same Azure
Availability Zone. In most of the Azure regions, this type of proximity should be
sufficient to achieve less than 1 millisecond latency for the smaller redo log writes of
SAP HANA. This method doesn't require any interactive work with Microsoft to place
and pin VMs to a specific datacenter. As a result, you're flexible to change VM sizes
and families within all the VM types and families offered in the Availability Zone you
deployed into. So you can react flexibly to changing conditions or move faster to more
cost-efficient VM sizes or families. We recommend this method for non-production
systems and for production systems that can work with redo log latencies that are closer to
1 millisecond. The functionality is currently in public preview.

Deployment through Azure NetApp Files


application volume group for SAP HANA (AVG)
To deploy ANF volumes with proximity to your VM, a new functionality called Azure
NetApp Files application volume group for SAP HANA (AVG) was developed. There's a
series of articles that document the functionality. The best way to start is with the article
Understand Azure NetApp Files application volume group for SAP HANA. As you read
the articles, it becomes clear that the usage of AVGs involves the usage of Azure
proximity placement groups as well. Proximity placement groups are used by the new
functionality to tie in with the volumes that are getting created. To ensure that over
the lifetime of the HANA system the VMs aren't going to be moved away from the ANF
volumes, we recommend using a combination of availability set and PPG for each of the zones you
deploy into. The order of deployment would look like:

- Using the form, you need to request a pinning of the empty availability set to compute hardware to ensure that VMs aren't going to move
- Assign a PPG to the availability set and start a VM assigned to this availability set
- Use the Azure NetApp Files application volume group for SAP HANA functionality to deploy your HANA volumes

The proximity placement group configuration to use AVGs in an optimal way would look
like:

The diagram shows that you use an Azure proximity placement group for
the DBMS layer so that it can be used together with AVGs. It's best to include only
the VMs that run the HANA instances in the proximity placement group. The proximity
placement group is necessary, even if only one VM with a single HANA instance is used,
for the AVG to identify the closest proximity of the ANF hardware and to allocate the
NFS volumes on ANF as close as possible to the VM(s) that are using them.

This method generates the most optimal results as it relates to low latency, not only by
getting the NFS volumes and VMs as close together as possible, but also by taking into
account the placement of the data and redo log volumes across different controllers on
the NetApp backend. The disadvantage, though, is that your VM deployment is pinned
down to one datacenter, and with that you lose flexibility in changing VM types and
families. As a result, you should limit this method to the systems that absolutely require
such low storage latency. For all other systems, you should attempt the deployment with
a traditional zonal deployment of the VM and ANF. In most cases, this is sufficient in
terms of low latency. This also ensures easy maintenance and administration of the VM
and ANF.

Availability
ANF system updates and upgrades are applied without impacting the customer
environment. The defined SLA is 99.99% .

Volumes and IP addresses and capacity pools


With ANF, it's important to understand how the underlying infrastructure is built. A
capacity pool is only a construct that provides a capacity and performance budget
and a unit of billing, based on the capacity pool service level. A capacity pool has no physical
relationship to the underlying infrastructure. When you create a volume on the service, a
storage endpoint is created. A single IP address is assigned to this storage endpoint to
provide data access to the volume. If you create several volumes, all the volumes are
distributed across the underlying bare metal fleet, tied to this storage endpoint. ANF has
a logic that automatically distributes customer workloads once the volumes and/or
capacity of the configured storage reaches an internal predefined level. You might
notice such cases because a new storage endpoint, with a new IP address, gets created
automatically to access the volumes. The ANF service doesn't provide customer control
over this distribution logic.

Log volume and log backup volume


The “log volume” (/hana/log) is used to write the online redo log. Thus, there are open
files located in this volume, and it makes no sense to snapshot this volume. Online redo
log files are archived or backed up to the log backup volume once the online redo log
file is full or a redo log backup is executed. To provide reasonable backup performance,
the log backup volume requires good throughput. To optimize storage costs, it can
make sense to consolidate the log backup volumes of multiple HANA instances, so that
multiple HANA instances use the same volume and write their backups into different
directories. Using such a consolidation, you can get more throughput, since you need to
make the volume a bit larger.

The same applies to the volume you use to write full HANA database backups to.

Backup
Besides streaming backups and the Azure Backup service backing up SAP HANA databases, as
described in the article Backup guide for SAP HANA on Azure Virtual Machines, Azure
NetApp Files opens the possibility to perform storage-based snapshot backups.

SAP HANA supports:


Storage-based snapshot backup support for single container system with SAP
HANA 1.0 SPS7 and higher
Storage-based snapshot backup support for Multi Database Container (MDC)
HANA environments with a single tenant with SAP HANA 2.0 SPS1 and higher
Storage-based snapshot backup support for Multi Database Container (MDC)
HANA environments with multiple tenants with SAP HANA 2.0 SPS4 and higher

Creating storage-based snapshot backups is a simple four-step procedure:

1. Create a HANA (internal) database snapshot - an activity you or tools need to perform
2. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a result of creating a HANA snapshot
3. Create a snapshot on the /hana/data volume on the storage - a step you or tools need to perform. There's no need to perform a snapshot on the /hana/log volume
4. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to perform

2 Warning

Missing the last step or failing to perform the last step has severe impact on SAP
HANA's memory demand and can lead to a halt of SAP HANA

The HANA (internal) database snapshot of step 1 can be created with a SQL statement like:

BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'SNAPSHOT-2019-03-18:11:00';

The storage snapshot of step 3 can be created, for example, with the Azure CLI:

az netappfiles snapshot create -g mygroup --account-name myaccname --pool-name mypoolname --volume-name myvolname --name mysnapname

Closing the HANA snapshot in step 4 is done with a SQL statement like:

BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID 47110815 SUCCESSFUL 'SNAPSHOT-2020-08-18:11:00';

This snapshot backup procedure can be managed in various ways, using various tools.
One example is the Python script “ntaphana_azure.py” available on GitHub:
https://github.com/netapp/ntaphana . This is sample code, provided “as-is” without
any maintenance or support.

U Caution

A snapshot in itself isn't a protected backup since it's located on the same physical
storage as the volume you just took a snapshot of. It's mandatory to “protect” at
least one snapshot per day to a different location. This can be done in the same
environment, in a remote Azure region or on Azure Blob storage.

Available solutions for storage snapshot based application consistent backup:

Microsoft's Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line
tool that enables data protection for third-party databases. It handles all the
orchestration required to put the databases into an application consistent state
before taking a storage snapshot. After the storage snapshot has been taken, the
tool returns the databases to an operational state. AzAcSnap supports snapshot
based backups for HANA Large Instance and Azure NetApp Files. For more details,
read the article What is Azure Application Consistent Snapshot tool.
For users of Commvault backup products, another option is Commvault IntelliSnap
V.11.21 and later. These versions of Commvault offer Azure NetApp Files
snapshot support. The article Commvault IntelliSnap 11.21 provides more
information.

Back up the snapshot using Azure blob storage


Backing up to Azure Blob storage is a cost effective and fast method to save ANF-based
HANA database storage snapshot backups. To save the snapshots to Azure Blob storage,
the AzCopy tool is preferred. Download the latest version of this tool and install it, for
example, in the bin directory where the Python script from GitHub is installed:

root # wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux &&
tar -xf azcopy_v10.tar.gz --strip-components=1

The most advanced feature is the SYNC option. If you use the SYNC option, azcopy
keeps the source and the destination directory synchronized. The usage of the
parameter --delete-destination is important. Without this parameter, azcopy doesn't
delete files at the destination site and the space utilization on the destination side
would grow. Create a Block Blob container in your Azure storage account. Then create
the SAS key for the blob container and synchronize the snapshot folder to the Azure
Blob container.

For example, if a daily snapshot should be synchronized to the Azure blob container to
protect the data, and only that one snapshot should be kept, the command below can
be used.

root # > azcopy sync '/hana/data/SID/mnt00001/.snapshot'


'https://azacsnaptmytestblob01.blob.core.windows.net/abc?sv=2021-02-
02&ss=bfqt&srt=sco&sp=rwdlacup&se=2021-02-04T08:25:26Z&st=2021-02-
04T00:25:26Z&spr=https&sig=abcdefghijklmnopqrstuvwxyz' --recursive=true --
delete-destination=true

Next steps
Read the article:

SAP HANA high availability for Azure virtual machines


Using Azure Premium Files NFS and
SMB for SAP workload
Article • 04/01/2024

This document is about Azure Premium Files file shares used for SAP workload. Both
NFS volumes and SMB file shares are covered. For considerations around Azure NetApp
Files for SMB or NFS volumes, see the following two documents:

Azure Storage types for SAP workload


NFS v4.1 volumes on Azure NetApp Files for SAP HANA

) Important

The suggestions for the storage configurations in this document are meant as
directions to start with. Running workload and analyzing storage utilization
patterns, you might realize that you are not utilizing all the storage bandwidth or
IOPS provided. You might consider downsizing on storage then. Or in contrary,
your workload might need more storage throughput than suggested with these
configurations. As a result, you might need to deploy more capacity to increase
IOPS or throughput. In the field of tension between storage capacity required,
storage latency needed, storage throughput and IOPS required and least expensive
configuration, Azure offers enough different storage types with different
capabilities and different price points to find and adjust to the right compromise
for you and your SAP workload.

For SAP workloads, the supported uses of Azure Files shares are:

sapmnt volume for a distributed SAP system


transport directory for SAP landscape
/hana/shared for HANA scale-out. Review carefully the considerations for sizing
/hana/shared, as appropriately sized /hana/shared volume contributes to system's
stability
file interface between your SAP landscape and other applications

7 Note

No SAP DBMS workloads are supported on Azure Premium Files volumes, NFS or
SMB. For support restrictions on Azure storage types for SAP
NetWeaver/application layer of S/4HANA, read the SAP support note 2015553
Important considerations for Azure Premium
Files shares with SAP
When you plan your deployment with Azure Files, consider the following important
points. The term share in this section applies to both SMB share and NFS volume.

The minimum share size is 100 gibibytes (GiB). With Azure Premium Files, you pay
for the capacity of the provisioned shares.
Size your file shares not only based on capacity requirements, but also on IOPS
and throughput requirements. For details, see Azure files share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues with NFS
on Azure Files, consult Troubleshoot Azure file share performance.
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces.
Don't use the saptrans share for any other activity, such as interfaces.
If your SAP system has a heavy load of batch jobs, you might have millions of job
logs. If the SAP batch job logs are stored in the file system, pay special attention to
the sizing of the sapmnt share. Reorganize the job log files regularly as per SAP
note 16083 . As of SAP_BASIS 7.52, the default behavior for the batch job logs is
to be stored in the database. For details, see SAP note 2360818 | Job log in the
database .
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also scalability and performance targets for storage accounts.
Be careful to not exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than five SAP systems in a single
storage account. This guideline helps you avoid exceeding the storage account
limits and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for non-production and production
SAP systems in the same storage account.
Use a private endpoint with Azure Files. In the unlikely event of a zonal failure, your
NFS sessions automatically redirect to a healthy zone. You don't have to remount
the NFS shares on your VMs. Use of private link can result in extra charges for the
data processed, see details about private link pricing .
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Premium Files doesn't currently support automatic cross-region replication
for disaster recovery scenarios. See guidelines on DR for SAP applications for
available options.
Carefully consider when consolidating multiple activities into one file share or multiple
file shares in one storage account. Distributing these shares onto separate storage
accounts improves throughput and resiliency and simplifies the performance analysis. If
many SAP SIDs and shares are consolidated onto a single Azure Files storage account
and the storage account performance is poor due to hitting the throughput limits, it can
become difficult to identify which SID or volume is causing the problem.

NFS additional considerations


We recommend that you deploy on SLES 15 SP2 or higher, or RHEL 8.4 or higher, to
benefit from NFS client improvements.
Mount the NFS shares with the documented mount options; troubleshooting
information is available for mount or connection problems.
For SAP J2EE systems, placing /usr/sap/<SID>/J<nr> on NFS on Azure Files isn't
supported.
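
To illustrate, an /etc/fstab entry for an NFS share on Azure Premium Files could look like the following sketch; the storage account name, share name, and SID are placeholders, and the mount options should be checked against the documented recommendations mentioned above:

# /etc/fstab entry: sapmnt on an Azure Premium Files NFS share
sapafsnfs.file.core.windows.net:/sapafsnfs/sapmnt-sid /sapmnt/SID nfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0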

SMB additional considerations


SAP Software Provisioning Manager (SWPM) version 1.0 SP32, SWPM 2.0 SP09 or
higher is required to use Azure Files SMB. The SAPInst patch must be 749.0.91 or
higher. If SWPM/SAPInst doesn't accept more than 13 characters for the file share
server, then the SWPM version is too old.
During the installation of the SAP PAS instance, SWPM/SAPInst asks you to input a
transport hostname. The FQDN of the storage account,
<storage_account>.file.core.windows.net, should be entered, or the IP address/hostname of the
private endpoint, if used.
When you integrate the Active Directory domain with Azure Files SMB for an SAP high
availability deployment, the SAP users and groups must be added to the ‘sapmnt’
share. The SAP users should have the permission Storage File Data SMB Share
Elevated Contributor set in the Azure portal.

Next steps
For more information, see:

Azure Storage types for SAP workload


SAP HANA High Availability guide for Azure virtual machines
Managing SAP HANA data footprint for
balancing cost and performance
Article • 10/04/2023

Data archiving has always been a critical decision-making item and is heavily used by
many companies to organize their legacy data to achieve cost benefits, balancing the
need to comply with regulations and retain data for a certain period with the cost of
storing the data. Customers planning to migrate to S/4HANA or a HANA-based solution,
or to reduce their existing data storage footprint, can leverage the various data tiering
options supported on Azure.

This article describes options on Azure with emphasis on classifying the data usage
pattern.

Overview
SAP HANA is an in-memory database and is supported on SAP certified servers. Azure
provides more than 100 solutions certified to run SAP HANA. The in-memory capabilities
of SAP HANA allow customers to execute business transactions at an incredible speed.
But do you need fast access to all data, at any given point in time? Food for thought.

Most organizations choose to offload less accessed SAP data to a HANA storage tier or
archive legacy data to an extended solution to attain maximum performance out of their
investment. This tiering of data helps balance the SAP HANA footprint and reduces cost and
complexity effectively.

Customers can refer to the table below for data tier characteristics and choose to move
data to the temperature tier as per desired usage.

| Classification | Hot data | Warm data | Cold data |
| --- | --- | --- | --- |
| Frequently accessed | High | Medium | Low |
| Expected performance | High | Medium | Low |
| Business critical | High | Medium | Low |

Frequently accessed, high-value data is classified as "hot" and is stored in-memory on
the SAP HANA database. Less frequently accessed "warm" data is offloaded from in-
memory and stored on the HANA storage tier, making it a unified part of the SAP HANA
system. Finally, legacy or rarely accessed data is stored on low-cost storage tiers like disk
or Hadoop, which remains accessible at any time.

"One size fits all" approach does not work here. Post data characterization is done, the
next step is to map SAP solution to the data tiering solution that is supported by SAP on
Azure.

| SAP Solution | Hot | Warm | Cold |
| --- | --- | --- | --- |
| Native SAP HANA | SAP HANA certified VMs | SAP HANA Dynamic Tiering, HANA extension Node, NSE | DLM with Data Intelligence, DLM with Hadoop |
| SAP S/4HANA | SAP certified VMs | Data aging via NSE | SAP IQ |
| SAP Business Suite on HANA | SAP certified VMs | Data aging via NSE | SAP IQ |
| SAP BW/4HANA | SAP certified VMs | NSE, HANA extension Node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |
| SAP BW on HANA | SAP certified VMs | NSE, HANA extension Node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |

2462641 - Is HANA Dynamic Tiering supported for Business Suite on HANA, or other
SAP applications ( S/4, BW ) ? - SAP ONE Support Launchpad

2140959 - SAP HANA Dynamic Tiering - Additional Information - SAP ONE Support
Launchpad

2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support
Launchpad

2816823 - Use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP
Business Suite powered by SAP HANA - SAP ONE Support Launchpad

Configuration

Warm Data Tiering


SAP HANA Dynamic Tiering for Azure Virtual Machines
SAP HANA infrastructure configurations and operations on Azure - Azure Virtual
Machines | Microsoft Learn

SAP HANA Native Storage Extension


SAP HANA Native Storage Extension (NSE) is a native technology available starting with
SAP HANA 2.0 SPS 04. NSE is a built-in, disk-based extension to the in-memory column
store data of SAP HANA. Customers don't need special hardware or certification for NSE.
Any HANA certified Azure virtual machine is valid for implementing NSE.

Overview

The capacity of an SAP HANA database with NSE is the amount of hot data in memory
plus the amount of warm data stored on disk. NSE allocates a buffer cache in HANA main
memory that is sized separately from SAP HANA hot and working memory. As per SAP
documentation, the buffer cache is enabled by default and is sized by default at 10% of
HANA memory. Note that NSE isn't a replacement for data archiving, because it doesn't
reduce the HANA disk size. Unlike data archiving, activation of NSE can be reversed.
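
To make this concrete, the following hdbsql sketch shows the kind of SQL used to move a table to the NSE disk tier and to resize the buffer cache; the host, credentials, schema, table, and size are hypothetical examples, so check the SAP references below for the authoritative syntax.

```bash
# Convert a (hypothetical) historical table from fully in-memory to page loadable (NSE).
hdbsql -n hanahost:30015 -u SYSTEM -p "<password>" \
  "ALTER TABLE \"SAPSR3\".\"ZSALES_HISTORY\" PAGE LOADABLE CASCADE"

# Example: cap the NSE buffer cache at 100 GB (max_size is specified in MB)
# instead of relying on the 10% default.
hdbsql -n hanahost:30015 -u SYSTEM -p "<password>" \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('buffer_cache_cs','max_size') = '102400' WITH RECONFIGURE"
```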

SAP HANA Native Storage Extension | SAP Help Portal

2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support
Launchpad

2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA
and SAP Business Suite powered by SAP HANA - SAP ONE Support Launchpad

NSE is supported for scale-up and scale-out systems. Availability for scale-out systems
starts with SAP HANA 2.0 SPS 04. Refer to SAP Note 2927591 to understand the
functional restrictions.

2927591 - SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions - SAP
ONE Support Launchpad

SAP HANA NSE disaster recovery on Azure can be achieved using a variety of methods,
including:

HANA System Replication: HANA System Replication allows you to create a copy of
your SAP HANA NSE system in another Azure zone or region of choice. This copy is
kept in sync with your production SAP HANA NSE system. In the event of a disaster,
failover can be triggered to the disaster recovery SAP HANA NSE
system.

Backup and restore: You can also use backup and restore to protect your SAP
HANA NSE system from disaster. You can back up your SAP HANA NSE system to
Azure Backup, and then restore it to a new SAP HANA NSE system in the event of a
disaster. Native Azure backup capabilities can be leveraged here.

Azure Site Recovery: Azure Site Recovery is a disaster recovery service that can be
used to replicate and recover your SAP HANA NSE system to another Azure region.
Azure Site Recovery provides several features that make it a good choice for SAP
HANA NSE disaster recovery, such as:

Asynchronous replication, which can reduce the impact of replication on your
production SAP HANA NSE system.

Point-in-time restore, which allows you to restore your SAP HANA NSE system
to a specific point in time.

Automated failover and failback, which can help you to quickly recover your SAP
HANA NSE system in the event of a disaster.

The best method for SAP HANA NSE disaster recovery on Azure will depend on your
specific needs and requirements.

Restore SAP HANA database instances on Azure VMs - Azure Backup | Microsoft Learn

SAP HANA Extension Node

HANA extension nodes are supported for BW on HANA, BW/4HANA, and SAP HANA
native applications. For SAP BW on HANA, you need SAP HANA 1.0 SPS 12 as the
minimum HANA release and BW 7.4 SP12 as the minimum BW release. For SAP HANA
native applications, you need HANA 2.0 SPS 03 as the minimum HANA release.

The extension node setup is based on the HANA scale-out offering. Customers with a
scale-up architecture need to extend to a scale-out deployment. Apart from the HANA
standard license, no additional license is required. An extension node can't share the
same OS, network, and disks with a HANA standard node.

Networking Configuration

Configure the networking settings for the Azure VMs to ensure proper communication
between the SAP HANA primary node and the extension nodes. This includes
configuring Azure virtual network (VNet) settings, subnets, and network security groups
(NSGs) to allow the necessary network traffic.
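
A minimal Azure CLI sketch of such a network layout; the names, address ranges, and the HANA port range are illustrative assumptions:

```bash
# Virtual network and subnet for the HANA nodes (placeholder address space).
az network vnet create --resource-group rg-sap-demo --name vnet-sap \
  --address-prefix 10.10.0.0/16 \
  --subnet-name snet-hana --subnet-prefix 10.10.1.0/24

# NSG with an inbound rule for HANA inter-node traffic (illustrative port range).
az network nsg create --resource-group rg-sap-demo --name nsg-hana
az network nsg rule create --resource-group rg-sap-demo --nsg-name nsg-hana \
  --name allow-hana-internode --priority 100 --direction Inbound --protocol Tcp \
  --source-address-prefixes 10.10.1.0/24 --destination-port-ranges 30000-30999 \
  --access Allow

# Attach the NSG to the HANA subnet.
az network vnet subnet update --resource-group rg-sap-demo --vnet-name vnet-sap \
  --name snet-hana --network-security-group nsg-hana
```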

High Availability and Monitoring

Implement high availability mechanisms, such as clustering or replication, to ensure that
the SAP HANA system remains resilient in case of node failures. Additionally, set up
monitoring and alerting mechanisms to keep track of the health and performance of the
SAP HANA system on Azure.

Data Backup and Recovery

Implement a robust backup and recovery strategy to protect your SAP HANA data.
Azure offers various backup options, including Azure Backup or SAP HANA-specific
backup tools. Configure regular backups of both the primary and extension nodes to
ensure data integrity and availability.

Advantages of SAP HANA Extension Node

Data tiering and extension nodes for SAP HANA on Azure (Large Instances) - Azure
Virtual Machines | Microsoft Learn

Cold Data Tiering


SAP Data Lifecycle Management (DLM) comprises tools and methodologies provided by
SAP to manage the lifecycle of SAP HANA data and move it to low-cost storage.

Let's explore three common scenarios for SAP HANA data tiering using Azure services.

Data Tiering with SAP Data Intelligence

SAP Data Intelligence enables organizations to discover, integrate, orchestrate, and
govern data from various sources, both within and outside the enterprise.

SAP Data Intelligence enables the integration of SAP HANA with Azure Data Lake
Storage. Cold data can be seamlessly moved from the in-memory tier to ADLS,
leveraging its cost-effective storage capabilities. SAP Data Intelligence facilitates the
orchestration of data pipelines, allowing for transparent access and query execution on
data residing in ADLS.

You can leverage the capabilities and services offered by Azure in conjunction with SAP
Data Intelligence. Here are a few integration options:
Azure Data Lake Storage integration

SAP Data Intelligence supports integration with Azure Data Lake Storage, which is a
scalable and secure data storage solution in Azure. You can configure connections in
SAP Data Intelligence to access and process data stored in Azure Data Lake Storage. This
allows you to leverage the power of SAP Data Intelligence for data ingestion, data
transformation, and advanced analytics on data residing in Azure.
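
As an illustrative sketch, an ADLS Gen2 target for such cold data is simply a storage account with the hierarchical namespace enabled; names, region, and SKU below are placeholders:

```bash
# Storage account with hierarchical namespace (ADLS Gen2) as the cold-data target.
az storage account create \
  --name sapcoldtierdemo \
  --resource-group rg-sap-demo \
  --location westeurope \
  --kind StorageV2 \
  --sku Standard_LRS \
  --enable-hierarchical-namespace true

# Filesystem (container) that the data pipelines write the tiered data into.
az storage fs create --account-name sapcoldtierdemo --name hana-cold-data --auth-mode login
```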

SAP Data Intelligence provides a wide range of connectors and transformations that
facilitate data movement and transformation tasks. You can configure SAP Data
Intelligence pipelines to extract cold data from SAP HANA, transform it if necessary, and
load it into Azure Blob Storage. This ensures seamless data transfer and enables further
processing or analysis on the tiered data.

SAP HANA provides query federation capabilities that seamlessly combine data from
different storage tiers. With SAP HANA Smart Data Access (SDA) and SAP Data
Intelligence, you can federate queries to access data stored in SAP HANA and Azure
Blob Storage as if it were in a single location. This transparent data access allows users
and applications to retrieve and analyze data from both tiers without the need for
manual data movement or complex integration.

Azure Synapse Analytics integration

Azure Synapse Analytics is a cloud-based analytics service that combines big data and
data warehousing capabilities. You can integrate SAP Data Intelligence with Azure
Synapse Analytics to perform advanced analytics and data processing on large volumes
of data. SAP Data Intelligence can connect to Azure Synapse Analytics to execute data
pipelines, transformations, and machine learning tasks leveraging the power of Azure
Synapse Analytics.

Azure services integration

SAP Data Intelligence can also integrate with other Azure services like Azure Blob
Storage, Azure SQL Database, Azure Event Hubs, and more. This allows you to leverage
the capabilities of these Azure services within your data workflows and processing tasks
in SAP Data Intelligence.

Data Tiering with SAP IQ


SAP IQ (formerly Sybase IQ), a highly scalable columnar database, can be utilized as a
storage option for cold data in the SAP HANA Data Tiering landscape. With SAP Data
Intelligence, organizations can set up data pipelines to move cold data from SAP HANA
to SAP IQ. This approach provides efficient compression and query performance for
historical or less frequently accessed data.

You can provision virtual machines (VMs) in Azure and install SAP IQ on those VMs.
Azure Blob Storage is a scalable and cost-effective cloud storage service provided by
Microsoft Azure. With SAP HANA Data Tiering, organizations can integrate SAP IQ with
Azure Blob Storage to store the data that has been tiered off from SAP HANA.

SAP HANA Data Tiering enables organizations to define policies and rules to
automatically move cold data from SAP HANA to SAP IQ in Azure Blob Storage. This
data movement can be performed based on data aging criteria or business rules. Once
the data is in SAP IQ, it can be efficiently compressed and stored, optimizing storage
utilization.

SAP HANA provides query federation capabilities, allowing queries to seamlessly access
and combine data from SAP HANA and SAP IQ as if it were in a single location. This
transparent data access ensures that users and applications can retrieve and analyze
data from both tiers without the need for manual data movement or complex
integration.

It's important to note that the specific steps and configurations may vary based on your
requirements, SAP IQ version, and Azure deployment options. Therefore, referring to the
official documentation and consulting with SAP and Azure experts is highly
recommended for a successful deployment of SAP IQ on Azure with data tiering.

Data Tiering with NLS on Hadoop


Near-Line Storage (NLS) on Hadoop offers a cost-effective solution for managing cold
data with SAP HANA. SAP Data Intelligence enables seamless integration between SAP
HANA and Hadoop-based storage systems, such as Hadoop Distributed File System
(HDFS). Data pipelines can be established to move cold data from SAP HANA to NLS on
Hadoop, allowing for efficient data archiving and retrieval.

Implement SAP BW NLS with SAP IQ on Azure | Microsoft Learn


Azure Virtual Machines high availability
for SAP NetWeaver
Article • 02/10/2023

Azure Virtual Machines is the solution for organizations that need compute, storage,
and network resources, in minimal time, and without lengthy procurement cycles. You
can use Azure Virtual Machines to deploy classic applications such as SAP NetWeaver-
based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability without
additional on-premises resources. Azure Virtual Machines supports cross-premises
connectivity, so you can integrate Azure Virtual Machines into your organization's on-
premises domains, private clouds, and SAP system landscape.

This series of articles covers:

Architecture and scenarios.

Infrastructure preparation.

SAP installation steps for deploying high-availability SAP systems in Azure by using
the Azure Resource Manager deployment model.

Important

We strongly recommend that you use the Azure Resource Manager
deployment model for your SAP installations. It offers many benefits that are
not available in the classic deployment model. Learn more about Azure
deployment models.

SAP high availability on:


Windows, using Windows Server Failover Cluster (WSFC)
Linux, using Linux Cluster Framework

In these articles, you learn how to help protect single point of failure (SPOF)
components, such as SAP Central Services (ASCS/SCS) and database management
systems (DBMS). You also learn about redundant components in Azure, such as SAP
application server.

High-availability architecture and scenarios for SAP NetWeaver
Summary: In this article, we discuss high availability architecture of an SAP system in
Azure. We discuss how to solve high availability of SAP single point of failure (SPOF) and
redundant components and the specifics of Azure infrastructure high availability. We
also cover how these parts relate to SAP system components. Additionally, the
discussion is broken out for Windows and Linux specifics. Various SAP high-availability
scenarios are covered as well.

Updated: October 2017

Azure Virtual Machines high availability architecture and scenarios for SAP
NetWeaver

The article covers both Windows and Linux.

Azure infrastructure preparation for SAP NetWeaver high-availability deployment
Summary: In the articles listed here, we cover the steps that you can take to deploy
Azure infrastructure in preparation for SAP installation. To simplify Azure infrastructure
deployment, SAP Azure Resource Manager templates are used to automate the whole
process.

Updated: March 2019

Prepare Azure infrastructure for SAP high availability by using a Windows
failover cluster and shared disk for SAP ASCS/SCS instances

Prepare Azure infrastructure for SAP high availability by using a Windows
failover cluster and file share for SAP ASCS/SCS instances

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux
Enterprise Server cluster framework for SAP ASCS/SCS instances

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux
Enterprise Server cluster framework for SAP ASCS/SCS instances with Azure
NetApp files

Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up
GlusterFS on RHEL

Prepare Azure infrastructure for SAP ASCS/SCS high availability - set up
Pacemaker on RHEL
Installation of an SAP NetWeaver high
availability system in Azure
Summary: The articles listed here present step-by-step examples of the installation and
configuration of a high-availability SAP system in a Windows Server Failover Clustering
cluster and Linux cluster framework in Azure.

Updated: March 2019

Install SAP NetWeaver high availability by using a Windows failover cluster and
shared disk for SAP ASCS/SCS instances

Install SAP NetWeaver high availability by using a Windows failover cluster and
file share for SAP ASCS/SCS instances

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server
cluster framework for SAP ASCS/SCS instances

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server
cluster framework for SAP ASCS/SCS instances with Azure NetApp Files

Install SAP NetWeaver ASCS/SCS in high availability configuration on RHEL

Install SAP NetWeaver ASCS/SCS in high availability configuration on RHEL with
Azure NetApp Files
High-availability architecture and
scenarios for SAP NetWeaver
Article • 06/02/2023

Terminology definitions
High availability: Refers to a set of technologies that minimize IT disruptions by
providing business continuity of IT services through redundant, fault-tolerant, or
failover-protected components inside the same data center. In our case, the data center
resides within one Azure region.

Disaster recovery: Also refers to the minimizing of IT services disruption and their
recovery, but across various data centers that might be hundreds of miles away from
one another. In our case, the data centers might reside in various Azure regions within
the same geopolitical region or in locations as established by you as a customer.

Overview of high availability


SAP high availability in Azure can be separated into three types:

Azure infrastructure high availability:

For example, high availability can include compute (VMs), network, or storage and
its benefits for increasing the availability of SAP applications.

Utilizing Azure infrastructure VM restart to protect SAP applications:

If you decide not to use functionalities such as Windows Server Failover Clustering
(WSFC) or Pacemaker on Linux, Azure VM restart is utilized. It restores functionality
in the SAP systems if there are any planned and unplanned downtime of the Azure
physical server infrastructure and overall underlying Azure platform.

SAP application high availability:

To achieve full SAP system high availability, you must protect all critical SAP system
components. For example:
Redundant SAP application servers.
Unique components. An example might be a single point of failure (SPOF)
component, such as an SAP ASCS/SCS instance or a database management
system (DBMS).
SAP high availability in Azure differs from SAP high availability in an on-premises
physical or virtual environment.

There's no sapinst-integrated SAP high-availability configuration for Linux as there is for
Windows. For information about SAP high availability on-premises for Linux, see High
availability partner information .

Azure infrastructure high availability

SLA for single-instance virtual machines


There's currently a single-VM SLA of 99.9% with premium storage. To get an idea about
what the availability of a single VM might be, you can build the product of the various
available Azure Service Level Agreements .

The basis for the calculation is 30 days per month, or 43,200 minutes. For example, a
0.05% downtime corresponds to 21.6 minutes. As usual, the availability of the various
services is calculated in the following way:

(Availability Service #1/100) x (Availability Service #2/100) x (Availability Service #3/100) x …

For example:

(99.95/100) x (99.9/100) x (99.9/100) = 0.9975 or an overall availability of 99.75%.
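
The same multiplication can be scripted to evaluate other service combinations; the three SLA values below are the illustrative ones from the example:

```bash
# Composite availability of three serially dependent services (illustrative values).
echo "scale=4; (99.95/100) * (99.9/100) * (99.9/100) * 100" | bc
# Prints 99.7500 (percent), matching the calculation above.
```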

Multiple instances of virtual machines in the same availability set
For all virtual machines that have two or more instances deployed in the same
availability set, we guarantee that you have virtual machine connectivity to at least one
instance at least 99.95% of the time.

When two or more VMs are part of the same availability set, each virtual machine in the
availability set is assigned an update domain and a fault domain by the underlying Azure
platform.

Update domains guarantee that multiple VMs aren't rebooted at the same time
during the planned maintenance of an Azure infrastructure. Only one VM is
rebooted at a time.
Fault domains guarantee that VMs are deployed on hardware components that
don't share a common power source and network switch. When servers, a network
switch, or a power source undergo an unplanned downtime, only one VM is
affected.

For more information, see manage the availability of virtual machines in Azure using
availability set.
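
As a sketch, an availability set with explicit fault and update domain counts, and a VM placed into it, could be created like this; all names, the image, and the size are placeholders:

```bash
# Availability set with 3 fault domains and 5 update domains (placeholder names).
az vm availability-set create \
  --resource-group rg-sap-demo \
  --name avset-sap-app \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

# SAP application server VM placed into the set (image/size are placeholders).
az vm create \
  --resource-group rg-sap-demo \
  --name sapapp-vm1 \
  --availability-set avset-sap-app \
  --image "<os-image-urn>" \
  --size Standard_E16s_v5
```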

Azure Availability Zones


Azure is in the process of rolling out the concept of Azure Availability Zones throughout
different Azure regions . In Azure regions where Availability Zones are offered, the
Azure regions have multiple datacenters, which are independent in supply of power
source, cooling, and network. The reason for offering different zones within a single Azure
region is to enable you to deploy applications across the two or three Availability Zones
offered. Assuming that issues in power sources and/or network affect one Availability
Zone infrastructure only, your application deployment within an Azure region is still fully
functional, eventually with some reduced capacity because some VMs in one zone might
be lost. But VMs in the other two zones are still up and running. The Azure regions that
offer zones are listed in Azure Availability Zones.

When using Availability Zones, there are some things to consider. The considerations
include:

You can't deploy Azure availability sets within an Availability Zone. The only possibility
to combine availability sets and Availability Zones is with proximity placement
groups. For more information, see the article Combine availability sets and availability
zones with proximity placement groups.
You can't use the Basic Load Balancer to create failover cluster solutions based on
Windows Failover Cluster Services or Linux Pacemaker. Instead you need to use the
Azure Standard Load Balancer SKU.
Azure Availability Zones don't give any guarantees of a certain distance between
the different zones within one region.
The network latency between different Azure Availability Zones within the different
Azure regions might be different from Azure region to region. There would be
cases, where you as a customer can reasonably run the SAP application layer
deployed across different zones since the network latency from one zone to the
active DBMS VM is still acceptable from a business process impact. Whereas there
could be customer scenarios where the latency between the active DBMS VM in
one zone and an SAP application instance in a VM in another zone can be too
intrusive and not acceptable for the SAP business processes. As a result, the
deployment architectures need to be different with an active/active architecture for
the application or active/passive architecture if latency is too high.
Using Azure managed disks is mandatory for deploying into Azure Availability
Zones.

Virtual Machine Scale Set with Flexible Orchestration


In Azure, Virtual Machine Scale Sets with Flexible orchestration offers a means of
achieving high availability for SAP workloads, much like other deployment frameworks
such as availability sets and availability zones. With flexible scale set, VMs can be
distributed across various availability zones and fault domains, making it a suitable
option for deploying highly available SAP workloads.

Virtual machine scale set with flexible orchestration offers the flexibility to create the
scale set within a region or span it across availability zones. On creating the flexible
scale set within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed
in the scale set are distributed across the specified number of fault domains in the
same region. On the other hand, creating the flexible scale set across availability zones
with platformFaultDomainCount=1 (FD=1) distributes the VMs across different
zones, and the scale set also distributes VMs across different fault domains within
each zone on a best effort basis. For SAP workload, only flexible scale set with FD=1 is
supported.

The advantage of using flexible scale sets with FD=1 for cross zonal deployment, instead
of traditional availability zone deployment is that the VMs deployed with the scale set
would be distributed across different fault domains within the zone in a best-effort
manner. To avoid the limitations associated with utilizing proximity placement group for
ensuring VMs availability across all Azure datacenters or under each network spine, it's
advised to deploy SAP workload across availability zones using flexible scale set with
FD=1. This deployment strategy ensures that VMs deployed in each zone aren't
restricted to a single datacenter or network spine, and all SAP system components, such
as databases, ASCS/ERS, and application tier are scoped at zonal level.

So, for new SAP workload deployment across availability zones, we advise to use flexible
scale set with FD=1. For more information, see virtual machine scale set for SAP
workload document.
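
A minimal sketch of creating such a scale set and assigning a VM to it; names, image, and size are placeholders, and the key settings are the flexible orchestration mode, platformFaultDomainCount=1, and the zone list:

```bash
# Flexible scale set spanning three zones with FD=1 (placeholder names).
az vmss create \
  --resource-group rg-sap-demo \
  --name vmss-sap-flex \
  --orchestration-mode flexible \
  --platform-fault-domain-count 1 \
  --zones 1 2 3

# Create an SAP VM in zone 1 and attach it to the scale set.
az vm create \
  --resource-group rg-sap-demo \
  --name sapascs-vm1 \
  --vmss vmss-sap-flex \
  --zone 1 \
  --image "<os-image-urn>" \
  --size Standard_E16s_v5
```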

Planned and unplanned maintenance of virtual machines


Two types of Azure platform events can affect the availability of your virtual machines:

Planned maintenance events are periodic updates made by Microsoft to the
underlying Azure platform. The updates improve overall reliability, performance,
and security of the platform infrastructure that your virtual machines run on.
Unplanned maintenance events occur when the hardware or physical
infrastructure underlying your virtual machine has failed in some way. It might
include local network failures, local disk failures, or other rack level failures. When
such a failure is detected, the Azure platform automatically migrates your virtual
machine from the unhealthy physical server that hosts your virtual machine to a
healthy physical server. Such events are rare, but they might also cause your virtual
machine to reboot.

For more information, see maintenance of virtual machines in Azure.

Azure Storage redundancy


The data in your storage account is always replicated to ensure durability and high
availability, meeting the Azure Storage SLA even in the face of transient hardware
failures.

Because Azure Storage keeps three images of the data by default, the use of RAID 5 or
RAID 1 across multiple Azure disks is unnecessary.

For more information, see Azure Storage replication.

Azure Managed Disks


Managed Disks is a resource type in Azure Resource Manager and is the recommended
storage option, instead of virtual hard disks (VHDs) that are stored in Azure storage
accounts. Managed disks automatically align with the Azure availability set of the virtual
machine they're attached to. They increase the availability of your virtual machine and
the services that are running on it.

For more information, see Azure Managed Disks overview.

We recommend that you use managed disks because they simplify the deployment and
management of your virtual machines.

Comparison of different deployment types for SAP workload
Here's a quick summary of the various deployment types that are available for SAP
workloads.
| Features | Virtual Machine Scale Set with Flexible Orchestration (FD=1) | Availability Zone | Availability Set |
| --- | --- | --- | --- |
| Deployment behavior | Instances land across 1, 2 or 3 availability zones and are distributed across different racks within each zone on a best effort basis | Instances land across 1, 2 or 3 availability zones | Instances land within a region and are distributed across different fault/update domains |
| Assign VM and managed disks to specific Availability Zone | Yes | Yes | No |
| Fault domain - Max spreading (Azure will maximally spread instances) | Yes | No | Yes, based on the number of fault domains defined during creation |
| Compute to storage fault domain alignment | No | No | Yes |
| Capacity Reservation | Yes (assign capacity reservation at VM level) | Yes | No |

Note

Update domains have been deprecated in Flexible Orchestration mode. For
more information, see Migrate deployments and resources to Virtual
Machine Scale Sets in Flexible orchestration
For more information on compute to storage fault domain alignment, see
Choosing the right number of fault domains for Virtual Machine Scale Set
and How do availability sets work?.
To enable capacity reservation, it is important to check the capacity
reservation's limitations and restrictions.

High availability deployment options for SAP workload
When deploying a high availability SAP workload on Azure, it's important to take into
account the various deployment types available, and how they can be applied across
different Azure regions (such as across zones, in a single zone, or in a region with no
zones). Following table illustrates several high availability options for SAP systems in
Azure regions.

| System type | Across different zones in a region | In a single zone of a region | In a region with no zones |
| --- | --- | --- | --- |
| High Availability SAP system | Flexible scale set with FD=1 | Availability Sets with Proximity Placement Groups | Availability Sets |
| | Availability Sets and Availability Zones with Proximity Placement Groups | Flexible scale set with FD=1 (select only one zone) | Flexible scale set with FD=1 (no zones are defined) |
| | Availability Zones | Availability Sets | |

Deployment across different zones in a region: For the highest availability, SAP
systems should be deployed across different zones in a region. This ensures that if
one zone is unavailable, the SAP system continues to be available in another zone.
If you're deploying new SAP workload across availability zones, it's advised to use
flexible virtual machine scale set with FD=1 deployment option. It allows you to
deploy multiple VMs across different zones in a region without worrying about
capacity constraints or placement groups. The scale set framework makes sure that
the VMs deployed with the scale set would be distributed across different fault
domains within the zone in a best effort manner. All the high available SAP
components like SAP ASCS/ERS, SAP databases are distributed across different
zones, whereas multiple application servers in each zone are distributed across
different fault domain on best effort basis.
Deployment in a single zone of a region: To deploy your high-availability SAP
system regionally in a location with multiple availability zones, and if it's essential
for all components of the system to be in a single zone, then it's advised to use
Availability Sets with Proximity Placement Groups deployment option. This
approach allows you to group all SAP system components in a single availability
zone, ensuring that the virtual machines within the availability set are spread
across different fault and update domains. While this deployment aligns compute
to storage fault domains, proximity isn't guaranteed. However, as this deployment
option is regional, it doesn't support Azure Site Recovery for zone-to-zone disaster
recovery. Moreover, this option restricts the entire SAP deployment to one data
center, which may lead to capacity limitations if you need to change the SKU size
or scale-out application instances.
Deployment in a region with no zones: If you're deploying your SAP system in a
region that doesn't have any zones, it's advised to use Availability sets. This option
provides redundancy and fault tolerance by placing VMs in different fault domains
and update domains.

Important

It should be noted that the deployment options for Azure regions are only
suggestions. The most suitable deployment strategy for your SAP system will
depend on your particular requirements and environment.

Utilizing Azure infrastructure high availability to protect SAP applications
If you decide not to use functionalities such as WSFC or Pacemaker on Linux (supported
for SUSE Linux Enterprise Server 12 and later, and Red Hat Enterprise Linux 7 and later),
Azure VM restart is utilized. It restores functionality in the SAP systems if there are any
planned and unplanned downtime of the Azure physical server infrastructure and overall
underlying Azure platform.

For more information about the approach, see Utilize Azure infrastructure VM restart to
achieve higher availability of the SAP system.

High availability of SAP applications on Azure IaaS
To achieve full SAP system high availability, you must protect all critical SAP system
components. For example:

Redundant SAP application servers.


Unique components. An example might be a single point of failure (SPOF)
component, such as an SAP ASCS/SCS instance or a database management system
(DBMS).

The next sections discuss how to achieve high availability for all three critical SAP system
components.

High-availability architecture for SAP application servers

Windows and Linux


You usually don't need a specific high-availability solution for the SAP application server
and dialog instances. You achieve high availability by redundancy, and you configure
multiple dialog instances in various instances of Azure virtual machines. You should have
at least two SAP application instances installed in two instances of Azure virtual
machines.

Depending on the deployment type (flexible scale set with FD=1, availability zone or
availability set), you must distribute your SAP application server instances accordingly to
achieve redundancy.

Flexible scale set with platformFaultDomainCount=1 (FD=1): SAP application
servers deployed with a flexible scale set (FD=1) are distributed across different
availability zones, and the scale set also distributes VMs across different fault
domains within each zone on a best effort basis. This ensures that if one zone is
unavailable, the SAP application servers deployed in another zone continue to be
available.
Availability zone: SAP application servers deployed across availability zones ensure
that VMs span different zones to achieve redundancy. This ensures that if one zone
is unavailable, the SAP application servers deployed in another zone continue to be
available. For more information, see SAP workload configurations
with Azure Availability Zones
Availability set: SAP application servers deployed in an availability set ensure that
VMs are distributed across different fault domains and update domains. Placing
VMs across different update domains ensures that VMs aren't updated at the same
time during planned maintenance downtime, whereas placing VMs in different
fault domains ensures that a VM is protected from hardware failures or power
interruptions within a data center. But the number of fault and update domains
that you can use in an Azure availability set within an Azure scale unit is
finite. If you keep adding VMs to a single availability set, two or more VMs would
eventually end up in the same fault or update domain. For more information, see
the Azure availability sets section of the Azure virtual machines planning and
implementation for SAP NetWeaver document.

Unmanaged disks only: When using unmanaged disks with an availability set, it's
important to recognize that the Azure storage account becomes a single point of failure.
Therefore, it's imperative to have a minimum of two Azure storage accounts, across
which at least two virtual machines are distributed. In an ideal setup, the disks of each
virtual machine that is running an SAP dialog instance would be deployed in a different
storage account.

Important
We strongly recommend that you use Azure managed disks for your SAP high-
availability installations. Because managed disks automatically align with the
availability set of the virtual machine they are attached to, they increase the
availability of your virtual machine and the services that are running on it.

High-availability architecture for an SAP ASCS/SCS instance on Windows
Windows

You can use a WSFC solution to protect the SAP ASCS/SCS instance. Based on the type
of cluster share configuration (file share or shared disk), refer to the appropriate
solution for your storage type.

Cluster share - File share


High Availability of SAP ASCS/SCS instance using SMB on Azure Files.
High Availability of SAP ASCS/SCS instance using SMB on Azure NetApp Files.
High Availability of SAP ASCS/SCS instance using Scale Out File Server (SOFS).

Cluster share - Shared disk


High availability of SAP ASCS/SCS instance using Azure shared disk.
High availability of SAP ASCS/SCS instance using SIOS.

High-availability architecture for an SAP ASCS/SCS instance on Linux

Linux

On Linux, the configuration of SAP ASCS/SCS instance clustering depends on the
operating system distribution and the type of storage being used. It's recommended to
implement the suitable solution according to your specific OS cluster framework.

SUSE Linux Enterprise Server (SLES)


High Availability of SAP ASCS/SCS instance using NFS with simple mount.
High Availability of SAP ASCS/SCS instance using NFS on Azure Files.
High Availability of SAP ASCS/SCS instance using NFS on Azure NetApp Files.
High Availability of SAP ASCS/SCS instance using NFS Server.

Red Hat Enterprise Linux (RHEL)


High Availability of SAP ASCS/SCS instance using NFS on Azure Files.
High Availability of SAP ASCS/SCS instance using NFS on Azure NetApp Files.

SAP NetWeaver multi-SID configuration for a clustered SAP ASCS/SCS instance
Windows

Multi-SID is supported with WSFC, using file share and shared disk. For more
information about multi-SID high-availability architecture on Windows, see:

File share: SAP ASCS/SCS instance multi-SID high availability for Windows Server
Failover Clustering and file share.
Shared disk: SAP ASCS/SCS instance multi-SID high availability for Windows Server
Failover Clustering and shared disk.

Linux

Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited
to five SAP SIDs on the same cluster. For more information about multi-SID high-
availability architecture on Linux, see:

SUSE Linux Enterprise Server (SLES): HA for SAP NW on Azure VMs on SLES for SAP
applications multi-SID guide.
Red Hat Linux Enterprise (RHEL): HA for SAP NW on Azure VMs on RHEL for SAP
applications multi-SID guide.

High-availability of DBMS instance


In an SAP system, the DBMS serves as a single point of failure as well. So, it's
important to protect the database by implementing a high-availability solution. The high
availability solution for the DBMS varies based on the database used for the SAP system.
Based on your database, follow the guidelines to achieve high availability for your
database.

| Database | HA recommendation |
| --- | --- |
| SAP HANA | HANA System Replication (HSR) |
| Oracle | Oracle Data Guard |
| IBM DB2 | High availability disaster recovery (HADR) |
| Microsoft SQL | Microsoft SQL Always On |
| SAP ASE | ASE HADR Always On |


Utilize Azure infrastructure VM restart
to achieve “higher availability” of an
SAP system
Article • 04/25/2023

This section applies to:

Windows and Linux

If you decide not to use functionalities such as Windows Server Failover Clustering
(WSFC) or Pacemaker on Linux (supported for SUSE Linux Enterprise Server [SLES] 12
and later, and Red Hat Enterprise Linux 7 and later), Azure VM restart is utilized. It protects SAP systems against
planned and unplanned downtime of the Azure physical server infrastructure and overall
underlying Azure platform.

Note

Azure VM restart primarily protects VMs and not applications. Although VM restart
doesn't offer high availability for SAP applications, it does offer a certain level of
infrastructure availability. It also indirectly offers “higher availability” of SAP
systems. There is also no SLA for the time it takes to restart a VM after a planned or
unplanned host outage, which makes this method of high availability unsuitable for
the critical components of an SAP system. Examples of critical components might
be an ASCS/SCS instance or a database management system (DBMS).

Another important infrastructure element for high availability is storage. For example,
the Azure Storage SLA is 99.9% availability. If you deploy all VMs and their disks in a
single Azure storage account, potential Azure Storage unavailability will cause the
unavailability of all VMs that are placed in that storage account and all SAP components
that are running inside of the VMs.

Instead of putting all VMs into a single Azure storage account, you can use dedicated
storage accounts for each VM. By using multiple independent Azure storage accounts,
you increase overall VM and SAP application availability.

Azure managed disks are automatically placed in the fault domain of the virtual machine
they are attached to. If you place two virtual machines in an availability set and use
managed disks, the platform takes care of distributing the managed disks into different
fault domains as well. If you plan to use a premium storage account, we highly
recommend using managed disks.

A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high
availability and storage accounts might look like this:

A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high
availability and managed disks might look like this:

For critical SAP components, you have achieved the following so far:

High availability of SAP application servers


SAP application server instances are redundant components. Each SAP application
server instance is deployed on its own VM, which is running in a different Azure
fault and upgrade domain. For more information, see the Fault domains and
Update domains sections.

You can ensure this configuration by using Azure availability sets. For more
information, see the Azure availability sets section.

Potential planned or unplanned unavailability of an Azure fault or upgrade domain
will cause unavailability of a restricted number of VMs with their SAP application
server instances.

Each SAP application server instance is placed in its own Azure storage account.
The potential unavailability of one Azure storage account will cause the
unavailability of only one VM with its SAP application server instance. However, be
aware that there is a limit on the number of Azure storage accounts within one
Azure subscription. To ensure automatic start of an ASCS/SCS instance after the
VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile.

For more information, see High availability for SAP application servers.

Even if you use managed disks, the disks are stored in an Azure storage account
and might be unavailable in the event of a storage outage.

Higher availability of SAP ASCS/SCS instances

In this scenario, utilize Azure VM restart to protect the VM with the installed SAP
ASCS/SCS instance. In the case of planned or unplanned downtime of Azure
servers, VMs are restarted on another available server. As mentioned earlier, Azure
VM restart primarily protects VMs and not applications, in this case the ASCS/SCS
instance. Through the VM restart, you indirectly reach “higher availability” of the
SAP ASCS/SCS instance.

To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the
Autostart parameter in the ASCS/SCS instance start profile. This setting means that
the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM
will determine the availability of the whole SAP landscape.

Higher availability of the DBMS server

As in the preceding SAP ASCS/SCS instance use case, you utilize Azure VM restart
to protect the VM with installed DBMS software, and you achieve “higher
availability” of DBMS software through VM restart.
A DBMS that's running in a single VM is also a SPOF, and it is the determinative
factor for the availability of the whole SAP landscape.

Using Autostart for SAP instances


SAP offers a setting that lets you start SAP instances immediately after the start of the
OS within the VM. The instructions are documented in SAP Knowledge Base Article
1909114 . However, SAP no longer recommends the use of the setting, because it does
not allow control of the order of instance restarts if more than one VM is affected or if
multiple instances are running per VM.

Assuming a typical Azure scenario of one SAP application server instance in a VM and a
single VM eventually getting restarted, Autostart is not critical. But you can enable it by
adding the following parameter into the start profile of the SAP Advanced Business
Application Programming (ABAP) or Java instance:

Autostart = 1

Note

The Autostart parameter has certain shortcomings as well. Specifically, the
parameter triggers the start of an SAP ABAP or Java instance when the related
Windows or Linux service of the instance is started. That sequence occurs when the
operating system boots up. However, restarts of SAP services are also a common
occurrence for SAP Software Lifecycle Management functionality such as Software
Update Manger (SUM) or other updates or upgrades. These functionalities are not
expecting an instance to be restarted automatically. Therefore, the Autostart
parameter should be disabled before you run such tasks. The Autostart parameter
also should not be used for SAP instances that are clustered, such as ASCS/SCS/CI.

For more information about Autostart for SAP instances, see the following articles:

Start or stop SAP along with your Unix Server Start/Stop


Starting and stopping SAP NetWeaver management agents

Next steps
For information about full SAP NetWeaver application-aware high availability, see SAP
application high availability on Azure IaaS.
SAP workload configurations with Azure
Availability Zones
Article • 06/01/2023

In addition to deploying the different SAP architecture layers in Azure
availability sets, Azure Availability Zones can be used for SAP workload deployments as
well. An Azure Availability Zone is defined as: "Unique physical locations within a region.
Each zone is made up of one or more datacenters equipped with independent power,
cooling, and networking". Azure Availability Zones aren't available in all regions. For
Azure regions that provide Availability Zones, check the Azure region map . This map
shows you which regions provide, or are announced to provide, Availability Zones.

As of the typical SAP NetWeaver or S/4HANA architecture, you need to protect three
different layers:

SAP application layer, which can be one to a few dozen VMs. You want to minimize
the chance of VMs getting deployed on the same host server. You also want those
VMs in an acceptable proximity to the DBMS layer to keep network latency in an
acceptable window
SAP ASCS/SCS layer that is representing a single point of failure in the SAP
NetWeaver and S/4HANA architecture. You usually look at two VMs that you want
to cover with a failover framework. Therefore, these VMs should be allocated in
different infrastructure fault domains
SAP DBMS layer, which represents a single point of failure as well. In the usual
cases, it consists of two VMs that are covered by a failover framework.
Therefore, these VMs should be allocated in different infrastructure fault domains.
Exceptions are SAP HANA scale-out deployments, where more than two VMs
can be used

The major differences between deploying your critical VMs through availability sets or
Availability Zones are:

Deploying with an availability set lines up the VMs within the set in a single
zone or datacenter (whatever applies for the specific region). As a result, the
deployment through the availability set isn't protected from power, cooling, or
networking issues that affect the datacenter(s) of the zone as a whole. On the plus
side, the VMs are aligned with update and fault domains within that zone or
datacenter. Specifically for the SAP ASCS or DBMS layer, where we protect two VMs
per availability set, the alignment with fault domains prevents both VMs from
ending up on the same host hardware.
Deploying VMs through Azure Availability Zones and choosing different zones
(maximum of three possible) deploys the VMs across the different
physical locations, and thereby adds protection from power, cooling, or
networking issues that affect the datacenter(s) of the zone as a whole. However, as
you deploy more than one VM of the same VM family into the same Availability
Zone, there's no protection from those VMs ending up on the same host or same
fault domain. As a result, deploying through Availability Zones is ideal for the SAP
ASCS and DBMS layer, where we usually look at two VMs each. For the SAP
application layer, which can be drastically more than two VMs, you might need to
fall back to a different deployment model (see later)

Your motivation for a deployment across Azure Availability Zones should be that, on
top of covering the failure of a single critical VM or the ability to reduce downtime for
software patching within a critical VM, you want to protect from larger infrastructure
issues that might affect the availability of one or multiple Azure datacenters.

As another resiliency deployment functionality, Azure introduced Virtual machine scale
sets with flexible orchestration for SAP workload. A virtual machine scale set provides
logical grouping of platform managed virtual machines. The flexible orchestration of
virtual machine scale set provides the option to create the scale set within a region or
span it across availability zones. On creating the flexible scale set within a region with
platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale set would be
distributed across specified number of fault domains in the same region. On the other
hand, creating the flexible scale set across availability zones with
platformFaultDomainCount=1 (FD=1) would distribute the virtual machines across
different zones and the scale set would also distribute VMs across different fault
domains within each zone on a best effort basis. For SAP workload only flexible scale
set with FD=1 is supported. The advantage of using flexible scale sets with FD=1 for
cross zonal deployment, instead of traditional availability zone deployment is that the
VMs deployed with the scale set would be distributed across different fault domains
within the zone in a best-effort manner. For more information, see deployment guide of
flexible scale set for SAP workload.

Considerations for deploying across Availability Zones
Consider the following when you use Availability Zones:
The maximum network roundtrip latency between Azure Availability Zones is
stated in the document Regions and availability zones.
The experienced network roundtrip latency isn't necessarily indicative of the real
geographical distance of the datacenters that form the different zones. The
network roundtrip latency is also influenced by the cable connectivity and the
routing of the cables between these different datacenters.
Availability Zones aren't an ideal DR solution. Natural disasters can cause
widespread damage in world regions, including heavy damage to power
infrastructures. The distances between various zones might not be large enough to
constitute a proper DR solution.
The network latency across Availability Zones isn't the same in all Azure regions. In
some cases, you can deploy and run the SAP application layer across different
zones because the network latency from one zone to the active DBMS VM is
acceptable. But in some Azure regions, the latency between the active DBMS VM
and the SAP application instance, when deployed in different zones, might not be
acceptable for SAP business processes. In these cases, the deployment architecture
needs to be different, with an active/active architecture for the application, or an
active/passive architecture where cross-zone network latency is too high.
When deciding where to use Availability Zones, base your decision on the network
latency between the zones. Network latency plays an important role in two areas:
Latency between the two DBMS instances that need to have synchronous
replication. The higher the network latency, the more likely it affects the
scalability of your workload.
The difference in network latency between a VM running an SAP dialog instance
in-zone with the active DBMS instance and a similar VM in another zone. As this
difference increases, the influence on the running time of business processes
and batch jobs also increases, dependent on whether they run in-zone with the
DBMS or in a different zone.

When you deploy Azure VMs across Availability Zones and establish failover solutions
within the same Azure region, some restrictions apply:

You must use Azure Managed Disks when you deploy to Azure Availability
Zones.
The mapping of zone enumerations to the physical zones is fixed on an Azure
subscription basis. If you're using different subscriptions to deploy your SAP
systems, you need to define the ideal zones for each subscription. If you want to
compare the logical mapping of your different subscriptions, consider the Avzone-
Mapping script
You can't deploy Azure availability sets within an Azure Availability Zone unless you
use an Azure proximity placement group. How you can deploy the SAP DBMS
layer and the central services across zones, and at the same time deploy the SAP
application layer using availability sets while still achieving close proximity of the VMs,
is documented in the article Azure Proximity Placement Groups for optimal
network latency with SAP applications. If you aren't using Azure proximity
placement groups, you need to choose one or the other as a deployment
framework for virtual machines.
You can't use an Azure Basic Load Balancer to create failover cluster solutions
based on Windows Server Failover Clustering or Linux Pacemaker. Instead, you
need to use the Azure Standard Load Balancer SKU.

The ideal Availability Zones combination


If you want to deploy an SAP NetWeaver or S/4HANA system across zones, there are
two architecture patterns you can deploy:

Active/active: The pair of VMs running ASCS/SCS and the pair of VMs running the
DBMS layer are distributed across two zones. The number of VMs running the SAP
application layer are deployed in even numbers across the same two zones. If a
DBMS or ASCS/SCS VM fails over, some of the open and active transactions
might be rolled back, but users remain logged in. It doesn't really matter in
which of the zones the active DBMS VM and the application instances run. This
architecture is the preferred architecture to deploy across zones.
Active/passive: The pair of VMs running ASCS/SCS and the pair of VMs running the
DBMS layer are distributed across two zones. The number of VMs running the SAP
application layer are deployed into one of the Availability Zones. You run the
application layer in the same zone as the active ASCS/SCS and DBMS instance. You
use this deployment architecture if the network latency across the different zones
is too high to run the application layer distributed across the zones. Instead, the
SAP application layer needs to run in the same zone as the active ASCS/SCS and/or
DBMS instance. If an ASCS/SCS or DBMS VM fails over to the secondary zone, you
might encounter higher network latency and with that a reduction of throughput.
And you're required to fail back the previously failed-over VM as soon as possible
to get back to the previous throughput levels. If a zonal outage occurs, the
application layer needs to be failed over to the secondary zone, an activity that
users experience as a complete system shutdown. In some of the Azure regions, this
architecture is the only viable architecture when you want to use Availability Zones.
If you can't accept the potential impact of an ASCS/SCS or DBMS VM failing over
to the secondary zone, you might be better off staying with availability set
deployments

So before you decide how to use Availability Zones, you need to determine:
The network latency among the three zones of an Azure region. Knowing the
network latency between the zones of a region is going to enable you to choose
the zones with the least network latency in cross-zone network traffic.
The difference between VM-to-VM latency within one of the zones, of your
choosing, and the network latency across two zones of your choosing.
A determination of whether the VM types that you need to deploy are available in
the two zones that you selected (see the query example after this list). With some
VM SKUs, you might encounter situations in which some SKUs are available in only
two of the three zones.
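
For the SKU availability check, a query sketch with a placeholder region and size family:

```bash
# Show zone availability for a VM family in a region (placeholders for region/size).
az vm list-skus --location westeurope --size Standard_M --zone --output table
```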

Network latency between and within zones


To determine the latency between the different zones, you need to:

Deploy the VM SKU you want to use for your DBMS instance in all three zones.
Make sure Azure Accelerated Networking is enabled when you take this
measurement. Accelerated Networking has been the default setting for a few
years. Nevertheless, check whether it's enabled and working
When you find the two zones with the least network latency, deploy another three
VMs of the VM SKU that you want to use as the application layer VM across the
three Availability Zones. Measure the network latency against the two DBMS VMs
in the two DBMS zones that you selected.
Use niping as a measuring tool. This tool, from SAP, is described in SAP support
notes #500235 and #1100926 . Focus on the commands documented for
latency measurements. Because ping doesn't work through the Azure Accelerated
Networking code paths, we don't recommend that you use it.

You don't need to perform these tests manually. You can find a PowerShell procedure
Availability Zone Latency Test that automates the latency tests described.
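
For reference, a hedged sketch of the manual niping measurement; the hostname, payload size, and loop count are placeholder values, and SAP notes #500235 and #1100926 remain the authoritative source for the parameters:

```bash
# On the target VM (for example, the DBMS VM): start the niping server.
niping -s -I 0

# On the source VM: run 100 round trips with a 10-byte payload against the target.
niping -c -H <target-vm-hostname> -B 10 -L 100
```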

Based on your measurements and the availability of your VM SKUs in the Availability
Zones, you need to make some decisions:

Define the ideal zones for the DBMS layer.


Determine whether you want to distribute your active SAP application layer across
one, two, or all three zones, based on differences of network latency in-zone versus
across zones.
Determine whether you want to deploy an active/passive configuration or an
active/active configuration, from an application point of view. (These
configurations are explained later in this article.)

In making these decisions, also take into account SAP's network latency
recommendations, as documented in SAP note #1100926 .
) Important

The measurements and decisions you make are valid for the Azure subscription you
used when you took the measurements. If you use another Azure subscription, the
mapping of enumerated zones might be different. As a result, you need to repeat
the measurements or find out the mapping of the new subscription relative to the
old subscription by using the Avzone-Mapping script .

) Important

It's expected that the measurements described earlier will provide different results
in every Azure region that supports Availability Zones . Even if your network
latency requirements are the same, you might need to adopt different deployment
strategies in different Azure regions because the network latency between zones
can be different. In some Azure regions, the network latency among the three
different zones can be vastly different. In other regions, the network latency among
the three different zones might be more uniform. The claim that there's always a
network latency between 1 and 2 milliseconds isn't correct. The network latency
across Availability Zones in Azure regions can't be generalized.

Active/Active deployment
This deployment architecture is called active/active because you deploy your active SAP
application servers across two or three zones. The SAP Central Services instance that
uses enqueue replication will be deployed between two zones. The same is true for the
DBMS layer, which will be deployed across the same zones as SAP Central Services. When
considering this configuration, you need to find the two Availability Zones in your
region that offer cross-zone network latency that's acceptable for your workload and
your synchronous DBMS replication. You also want to be sure the delta between
network latency within the zones you selected and the cross-zone network latency isn't
too large.

It's in the nature of the SAP architecture that, unless you configure it differently, users
and batch jobs can be executed in the different application instances. The side effect of
this fact with the active/active deployment is that batch jobs might be executed by any
SAP application instance, independent of whether that instance runs in the same zone as
the active DBMS or not. If the difference in network latency between the different zones
is small compared to the network latency within a zone, the difference in run times of
batch jobs might not be significant. However, the larger the difference between in-zone
network latency and cross-zone network latency, the more the run time of batch jobs
can be affected if a job is executed in a zone where the DBMS instance isn't active. It's
up to you as a customer to decide which differences in run time are acceptable, and
with that, which network latency for cross-zone traffic is tolerable for your workload.

Azure regions where such an active/active deployment could be possible, without
significantly large differences in run time and throughput within the application layer
deployed across different Availability Zones, include:

Australia East (two of the three zones)
Brazil South (all three zones)
Central India (all three zones)
Central US (all three zones)
East Asia (all three zones)
East US (two of the three zones)
East US2 (all three zones)
Germany West Central (all three zones)
Israel Central (all three zones)
Italy North (two of the three zones)
Korea Central (all three zones)
Poland Central (all three zones)
Qatar Central (all three zones)
North Europe (all three zones)
Norway East (two of the three zones)
South Africa North (two of the three zones)
South Central US (all three zones)
Southeast Asia (all three zones)
Sweden Central (all three zones)
Switzerland North (all three zones)
UAE North (all three zones)
UK South (two of the three zones)
West Europe (two of the three zones)
West US2 (all three zones)
West US3 (all three zones)

The region list provided doesn't relieve you, as a customer, from testing your workload
to decide whether an active/active deployment architecture is possible.

Azure regions where the active/active SAP deployment architecture across zones might
not be possible include:
Canada Central
France Central
Japan East

Though for your individual workload, the architecture might still work; therefore, you
should test before you decide on an architecture. Azure is constantly working to
improve the quality and latency of its networks. Measurements conducted years back
might not reflect current conditions anymore.

Depending on the run time differences you're willing to tolerate, other regions not
listed here could qualify as well.

A simplified schema of an active/active deployment across two zones could look like
this:

The following considerations apply for this configuration:

If you're not using Azure proximity placement groups, you treat the Azure
Availability Zones as fault domains for all the VMs, because availability sets can't be
deployed in Azure Availability Zones.
If you want to combine zonal deployments for the DBMS layer and central services,
but want to use Azure availability sets for the application layer, you need to use
Azure proximity placement groups as described in the article Azure Proximity
Placement Groups for optimal network latency with SAP applications.
For the load balancers of the failover clusters of SAP Central Services and the
DBMS layer, you need to use the Standard SKU Azure Load Balancer. The Basic
Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with
its subnets, is stretched across zones. You don't need separate virtual networks and
subnets for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks .
Unmanaged disks aren't supported for zonal deployments.
Azure Premium Storage, Ultra SSD storage, or ANF don't support any type of
storage replication across zones. For DBMS deployments, we rely on database
methods to replicate data across zones.
For SMB and NFS shares based on Azure Premium Files , zonal redundancy with
synchronous replication is offered. Check this document for availability of ZRS for
Azure Premium Files in the region you want to deploy into. The usage of zonally
replicated NFS and SMB shares is fully supported with SAP application layer
deployments and high availability failover clusters for NetWeaver or S/4HANA
central services. Documents that cover these cases are:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files
Premium SMB for SAP applications
The third zone is used to host the SBD device if you build a SUSE Linux Pacemaker
cluster and use SBD devices instead of the Azure Fencing Agent, or to host more
application instances.
To achieve run time consistency for critical business processes, you can try to direct
certain batch jobs and users to application instances that are in-zone with the
active DBMS instance by using SAP batch server groups, SAP logon groups, or RFC
groups. However, in a zonal failover process, you would need to manually move
these groups to instances running on VMs that are in-zone with the active DB VM.
You might want to deploy dormant dialog instances in each of the zones.

) Important

In this active/active scenario, charges for cross-zone traffic apply. Check the
document Bandwidth Pricing Details . The data transfer between the SAP
application layer and SAP DBMS layer is quite intensive. Therefore, the active/active
scenario can contribute to costs.
Active/Passive deployment
If you can't find an acceptable delta between the network latency within one zone and
the latency of cross-zone network traffic, you can deploy an architecture that has an
active/passive character from the SAP application layer point of view. You define an
active zone, which is the zone where you deploy the complete application layer and
where you attempt to run both the active DBMS and the SAP Central Services instance.
With such a configuration, you need to make sure that business transactions and batch
jobs don't show extreme run time variations, depending on whether a job runs in-zone
with the active DBMS instance or not.

Azure regions where this type of deployment architecture across different zones could
be preferable are:

Canada Central
France Central
Japan East
Norway East
South Africa North

The basic layout of the architecture looks like this:

The following considerations apply for this configuration:


Availability sets can't be deployed in Azure Availability Zones. To mitigate, you can
use Azure proximity placement groups as documented in the article Azure
Proximity Placement Groups for optimal network latency with SAP applications.
When you use this architecture, you need to monitor the status closely and try to
keep the active DBMS and SAP Central Services instances in the same zone as your
deployed application layer. If there's a failover of SAP Central Services or the
DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
For the load balancers of the failover clusters of SAP Central Services and the
DBMS layer, you need to use the Standard SKU Azure Load Balancer. The Basic
Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with
its subnets, is stretched across zones. You don't need separate virtual networks for
each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks .
Unmanaged disks aren't supported for zonal deployments.
Azure Premium Storage, Ultra SSD storage, or ANF don't support any type of
storage replication across zones. For DBMS deployments, we rely on database
methods to replicate data across zones.
For SMB and NFS shares based on Azure Premium Files , zonal redundancy with
synchronous replication is offered. Check this document for availability of ZRS for
Azure Premium Files in the region you want to deploy into. The usage of zonally
replicated NFS and SMB shares is fully supported with SAP application layer
deployments and high availability failover clusters for NetWeaver or S/4HANA
central services. Documents that cover these cases are:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files
Premium SMB for SAP applications
The third zone is used to host the SBD device if you build a SUSE Linux Pacemaker
cluster and use SBD devices instead of the Azure Fencing Agent, or to host
additional application instances.
You should deploy dormant VMs in the passive zone (from a DBMS point of view)
so that you can start application resources in case of a zone failure. Another
possibility could be to use Azure Site Recovery , which is able to replicate active
VMs to dormant VMs between zones.
You should invest in automation that allows you to automatically start the SAP
application layer in the second zone if a zonal outage occurs.
Combined high availability and disaster
recovery configuration
Microsoft doesn't share any information about geographical distances between the
facilities that host different Azure Availability Zones in an Azure region. Still, some
customers are using zones for a combined HA and DR configuration that promises a
recovery point objective (RPO) of zero. An RPO of zero means that you shouldn't lose
any committed database transactions even in disaster recovery cases.

7 Note

We recommend that you use a configuration like this only in certain circumstances.
For example, you might use it when data can't leave the Azure region for security or
compliance reasons.

Here's one example of how such a configuration might look:


The following considerations apply for this configuration:

You're either assuming that there's a significant distance between the facilities
hosting an Availability Zone or you're forced to stay within a certain Azure region.
Availability sets can't be deployed in Azure Availability Zones. To compensate for
that, you can use Azure proximity placement groups as documented in the article
Azure Proximity Placement Groups for optimal network latency with SAP
applications.
When you use this architecture, you need to monitor the status closely, and try to
keep the active DBMS and SAP Central Services instances in the same zone as your
deployed application layer. If there's a failover of SAP Central Services or the
DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
You should have production application instances preinstalled in the VMs that run
the active QA application instances.
In a zonal failure case, shut down the QA application instances and start the
production instances instead. You need to use virtual names for the application
instances to make this work.
For the load balancers of the failover clusters of SAP Central Services and the
DBMS layer, you need to use the Standard SKU Azure Load Balancer. The Basic
Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with
its subnets, is stretched across zones. You don't need separate virtual networks for
each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks .
Unmanaged disks aren't supported for zonal deployments.
Azure Premium Storage, Ultra SSD storage, or ANF don't support any type of
storage replication across zones. For DBMS deployments, we rely on database
methods to replicate data across zones.
For SMB and NFS shares based on Azure Premium Files , zonal redundancy with
synchronous replication is offered. Check this document for availability of ZRS for
Azure Premium Files in the region you want to deploy into. The usage of zonally
replicated NFS and SMB shares is fully supported with SAP application layer
deployments and high availability failover clusters for NetWeaver or S/4HANA
central services. Documents that cover these cases are:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files
Premium SMB for SAP applications
The third zone is used to host the SBD device if you build a SUSE Linux Pacemaker
cluster and use SBD devices instead of the Azure Fencing Agent.

Next steps
Here are some next steps for deploying across Azure Availability Zones:

Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster
shared disk in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows failover
cluster and file share for SAP ASCS/SCS instances
Configuration options to minimize
network latency with SAP applications
Article • 04/24/2024

) Important

In November 2021, we made significant changes in the way proximity placement
groups should be used with SAP workloads in zonal deployments.

SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are
sensitive to network latency between the SAP application tier and the SAP database tier.
This sensitivity is the result of most of the business logic running in the application layer.
Because the SAP application layer runs the business logic, it issues queries to the
database tier at a high frequency, at a rate of thousands or tens of thousands per
second. In most cases, the nature of these queries is simple. They can often be run on
the database tier in 500 microseconds or less.

The time spent on the network to send such a query from the application tier to the
database tier and receive the result sent back has a major impact on the time it takes to
run business processes. This sensitivity to network latency is why you might want to
achieve certain minimum network latency in SAP deployment projects. See SAP Note
#1100926 - FAQ: Network performance for guidelines on how to classify the network
latency.

In many Azure regions, the number of datacenters has grown. At the same time,
customers, especially for high-end SAP systems, are using more specialized VM families
like the Mv2 or Mv3 family and newer. These Azure virtual machine types aren't always
available in each of the datacenters that make up an Azure region. These facts can create
opportunities to optimize network latency between the SAP application layer and the
SAP DBMS layer.

Azure provides different deployment options for SAP workloads. For the chosen
deployment type, you have options to optimize network latency, if needed. Each option
is described in detail in the following sections of this article:

Proximity Placement Groups
Virtual Machine Scale Set with Flexible Orchestration
Proximity Placement Groups
Proximity placement groups enable the grouping of different VM types under a single
network spine, ensuring optimal low network latency between them. When the first VM
is deployed into a proximity placement group, that VM gets bound to a specific network
spine. All other VMs that are deployed into the same proximity placement group get
grouped under the same network spine. As appealing as this prospect sounds, the usage
of the construct introduces some restrictions and pitfalls as well:

You can't assume that all Azure VM types are available in every Azure datacenter or
under every network spine. As a result, the combination of different VM types
within one proximity placement group can be severely restricted. These restrictions
occur because the host hardware that is needed to run a certain VM type might
not be present in the datacenter or under the network spine to which the proximity
placement group was assigned.
As you resize parts of the VMs that are within one proximity placement group, you
can't automatically assume that in all cases the new VM type is available in the
same datacenter or under the network spine the proximity placement group got
assigned to.
As Azure decommissions hardware, it might force certain VMs of a proximity
placement group into another Azure datacenter or another network spine. For
details covering this case, read the document Proximity placement groups.

) Important

As a result of the potential restrictions, proximity placement groups should only be
used:

When necessary in certain scenarios (see later)
When the network latency between application layer and DBMS layer is too
high and impacts the workload
Only at the granularity of a single SAP system and not for a whole system
landscape or a complete SAP landscape
In a way that keeps the different VM types and the number of VMs within a
proximity placement group to a minimum

The scenarios where proximity placement groups can be used to optimize network
latency are:
You want to deploy the critical resources of your SAP workload across different
availability zones and on the other hand need VMs of the application tier to be
spread across different fault domains by using availability sets in each of the zones.
In this case, as later described in the document, proximity placement groups are
the glue needed.
You deploy the SAP workload with availability sets, where the SAP database tier,
the SAP application tier, and the ASCS/SCS VMs are grouped in three different
availability sets. In such a case, you want to make sure that the availability sets
aren't spread across the complete Azure region, since this could, dependent on the
Azure region, result in network latency that could impact SAP workload negatively.
You use proximity placement groups to group VMs together to achieve the lowest
possible network latency between the services hosted in the VMs. For example,
when latency within an availability zone alone doesn't meet the application
requirements.

As for deployment scenario #2: in many regions, especially regions without availability
zones and most regions with availability zones, the network latency is acceptable
independent of where the VMs land. Though there are some Azure regions that can't
provide a sufficiently good experience if the three different availability sets aren't
collocated through the usage of proximity placement groups.

What are proximity placement groups?


An Azure proximity placement group is a logical construct. When a proximity placement
group is defined, it's bound to an Azure region and an Azure resource group. When
VMs are deployed, a proximity placement group is referenced by:

The first Azure VM deployed under a network spine with many Azure compute
units and low network latency. Such a network spine often matches a single Azure
datacenter. You can think of the first virtual machine as a "scope VM" that is
deployed into a compute scale unit based on Azure allocation algorithms that are
eventually combined with deployment parameters.
All subsequent VMs deployed that reference the proximity placement group are
going to be deployed under the same network spine as the first virtual machine.

7 Note

If there's no host hardware deployed that could run a specific VM type under the
network spine where the first VM was placed, the deployment of the requested VM
type won’t succeed. You’ll get an allocation failure message that indicates that the
VM can't be supported within the perimeter of the proximity placement group.
To reduce risk of the above, it's recommended to use the intent option when creating
the proximity placement group. The intent option allows you to list the VM types that
you're intending to include into the proximity placement group. This list of VM types will
be taken to find the best datacenter that hosts these VM types. If such a datacenter is
found, the PPG is going to be created and is scoped for the datacenter that fulfills the
VM SKU requirements. If there's no such datacenter found, the creation of the proximity
placement group is going to fail. You can find more information in the documentation
PPG - Use intent to specify VM sizes. Be aware that actual capacity situations aren't
taken into account in the checks triggered by the intent option. As a result, there still
could be allocation errors rooted in insufficient capacity available.
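
As a minimal, hedged sketch of creating a proximity placement group with the intent option in Azure PowerShell (resource names, zone, and VM sizes here are illustrative, not prescriptive):

Azure PowerShell

# Create a PPG scoped to a datacenter that can host the listed VM sizes
New-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate" -Location "westus2" -Zone "1" -IntentVMSizeList "Standard_E16s_v4", "Standard_M64s"

If no datacenter in the region can host all the listed VM sizes, the creation fails instead of producing a poorly scoped group.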

A single Azure resource group can have multiple proximity placement groups assigned
to it. But a proximity placement group can be assigned to only one Azure resource
group.

For more information and deployment examples of proximity placement groups, see the
available documentation.

Proximity placement groups with zonal deployments


It's important to provide a reasonably low network latency between the SAP application
tier and the DBMS tier. In most situations a zonal deployment alone fulfills this
requirement. For a limited set of scenarios, a zonal deployment alone might not meet
the application latency requirements. Such situations require VM placement that's as
close as possible to enable reasonably low network latency; for such an SAP system, an
Azure proximity placement group can be defined.

Avoid bundling several SAP production or nonproduction systems into a single
proximity placement group. Avoid bundles of SAP systems because the more systems
you group in a proximity placement group, the higher the chances:

That you require a VM type that isn't available under the network spine to which
the proximity placement group was assigned.
That requests for resources of nonmainstream VMs, like M-series VMs, could
eventually go unfulfilled when you need to expand the number of VMs in the
proximity placement group over time.

Based on many improvements deployed by Microsoft into the Azure regions to reduce
network latency within an Azure availability zone, the deployment guidance when using
proximity placement groups for zonal deployments, looks like:
The difference from the recommendation given so far is that the database VMs in the
two zones are no longer part of the proximity placement groups. The proximity
placement groups per zone are now scoped with the deployment of the VM running the
SAP ASCS/SCS instances. This also means that, for the regions where availability zones
are collected by multiple datacenters, the ASCS/SCS instance and the application tier
could run under one network spine and the database VMs could run under another
network spine. Though with the network improvements made, the network latency
between the SAP application tier and the DBMS tier should still be sufficient for good
performance and throughput. The advantage of this new configuration is that you have
more flexibility in resizing VMs or moving to new VM types with either the DBMS layer
or/and the application layer of the SAP system.

For the special case of using Azure NetApp Files (ANF) for the DBMS environment and
the ANF related new functionality of Azure NetApp Files application volume group for
SAP HANA and its necessity for proximity placement groups, check the document NFS
v4.1 volumes on Azure NetApp Files for SAP HANA.

Proximity placement groups with availability set deployments
In this case, the purpose is to use proximity placement groups to collocate the VMs that
are deployed through different availability sets. In this usage scenario, you aren't using a
controlled deployment across different availability zones in a region. Instead you want
to deploy the SAP system by using availability sets. As a result, you have at least an
availability set for the DBMS VMs, ASCS/SCS VMs, and the application tier VMs. Since
you can't specify an availability set AND an availability zone at deployment time of a
VM, you can't control where the VMs in the different availability sets are going to be
allocated. In some Azure regions, this could result in network latency between different
VMs that's still too high to give a sufficiently good performance experience.
So the resulting architecture would look like:

In this graphic, a single proximity placement group would be assigned to a single SAP
system. This PPG gets assigned to the three availability sets. The proximity placement
group is then scoped by deploying the first database tier VMs into the DBMS availability
set. This architecture recommendation collocates all VMs under the same network spine.
It introduces the restrictions mentioned earlier in this article. Therefore, the proximity
placement group architecture should be used sparingly.

Combine availability sets and availability zones with proximity placement groups
One of the problems with using availability zones for SAP system deployments is that
you can't deploy the SAP application tier by using availability sets within the specific
availability zone. You want the SAP application tier to be deployed in the same zones as
the SAP ASCS/SCS VMs. Referencing an availability zone and an availability set when
deploying a single VM isn't possible so far. But if you deploy a VM by just instructing an
availability zone, you lose the ability to make sure the application layer VMs are spread
across different update and failure domains.

By using proximity placement groups, you can bypass this restriction. Here's the
deployment sequence:
Create a proximity placement group.
Deploy your anchor VM, recommended being the ASCS/SCS VM, by referencing an
availability zone.
Create an availability set that references the Azure proximity placement group.
(See the command later in this article.)
Deploy the application layer VMs by referencing the availability set and the
proximity placement group.

) Important

It is important to understand that disks of the application layer VMs are not
guaranteed to be allocated in the same availability zone as the VMs that are directed
using the proximity placement group. The result of the deployment shown in the
next steps may be that the VMs are allocated under the same network spine, and with
that in the same availability zone, as the anchor VM. But the respective disks (base VHD
and mounted Azure block storage disks) may not be allocated under the same
network spine or even in the same availability zone. Instead, the disks of those VMs can
be allocated in any of the datacenters of the specific region. The disks of the anchor
VM that got deployed by defining a zone, though, are going to be deployed in the
same zone as the VM.

Instead of deploying the first VM as demonstrated in the previous section, you reference
an availability zone and the proximity placement group when you deploy the VM:

Azure PowerShell

New-AzVm -ResourceGroupName "ppgexercise" -Name "centralserviceszone1" -Location "westus2" -OpenPorts 80,3389 -Zone "1" -ProximityPlacementGroup "collocate" -Size "Standard_E8s_v4"

A successful deployment of this virtual machine would host the ASCS/SCS instance of
the SAP system in one availability zone. In this case, the VM and the base VHD of the
VM and potentially mounted Azure block storage disks are allocated within the same
availability zone. The scope of the proximity placement group is fixed to one of the
network spines in the availability zone you defined.

In the next step, you need to create the availability sets you want to use for the
application layer of your SAP system.

Define and create the proximity placement group. The command for creating the
availability set requires an additional reference to the proximity placement group ID (not
the name). You can get the ID of the proximity placement group by using this command:
Azure PowerShell

Get-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate"

When you create the availability set, you need to consider additional parameters when
you're using managed disks (default unless specified otherwise) and proximity
placement groups:

Azure PowerShell

New-AzAvailabilitySet -ResourceGroupName "ppgexercise" -Name "ppgavset" -Location "westus2" -ProximityPlacementGroupId "/subscriptions/my very long ppg id string" -Sku "aligned" -PlatformUpdateDomainCount 3 -PlatformFaultDomainCount 2

Ideally, you should use three fault domains. But the number of supported fault domains
can vary from region to region. In this case, the maximum number of fault domains
possible for the specific regions is two. To deploy your application layer VMs, you need
to add a reference to your availability set name and the proximity placement group
name, as shown here:

Azure PowerShell

New-AzVm -ResourceGroupName "ppgexercise" -Name "appinstance1" -Location "westus2" -OpenPorts 80,3389 -AvailabilitySetName "ppgavset" -ProximityPlacementGroup "collocate" -Size "Standard_E16s_v4"

7 Note

The disks of the VMs deployed into the availability set above are not forced to be
allocated in the same availability zone as the VMs. Though you achieved that the
application layer VMs are spread across different fault domains under the same
network spine as the anchor VM, the disks, while also allocated in different fault
domains, may be allocated in different locations anywhere within the region.

The result of this deployment is:

An SAP Central Services instance for your SAP system that's located in a specific
availability zone.
An SAP application layer that's located, through availability sets, in the same
network spine as the SAP Central Services (ASCS/SCS) VM or VMs.
7 Note

Because you deploy one DBMS and ASCS/SCS VM into one zone and the second
DBMS and ASCS/SCS VM into another zone to create a high availability
configuration, you'll need a different proximity placement group for each of the
zones. The same is true for any availability set that you use.

Change proximity placement group configurations of an existing system
If you implemented proximity placement groups following the recommendations given
so far, and you want to adjust to the new configuration, you can do so with the methods
described in these articles:

Deploy VMs to proximity placement groups using Azure CLI.
Deploy VMs to proximity placement groups using PowerShell.

You can also use these commands in cases where you're getting allocation errors
because you can't move to a new VM type with an existing VM in the proximity
placement group.
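
As a hedged sketch of the pattern those articles describe, assuming a current Az.Compute module and the illustrative resource names used earlier, moving an existing VM into a different proximity placement group requires deallocating it first:

Azure PowerShell

# Deallocate the VM; changing the PPG requires the VM to be stopped (deallocated)
Stop-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1" -Force

# Point the VM at the target proximity placement group and apply the change
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate"
$vm = Get-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1"
Update-AzVM -ResourceGroupName "ppgexercise" -VM $vm -ProximityPlacementGroupId $ppg.Id

# Restart the VM; allocation now happens within the scope of the target PPG
Start-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1"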

Virtual Machine Scale Set with Flexible Orchestration
To avoid the limitations associated with proximity placement groups, it's advised to
deploy SAP workloads across availability zones using a flexible scale set with FD=1. This
deployment strategy ensures that VMs deployed in each zone aren't restricted to a
single datacenter or network spine, and all SAP system components, such as databases,
ASCS/ERS, and application tier are scoped within a zone. With all SAP system
components being scoped at the zonal level, the network latency between different
components of a single SAP system must be sufficient to ensure satisfactory
performance and throughput. The key benefit of this new deployment option with
flexible scale set with FD=1 is that it provides greater flexibility in resizing VMs or
switching to new VM types for all layers of SAP system. Also, the scale set would allocate
VMs across multiple fault domains within a single zone, which is ideal for running
multiple VMs of the application tier in each zone. For more information, see virtual
machine scale set for SAP workload document.
In a nonproduction or non-HA environment, it's possible to deploy all SAP system
components, including the database, ASCS, and application tier, within a single zone
using a flexible scale set with FD=1.
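
A hedged sketch of this deployment pattern with Azure PowerShell, assuming a current Az.Compute module; the resource names, zone, and VM size are illustrative:

Azure PowerShell

# Create a flexible orchestration scale set pinned to zone 1 with FD=1, so VM
# placement isn't restricted to a single datacenter or network spine in that zone
$vmssConfig = New-AzVmssConfig -Location "westus2" -Zone "1" -PlatformFaultDomainCount 1 -OrchestrationMode "Flexible"
$vmss = New-AzVmss -ResourceGroupName "ppgexercise" -VMScaleSetName "sapzone1scaleset" -VirtualMachineScaleSet $vmssConfig

# Deploy the SAP VMs of that zone by referencing the scale set
New-AzVm -ResourceGroupName "ppgexercise" -Name "appinstance1" -Location "westus2" -VmssId $vmss.Id -Size "Standard_E16s_v4"

You would repeat this per zone, one scale set per zone, for the high availability pairs.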

Previously recommended deployment options

This section includes details about previously recommended deployment options to
optimize network latency for SAP. With new features and Azure growth over time, the
details in this section should be applied in rare cases only.

Proximity placement groups for whole SAP system with zonal deployments
The proximity placement group usage that we recommended previously is shown in this
graphic.

You create a proximity placement group (PPG) in each of the two availability zones into
which you deployed your SAP system. All the VMs of a particular zone are part of the
individual proximity placement group of that particular zone. You start in each zone by
deploying the DBMS VM to scope the PPG, and then deploy the ASCS VM into the same
zone and PPG. In a third step, you create an Azure availability set, assign the availability
set to the scoped PPG, and deploy the SAP application layer into it. The advantage of
this configuration was that all the components are nicely aligned underneath the same
network spine. The large disadvantage is that your flexibility in resizing virtual machines
can be limited.

Based on many improvements deployed by Microsoft into the Azure regions to reduce
network latency within an Azure availability zone, the current deployment guidance for
zonal deployments, described earlier in this article, applies.

Proximity placement groups and HANA Large Instances


If some of your SAP systems rely on HANA Large Instances for the database layer, you
can experience significant improvements in network latency between the HANA Large
Instances unit and Azure VMs when you're using HANA Large Instances units that are
deployed in Revision 4 rows or stamps. One improvement is that HANA Large Instances
units, as they're deployed, deploy with a proximity placement group. You can use that
proximity placement group to deploy your application layer VMs. As a result, those VMs
will be deployed in the same datacenter that hosts your HANA Large Instances unit.

To determine whether your HANA Large Instances unit is deployed in a Revision 4 stamp
or row, check the article Azure HANA Large Instances control through Azure portal. In
the attributes overview of your HANA Large Instances unit, you can also determine the
name of the proximity placement group because it was created when your HANA Large
Instances unit was deployed. The name that appears in the attributes overview is the
name of the proximity placement group that you should deploy your application layer
VMs into.

As compared to SAP systems that use only Azure virtual machines, when you use HANA
Large Instances, you have less flexibility in deciding how many Azure resource groups to
use. All the HANA Large Instances units of a HANA Large Instances tenant are grouped
in a single resource group, as described in this article. Unless you deploy into different
tenants to separate, for example, production and nonproduction systems or other
systems, all your HANA Large Instances units will be deployed in one HANA Large
Instances tenant. This tenant has a one-to-one relationship with a resource group. But a
separate proximity placement group will be defined for each of the single units.

As a result, the relationships among Azure resource groups and proximity placement
groups for a single tenant will be as shown here:


Next steps
Check out the documentation:

SAP workloads on Azure: planning and deployment checklist
Deploy VMs to proximity placement groups using Azure CLI
Deploy VMs to proximity placement groups using PowerShell
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads
Public endpoint connectivity for Virtual
Machines using Azure Standard Load
Balancer in SAP high-availability
scenarios
Article • 03/10/2023

The scope of this article is to describe configurations that enable outbound
connectivity to public end points. The configurations are mainly in the context of high
availability with Pacemaker for SUSE / RHEL.

If you are using Pacemaker with Azure fence agent in your high availability solution,
then the VMs must have outbound connectivity to the Azure management API. The
article presents several options to enable you to select the option that is best suited for
your scenario.

Overview
When implementing high availability for SAP solutions via clustering, one of the
necessary components is Azure Load Balancer. Azure offers two load balancer SKUs:
standard and basic.

Standard Azure load balancer offers some advantages over the Basic load balancer. For
instance, it works across Azure Availability Zones, and it has better monitoring and
logging capabilities for easier troubleshooting, as well as reduced latency. The "HA
ports" feature covers all ports, so it's no longer necessary to list all individual ports.

There are some important differences between the basic and the standard SKU of Azure
load balancer. One of them is the handling of outbound traffic to public end point. For
full Basic versus Standard SKU load balancer comparison, see Load Balancer SKU
comparison.

When VMs without public IP addresses are placed in the backend pool of internal (no
public IP address) Standard Azure load balancer, there is no outbound connectivity to
public end points, unless additional configuration is done.

If a VM is assigned a public IP address, or the VM is in the backend pool of a load
balancer with a public IP address, it will have outbound connectivity to public end
points.
SAP systems often contain sensitive business data. It is rarely acceptable for VMs
hosting SAP systems to be accessible via public IP addresses. At the same time, there are
scenarios, which would require outbound connectivity from the VM to public end points.

Examples of scenarios requiring access to Azure public end points are:

Azure Fence Agent requires access to management.azure.com and
login.microsoftonline.com
Azure Backup
Azure Site Recovery
Using a public repository for patching the operating system
The SAP application data flow may require outbound connectivity to public end
points

If your SAP deployment doesn’t require outbound connectivity to public end points, you
don't need to implement the additional configuration. It is sufficient to create an
internal standard SKU Azure Load Balancer for your high availability scenario, assuming that
there is also no need for inbound connectivity from public end points.
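
As a minimal, hedged sketch (the resource names are illustrative; your virtual network, subnet, and IP will differ), creating such an internal Standard SKU load balancer with Azure PowerShell could look like this:

Azure PowerShell

# Look up the subnet that hosts the SAP VMs
$vnet = Get-AzVirtualNetwork -Name "MyVnet" -ResourceGroupName "MyResourceGroup"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "MySubnet" -VirtualNetwork $vnet

# Internal frontend with a private IP only: no inbound connectivity from public end points
$feip = New-AzLoadBalancerFrontendIpConfig -Name "MyFrontEnd" -SubnetId $subnet.Id
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name "MyBackEndPool"

New-AzLoadBalancer -ResourceGroupName "MyResourceGroup" -Name "MyInternalILB" -Location "westus2" -Sku "Standard" -FrontendIpConfiguration $feip -BackendAddressPool $bepool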

7 Note

When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points.
If the VMs have either public IP addresses or are already in the backend pool of
Azure Load balancer with public IP address, the VM will already have outbound
connectivity to public end points.

Read the following papers first:

Azure Standard Load Balancer
Azure Standard Load Balancer overview - comprehensive overview of Azure
Standard Load balancer, important principles, concepts, and tutorials
Outbound connections in Azure - scenarios on how to achieve outbound
connectivity in Azure
Load balancer outbound rules - explains the concepts of load balancer
outbound rules and how to create outbound rules
Azure Firewall
Azure Firewall Overview - overview of Azure Firewall
Tutorial: Deploy and configure Azure Firewall - instructions on how to configure
Azure Firewall via Azure portal
Virtual Networks - User-defined rules - Azure routing concepts and rules
Security Groups Service Tags - how to simplify your Network Security Groups and
Firewall configuration with service tags

Option 1: Additional external Azure Standard Load Balancer for outbound connections to internet
One option to achieve outbound connectivity to public end points, without allowing
inbound connectivity to the VM from public end points, is to create a second load
balancer with a public IP address, add the VMs to the backend pool of the second load
balancer, and define only outbound rules.
Use Network Security Groups to control the public end points that are accessible for
outbound calls from the VM.
For more information, see Scenario 2 in document Outbound connections.
The configuration would look like:

Important considerations
You can use one additional Public Load Balancer for multiple VMs in the same
subnet to achieve outbound connectivity to public end points and optimize cost
Use Network Security Groups to control which public end points are accessible
from the VMs. You can assign the Network Security Group either to the subnet, or
to each VM. Where possible, use Service tags to reduce the complexity of the
security rules.
Azure standard Load balancer with public IP address and outbound rules allows
direct access to public end point. If you have corporate security requirements to
have all outbound traffic pass via centralized corporate solution for auditing and
logging, you may not be able to fulfill the requirement with this scenario.

 Tip

Where possible, use Service tags to reduce the complexity of the Network Security
Group .

Deployment steps
1. Create Load Balancer
a. In the Azure portal , click All resources, Add, then search for Load Balancer
b. Click Create
c. Load Balancer Name MyPublicILB
d. Select Public as a Type, Standard as SKU
e. Select Create Public IP address and specify as a name MyPublicILBFrontEndIP
f. Select Zone Redundant as Availability zone
g. Click Review and Create, then click Create

2. Create Backend pool MyBackendPoolOfPublicILB and add the VMs.
a. Select the Virtual network
b. Select the VMs and their IP addresses and add them to the backend pool

3. Create outbound rules.

Azure CLI

az network lb outbound-rule create --address-pool MyBackendPoolOfPublicILB --frontend-ip-configs MyPublicILBFrontEndIP --idle-timeout 30 --lb-name MyPublicILB --name MyOutBoundRules --outbound-ports 10000 --enable-tcp-reset true --protocol All --resource-group MyResourceGroup

4. Create Network Security Group rules to restrict access to specific public end points.
If there's an existing Network Security Group, you can adjust it. The example below
shows how to enable access to the Azure management API:
a. Navigate to the Network Security Group
b. Click Outbound Security Rules
c. Add a rule to Deny all outbound Access to Internet.
d. Add a rule to Allow access to AzureCloud, with priority lower than the priority
of the rule to deny all internet access.

The outbound security rules would look like:

For more information on Azure Network security groups, see Security Groups .
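
If you prefer scripting these rules over the portal, a hedged Azure PowerShell sketch follows; the NSG name and rule priorities are illustrative:

Azure PowerShell

$nsg = Get-AzNetworkSecurityGroup -Name "MyNSG" -ResourceGroupName "MyResourceGroup"

# Allow outbound HTTPS to the AzureCloud service tag at a lower priority number (higher precedence)
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowAzureCloudOutBound" -Access Allow -Protocol Tcp -Direction Outbound -Priority 200 -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "AzureCloud" -DestinationPortRange "443"

# Deny all other outbound internet access at a higher priority number (lower precedence)
$nsg | Add-AzNetworkSecurityRuleConfig -Name "DenyInternetOutBound" -Access Deny -Protocol "*" -Direction Outbound -Priority 4000 -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "Internet" -DestinationPortRange "*"

$nsg | Set-AzNetworkSecurityGroup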

Option 2: Azure Firewall for outbound connections to internet
Another option to achieve outbound connectivity to public end points, without allowing
inbound connectivity to the VM from public end points, is with Azure Firewall. Azure
Firewall is a managed service, with built-in High Availability and it can span multiple
Availability Zones.
You will also need to deploy a User Defined Route, associated with the subnet where the
VMs and the Azure load balancer are deployed, pointing to the Azure Firewall, to route
traffic through the Azure Firewall.
For details on how to deploy Azure Firewall, see Deploy And Configure Azure Firewall.

The architecture would look like:


Important considerations
Azure firewall is cloud native service, with built-in High Availability and it supports
zonal deployment.
Requires additional subnet that must be named AzureFirewallSubnet.
If transferring large data sets outbound of the virtual network where the SAP VMs
are located, to a VM in another virtual network, or to public end point, it may not
be cost effective solution. One such example is copying large backups across
virtual networks. For details see Azure Firewall pricing.
If the corporate Firewall solution is not Azure Firewall, and you have security
requirements to have all outbound traffic pass though centralized corporate
solution, this solution may not be practical.

 Tip

Where possible, use Service tags to reduce the complexity of the Azure Firewall
rules.

Deployment steps
1. The deployment steps assume that you already have Virtual network and subnet
defined for your VMs.
2. Create Subnet AzureFirewallSubnet in the same Virtual Network where the VMs
and the Standard Load Balancer are deployed.
a. In Azure portal, Navigate to the Virtual Network: Click All Resources, Search for
the Virtual Network, Click on the Virtual Network, Select Subnets.
b. Click Add Subnet. Enter AzureFirewallSubnet as Name. Enter appropriate
Address Range. Save.

3. Create Azure Firewall.
a. In Azure portal select All resources, click Add, Firewall, Create. Select Resource
group (select the same resource group, where the Virtual Network is).
b. Enter name for the Azure Firewall resource. For instance, MyAzureFirewall.
c. Select Region and select at least two Availability zones, aligned with the
Availability zones where your VMs are deployed.
d. Select your Virtual Network, where the SAP VMs and Azure Standard Load
balancer are deployed.
e. Public IP Address: Click create and enter a name. For Instance
MyFirewallPublicIP.

4. Create Azure Firewall Rule to allow outbound connectivity to specified public end
points. The example shows how to allow access to the Azure Management API
public endpoint.
a. Select Rules, Network Rule Collection, then click Add network rule collection.
b. Name: MyOutboundRule, enter Priority, Select Action Allow.
c. Service: Name ToAzureAPI. Protocol: Select Any. Source Address: enter the
range for your subnet, where the VMs and Standard Load Balancer are deployed
for instance: 11.97.0.0/24. Destination ports: enter *.
d. Save
e. As you are still positioned on the Azure Firewall, Select Overview. Note down
the Private IP Address of the Azure Firewall.

5. Create route to Azure Firewall
a. In Azure portal select All resources, then click Add, Route Table, Create.
b. Enter Name MyRouteTable, select Subscription, Resource group, and Location
(matching the location of your Virtual network and Firewall).
c. Save
The firewall rule would look like:

6. Create User Defined Route from the subnet of your VMs to the private IP of
MyAzureFirewall.
a. As you are positioned on the Route Table, click Routes. Select Add.
b. Route name: ToMyAzureFirewall, Address prefix: 0.0.0.0/0. Next hop type: Select
Virtual Appliance. Next hop address: enter the private IP address of the firewall
you configured: 11.97.1.4.
c. Save
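
A hedged Azure PowerShell sketch of steps 5 and 6; the names, the subnet prefix, and the firewall's private IP mirror the portal example above:

Azure PowerShell

# Create the route table and a default route pointing at the firewall's private IP
$rt = New-AzRouteTable -Name "MyRouteTable" -ResourceGroupName "MyResourceGroup" -Location "westus2"
Add-AzRouteConfig -RouteTable $rt -Name "ToMyAzureFirewall" -AddressPrefix "0.0.0.0/0" -NextHopType "VirtualAppliance" -NextHopIpAddress "11.97.1.4" | Set-AzRouteTable

# Associate the route table with the subnet of the SAP VMs
$vnet = Get-AzVirtualNetwork -Name "MyVnet" -ResourceGroupName "MyResourceGroup"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "MySubnet" -AddressPrefix "11.97.0.0/24" -RouteTable $rt | Set-AzVirtualNetwork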

Option 3: Using Proxy for Pacemaker calls to Azure Management API
You could use a proxy to allow Pacemaker calls to the Azure management API public end
point.

Important considerations
If there's already a corporate proxy in place, you could route outbound calls to
public end points through it. Outbound calls to public end points will then go
through the corporate control point.
Make sure the proxy configuration allows outbound connectivity to the Azure
management API: https://management.azure.com and
https://login.microsoftonline.com
Make sure there's a route from the VMs to the proxy
The proxy will handle only HTTP/HTTPS calls. If there's an additional need to make
outbound calls to public end points over different protocols (like RFC), an
alternative solution will be needed
The proxy solution must be highly available, to avoid instability in the Pacemaker
cluster
Depending on the location of the proxy, it may introduce additional latency in the
calls from the Azure Fence Agent to the Azure Management API. If your corporate
proxy is still on-premises, while your Pacemaker cluster is in Azure, measure the
latency and consider whether this solution is suitable for you
If there isn't already a highly available corporate proxy in place, we don't
recommend this option, as the customer would be incurring extra cost and
complexity. Nevertheless, if you decide to deploy an additional proxy solution for
the purpose of allowing outbound connectivity from Pacemaker to the Azure
Management public API, make sure the proxy is highly available and that the
latency from the VMs to the proxy is low.

Pacemaker configuration with Proxy
There are many different proxy options available in the industry. Step-by-step
instructions for the proxy deployment are outside of the scope of this document. In the
example below, we assume that your proxy is responding to MyProxyService and
listening to port MyProxyPort.
To allow pacemaker to communicate with the Azure management API, perform the
following steps on all cluster nodes:

1. Edit the pacemaker configuration file /etc/sysconfig/pacemaker and add the
following lines (all cluster nodes):

Console

sudo vi /etc/sysconfig/pacemaker
# Add the following lines
http_proxy=http://MyProxyService:MyProxyPort
https_proxy=http://MyProxyService:MyProxyPort

2. Restart the pacemaker service on all cluster nodes.

SUSE

Console

# Place the cluster in maintenance mode
sudo crm configure property maintenance-mode=true
#Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo crm configure property maintenance-mode=false

Red Hat

Console
# Place the cluster in maintenance mode
sudo pcs property set maintenance-mode=true
#Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo pcs property set maintenance-mode=false

Other options
If outbound traffic is routed via a third-party, URL-based firewall proxy:

If using the Azure fence agent, make sure the firewall configuration allows outbound
connectivity to the Azure management API: https://management.azure.com and
https://login.microsoftonline.com

If using SUSE's Azure public cloud update infrastructure for applying updates and
patches, see Azure Public Cloud Update Infrastructure 101

Next steps
Learn how to configure Pacemaker on SUSE in Azure
Learn how to configure Pacemaker on Red Hat in Azure
SAP HANA high availability for Azure
virtual machines
Article • 02/10/2023

You can use numerous Azure capabilities to deploy mission-critical databases like SAP
HANA on Azure VMs. This article provides guidance on how to achieve availability for
SAP HANA instances that are hosted in Azure VMs. The article describes several
scenarios that you can implement by using the Azure infrastructure to increase
availability of SAP HANA in Azure.

Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in
Azure, including:

How to deploy virtual machines or virtual networks via the Azure portal or
PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the
option to use JavaScript Object Notation (JSON) templates.

This article also assumes that you are familiar with installing SAP HANA instances, and
with administrating and operating SAP HANA instances. It's especially important to be
familiar with the setup and operations of HANA system replication. This includes tasks
like backup and restore for SAP HANA databases.

These articles provide a good overview of using SAP HANA in Azure:

Manual installation of single-instance SAP HANA on Azure VMs


Set up SAP HANA system replication in Azure VMs
Back up SAP HANA on Azure VMs

It's also a good idea to be familiar with these articles about SAP HANA:

High availability for SAP HANA


FAQ: High availability for SAP HANA
Perform system replication for SAP HANA
SAP HANA 2.0 SPS 01 What’s new: High availability
Network recommendations for SAP HANA system replication
SAP HANA system replication
SAP HANA service auto-restart
Configure SAP HANA system replication
Beyond being familiar with deploying VMs in Azure, before you define your availability
architecture in Azure, we recommend that you read Manage the availability of Windows
virtual machines in Azure.

Service level agreements for Azure components


Azure has different availability SLAs for different components, like networking, storage,
and VMs. All SLAs are documented. For more information, see Microsoft Azure Service
Level Agreements .

SLA for Virtual Machines describes three different SLAs, for three different
configurations:

A single VM that uses Azure premium SSDs for the OS disk and all data disks. This
option provides a monthly uptime of 99.9 percent.
Multiple (at least two) VMs that are organized in an Azure availability set. This
option provides a monthly uptime of 99.95 percent.
Multiple (at least two) VMs that are organized in an Availability Zone. This option
provides a monthly uptime of 99.99 percent.

Measure your availability requirement against the SLAs that Azure components can
provide. Then, choose your scenarios for SAP HANA to achieve your required level of
availability.

Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure
region
Article • 06/20/2023

This article describes several availability scenarios for SAP HANA within one Azure
region. Azure has many regions, spread throughout the world. For the list of Azure
regions, see Azure regions . For deploying SAP HANA on VMs within one Azure region,
Microsoft offers deployment of a single VM with a HANA instance. For increased
availability, you can deploy two VMs with two HANA instances using either a flexible
scale set with FD=1, Availability Zones, or an availability set, combined with HANA
system replication for availability.

Azure regions that provide Availability Zones consist of multiple data centers, each with
its own power source, cooling, and network infrastructure. The purpose of offering
different zones within a single Azure region is to enable the deployment of applications
across two or three available Availability Zones. By distributing your application
deployment across zones, any power or networking issues affecting a specific Azure
Availability Zone infrastructure wouldn't fully disrupt your application's functionality
within the Azure region. While there might be some reduced capacity, such as the
potential loss of VMs in one zone, the VMs in the remaining zones would continue to
operate without interruption. To set up two HANA instances in separate VMs spanning
different zones, you have the option to deploy VMs using either the flexible scale set
with FD=1 or the availability zones deployment option.

For increased availability within a region, it's advised to deploy two VMs with two HANA
instances using an availability set. An Azure availability set is a logical grouping
capability that ensures that the VM resources configured within the availability set are
failure-isolated from each other when they're deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple
physical servers, compute racks, storage units, and network switches. In some Azure
documentation, this configuration is referred to as placements in different update and
fault domains. These placements usually are within a single Azure datacenter. If a power
or network issue were to affect that entire datacenter, all your capacity deployed there
would be affected.

The placement of datacenters that represent Azure Availability Zones is a compromise
between delivering acceptable network latency between services deployed in different
zones and keeping enough distance between the datacenters. Natural catastrophes ideally wouldn't affect
the power, network supply, and infrastructure for all Availability Zones in this region.
However, as monumental natural catastrophes have shown, Availability Zones might not
always provide the availability that you want within one region. Think about Hurricane
Maria that hit the island of Puerto Rico on September 20, 2017. The hurricane basically
caused a nearly 100 percent blackout on the 90-mile-wide island.

Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use
Azure Premium Storage to host the operating system disk and all your data disks. The
Azure uptime SLA of 99.9 percent and the SLAs of other Azure components are sufficient
for you to fulfill your availability SLAs for your customers. In this scenario, you have no
need to use an Azure Availability Set for VMs that run the DBMS layer. In this scenario,
you rely on two different features:

Azure VM auto restart (also referred to as Azure service healing)
SAP HANA service auto restart

Azure VM auto restart, or service healing, is a functionality in Azure that works on two
levels:

The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.

A health check functionality monitors the health of every VM that's hosted on an Azure
server host. If a VM falls into a non-healthy state, a reboot of the VM can be initiated by
the Azure host agent that checks the health of the VM. The fabric controller checks the
health of the host by checking many different parameters that might indicate issues with
the host hardware. It also checks on the accessibility of the host via the network. An
indication of problems with the host can lead to the following events:

If the host signals a bad health state, a reboot of the host and a restart of the VMs
that were running on the host is triggered.
If the host isn't in a healthy state after successful reboot, a redeployment of the
VMs that were originally on the now unhealthy node onto a healthy host server is
initiated. In this case, the original host is marked as not healthy. It won't be used
for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate
restart of the VMs on a healthy host is triggered.

With the host and VM monitoring provided by Azure, Azure VMs that experience host
issues are automatically restarted on a healthy Azure host.

) Important
Azure service healing doesn't restart Linux VMs where the guest OS is in a kernel
panic state. The default settings of the commonly used Linux releases don't
automatically restart VMs or servers where the Linux kernel is in a panic state.
Instead, the default is to keep the OS in the kernel panic state so that a kernel
debugger can be attached for analysis. Azure honors that behavior by not
automatically restarting a VM with the guest OS in such a state. The assumption is
that such occurrences are extremely rare. You can override the default behavior to
enable a restart of the VM. To do so, set the parameter kernel.panic in
/etc/sysctl.conf. The value you set for this parameter is in seconds. Common
recommended values wait 20-30 seconds before triggering the reboot through this
parameter. For more information, see sysctl.conf .
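As a minimal sketch of that change, assuming a 20-second wait before reboot (the value is
an example, not a requirement), the setting can be applied as follows, run as root:

```bash
# Append the kernel.panic parameter to /etc/sysctl.conf (value in seconds;
# 20-30 seconds is the commonly recommended range before the VM reboots itself).
echo "kernel.panic = 20" >> /etc/sysctl.conf

# Load the changed setting without a reboot.
sysctl -p

# Verify the active value.
sysctl kernel.panic
```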

The second feature that you rely on in this scenario is the fact that the HANA service
that runs in a restarted VM starts automatically after the VM reboots. You can set up
HANA service auto restart through the watchdog services of the various HANA
services.
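A quick way to verify that the HANA services came back after a VM restart is the
sapcontrol interface. The instance number 00, the SID HN1, and the profile path in the
sketch below are placeholders for illustration; the Autostart profile parameter shown in
the comment controls the start of the instance at OS boot:

```bash
# As the <sid>adm user, list the HANA processes and their state after the VM reboot.
# All services should report GREEN once the watchdog has restarted them.
sapcontrol -nr 00 -function GetProcessList

# To also start the instance automatically at OS boot, the instance profile
# (for example /usr/sap/HN1/SYS/profile/HN1_HDB00_<hostname>) can contain:
#   Autostart = 1
```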

You might improve this single-VM scenario by adding a cold failover node to an SAP
HANA configuration. In the SAP HANA documentation, this setup is called host
autofailover . This configuration might make sense in an on-premises deployment
situation where the server hardware is limited, and you dedicate a single-server node as
the host autofailover node for a set of production hosts. But in Azure, where the
underlying infrastructure of Azure provides a healthy target server for a successful VM
restart, it doesn't make sense to deploy SAP HANA host autofailover. Because of Azure
service healing, there's no reference architecture that foresees a standby node for HANA
host autofailover.

Special case of SAP HANA scale-out configurations in Azure
High availability architectures based on a standby node or HANA system replication can
be found in the following documents. In cases where standby nodes or HANA system
replication high availability isn't used in SAP HANA scale-out configurations, you can
depend on the service healing capabilities of Azure VMs and the automatic restart of the
SAP HANA instance once the VM is operational again.

Red Hat Enterprise Linux
High availability of SAP HANA scale-out system with HSR on RHEL.
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on RHEL.
SUSE Linux Enterprise Server
High availability of SAP HANA scale-out system with HSR on SLES.
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on SLES.

Availability scenarios for two different VMs


To ensure the availability of the HANA system within a specific region, you have the
option to configure two VMs across the availability zones of the region or within the
region. To achieve this objective, you can configure the VMs by using the flexible scale
set, availability zones, or availability set deployment options. The base setup in Azure would
look like:

To illustrate the different SAP HANA availability scenarios, a few of the layers in the
diagram are omitted. The diagram shows only layers that depict VMs, hosts, Availability
Sets, and Azure regions. Azure Virtual Network instances, resource groups, and
subscriptions don't play a role in the scenarios described in this section.

Replicate backups to a second virtual machine


One of the most rudimentary setups is to use backups. In particular, you might have
transaction log backups shipped from one VM to another Azure VM. You can choose the
Azure Storage type. In this setup, you're responsible for scripting the copy of scheduled
backups that are conducted on the first VM to the second VM. If you need to use the
second VM instance, you must restore the full, incremental/differential, and transaction
log backups to the point in time that you need.
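A minimal sketch of such a copy script, assuming HANA writes its backups to /hana/backup
and that the second VM is reachable as hana-vm2 over SSH (both are placeholder names),
could look like this:

```bash
#!/bin/bash
# Ship new HANA data and log backups from the primary VM to the second VM.
# Paths and host name are placeholders; adjust them to your backup configuration.
SOURCE_DIR="/hana/backup/"
TARGET_HOST="hana-vm2"
TARGET_DIR="/hana/backup/"

# Copy only new or changed backup files; keep already shipped files in place.
rsync -av --partial "${SOURCE_DIR}" "${TARGET_HOST}:${TARGET_DIR}"
```

Scheduled through cron, for example every 15 minutes, such a script would ship new
transaction log backups shortly after they're written.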

The architecture looks like:

This setup isn't well suited to achieving great Recovery Point Objective (RPO) and
Recovery Time Objective (RTO) times. RTO times especially would suffer due to the need
to fully restore the complete database by using the copied backups. However, this setup
is useful for recovering from unintended data deletion on the main instances. With this
setup, at any time, you can restore to a certain point in time, extract the data, and
import the deleted data into your main instance. Hence, it might make sense to use a
backup copy method in combination with other high-availability functionality.

While backups are being copied, you might be able to use a smaller VM than the main
VM that the SAP HANA instance is running on. Keep in mind that you can attach a
smaller number of VHDs to smaller VMs. For information about the limits of individual
VM types, see Sizes for Linux virtual machines in Azure.

SAP HANA system replication without automatic failover


The scenarios described in this section use SAP HANA system replication. For the SAP
documentation, see System replication . Scenarios without automatic failover aren't
common for configurations within one Azure region. A configuration without automatic
failover, though it avoids a Pacemaker setup, obligates you to monitor and fail over
manually. Because this takes time and effort as well, most customers rely on Azure
service healing instead. There are some edge cases where this configuration might help
in terms of failure scenarios. Or, in some cases, a customer might want to realize more
cost efficiency.

SAP HANA system replication without auto failover and without data preload
In this scenario, you use SAP HANA system replication to move data in a synchronous
manner to achieve an RPO of 0. On the other hand, you have a long enough RTO that
you don't need either failover or data preloading into the HANA instance cache. In this
case, it's possible to achieve further economy in your configuration by taking the
following actions:

Run another SAP HANA instance in the second VM. The SAP HANA instance in the
second VM takes most of the memory of the virtual machine. If a failover to
the second VM occurs, you need to shut down the running SAP HANA instance that has
the data fully loaded in the second VM, so that the replicated data can be loaded
into the cache of the targeted HANA instance in the second VM.
Use a smaller VM size on the second VM. If a failover occurs, you have an
additional step before the manual failover. In this step, you resize the VM to the
size of the source VM.

The scenario looks like:

7 Note

Even if you don't use data preload in the HANA system replication target, you need
at least 64 GB of memory. You also need enough memory in addition to 64 GB to
keep the rowstore data in the memory of the target instance.

SAP HANA system replication without auto failover and with data
preload

In this scenario, data that's replicated to the HANA instance in the second VM is
preloaded. This preloading eliminates the two cost advantages that you gain without
data preload. In this case, you can't run another SAP HANA system on the second VM. You
also can't use a smaller VM size. Hence, customers rarely implement this scenario.

SAP HANA system replication with automatic failover


In the standard and most common availability configuration within one Azure region,
two Azure VMs running Linux with HA packages have a failover cluster defined. The HA
Linux cluster is based on the Pacemaker framework on SLES or RHEL, together with a
fencing device (see the SLES or RHEL examples).

From an SAP HANA perspective, the replication mode that's used is synchronous, and
automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot
standby node. The standby node receives a synchronous stream of change records from
the primary SAP HANA instance. As transactions are committed by the application at the
HANA primary node, the primary HANA node waits to confirm the commit to the
application until the secondary SAP HANA node confirms that it received the commit
record. SAP HANA offers two synchronous replication modes. For details and for a
description of differences between these two synchronous replication modes, see the
SAP article Replication modes for SAP HANA system replication .
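The following sketch shows how such a synchronous replication pair is typically
registered with hdbnsutil, run as the <sid>adm user. The site names, the host name
hana-a, and the instance number 00 are placeholders:

```bash
# On the primary VM: enable system replication and name the primary site.
hdbnsutil -sr_enable --name=SITE-A

# On the secondary VM (with the HANA instance stopped): register against the
# primary. replicationMode=sync corresponds to one of the two synchronous
# modes discussed above; operationMode=logreplay enables continuous log replay.
hdbnsutil -sr_register --remoteHost=hana-a --remoteInstance=00 \
    --replicationMode=sync --operationMode=logreplay --name=SITE-B

# Check the replication state from the primary.
hdbnsutil -sr_state
```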

The overall configuration looks like:


You might choose this solution because it enables you to achieve an RPO=0 and a low
RTO. Configure the SAP HANA client connectivity so that the SAP HANA clients use the
virtual IP address to connect to the HANA system replication configuration. Such a
configuration eliminates the need to reconfigure the application if a failover to the
secondary node occurs. In this scenario, the Azure VM SKUs for the primary and
secondary VMs must be the same.

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:

Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication

For more information about SAP HANA availability across Azure regions, see:

SAP HANA availability across Azure regions


SAP HANA availability across Azure
regions
Article • 06/19/2023

This article describes scenarios related to SAP HANA availability across different Azure
regions. Because of the distance between Azure regions, setting up SAP HANA
availability in multiple Azure regions involves special considerations.

Why deploy across multiple Azure regions


Azure regions often are separated by large distances. Depending on the geopolitical
region, the distance between Azure regions might be hundreds of miles, or even several
thousand miles, like in the United States. Because of the distance, network traffic
between assets that are deployed in two different Azure regions experiences significant
network roundtrip latency. The latency is significant enough to exclude synchronous
data exchange between two SAP HANA instances under typical SAP workloads.

On the other hand, organizations often have a distance requirement between the
location of the primary datacenter and a secondary datacenter. A distance requirement
helps provide availability if a natural disaster occurs in a wider geographic location.
Examples include the hurricanes that hit the Caribbean and Florida in September and
October 2017. Your organization might have at least a minimum distance requirement.
For most Azure customers, a minimum distance definition requires you to design for
availability across Azure regions . Because the distance between two Azure regions is
too large to use the HANA synchronous replication mode, RTO and RPO requirements
might force you to deploy availability configurations in one region, and then
supplement with additional deployments in a second region.

Another aspect to consider in this scenario is failover and client redirect. The assumption
is that a failover between SAP HANA instances in two different Azure regions always is a
manual failover. Because the replication mode of SAP HANA system replication is set to
asynchronous, there's a potential that data committed in the primary HANA instance
hasn't yet made it to the secondary HANA instance. Therefore, automatic failover isn't
an option for configurations where the replication is asynchronous. Even with manually
controlled failover, as in a failover exercise, you need to take measures to ensure that all
the committed data on the primary side made it to the secondary instance before you
manually move over to the other Azure region.

The Azure virtual network deployed in the second Azure region uses a different IP
address range. So, you either need to change the SAP HANA client
configuration, or preferably, you need to create steps to change the name resolution.
This way, the clients are redirected to the new secondary site's server IP address. For
more information, see the SAP article Client connection recovery after takeover .

Simple availability between two Azure regions


You might choose not to put any availability configuration in place within a single
region, but still have the demand to have the workload served if a disaster occurs.
Typical cases for such scenarios are nonproduction systems. Although having the system
down for half a day or even a day is sustainable, you can't allow the system to be
unavailable for 48 hours or more. To make the setup less costly, you can run another,
less important system in the VM that functions as the replication destination. You can
also size the VM in the secondary region to be smaller, and choose not to preload the
data. Because the failover is manual and entails many more steps to fail over the
complete application stack, the additional time to shut down the VM, resize it, and then
restart the VM is acceptable.

If you're using the scenario of sharing the DR target with a QA system in one VM, you
need to take these considerations into account:

Two operation modes, delta_datashipping and logreplay, are available for such a
scenario.
Both operation modes have different memory requirements without preloading
data.
Delta_datashipping might require drastically less memory without the preload
option than logreplay could require. See chapter 4.3 of the SAP document How To
Perform System Replication for SAP HANA .
The memory requirement of the logreplay operation mode without preload isn't
deterministic and depends on the columnstore structures loaded. In extreme cases,
you might require 50% of the memory of the primary instance. The memory
requirement for the logreplay operation mode is independent of whether you chose
to set data preload or not.
7 Note

In this configuration, you can't provide an RPO=0 because your HANA system
replication mode is asynchronous. If you need to provide an RPO=0, this
configuration isn't the configuration of choice.

A small change that you can make in the configuration is to configure data
preloading. However, given the manual nature of failover and the fact that application
layers also need to move to the second region, it might not make sense to preload data.

Combine availability within one region and across regions
A combination of availability within and across regions might be driven by these factors:

A requirement of RPO=0 within an Azure region.
The organization isn't willing or able to have global operations affected by a major
natural catastrophe that affects a larger region. This was the case for some
hurricanes that hit the Caribbean over the past few years.
Regulations that demand distances between primary and secondary sites that are
clearly beyond what Azure availability zones can provide.

In these cases, you can set up what SAP calls an SAP HANA multi-tier system replication
configuration by using HANA system replication. The architecture would look like:
SAP introduced multi-target system replication with HANA 2.0 SPS3. Multi-target
system replication brings some advantages in update scenarios. For example, the DR site
(Region 2) isn't impacted when the secondary HA site is down for maintenance or
updates. You can find out more about HANA multi-target system replication at the SAP
Help Portal . Possible architecture with multi-target replication would look like:

If the organization has requirements for high availability readiness in the second (DR)
Azure region, then the architecture would look like:
Using logreplay as operation mode, this configuration provides an RPO=0, with low
RTO, within the primary region. The configuration also provides decent RPO if a move to
the second region is involved. The RTO times in the second region are dependent on
whether data is preloaded. Many customers use the VM in the secondary region to run a
test system. In that use case, the data can't be preloaded.

) Important

The operation modes between the different tiers need to be homogeneous. You
can't use logreplay as operation mode between tier 1 and tier 2 and
delta_datashipping to supply tier 3. You can only choose one or the other
operation mode, and it needs to be consistent for all tiers. Since delta_datashipping is
not suitable to give you an RPO=0, the only reasonable operation mode for such a
multi-tier configuration remains logreplay. For details about operation modes and
some restrictions, see the SAP article Operation modes for SAP HANA system
replication .
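As a sketch of how the remote tier of such a configuration is registered: the in-region
secondary registers with replicationMode=sync as shown earlier, while the DR tier
registers asynchronously, and all tiers use the consistent operationMode=logreplay. Host
names, instance number, and site names below are placeholders:

```bash
# On the VM in the DR region (with the HANA instance stopped): register
# asynchronously, either against the primary (multi-target) or against the
# in-region secondary (multi-tier) by adjusting --remoteHost accordingly.
hdbnsutil -sr_register --remoteHost=hana-primary --remoteInstance=00 \
    --replicationMode=async --operationMode=logreplay --name=SITE-DR
```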

Next steps
For step-by-step guidance on setting up these configurations in Azure, see:

Set up SAP HANA system replication in Azure VMs
High availability for SAP HANA by using system replication
Disaster recovery overview and
infrastructure guidelines for SAP
workload
Article • 05/08/2024

Many organizations running critical business applications on Azure set up both a High
Availability (HA) and a Disaster Recovery (DR) strategy. The purpose of high availability is
to increase the SLA of business systems by eliminating single points of failure in the
underlying system infrastructure. High Availability technologies reduce the effect of
unplanned infrastructure failure and help with planned maintenance. Disaster Recovery
is defined as policies, tools, and procedures to enable the recovery or continuation of
vital technology infrastructure and systems following a geographically widespread
natural or human-induced disaster.

To achieve high availability for an SAP workload on Azure, virtual machines are typically
deployed in an availability set, across availability zones, or in a flexible scale set to protect
applications from infrastructure maintenance or failure within a region. But the
deployment doesn't protect applications from a widespread disaster within the region. So, to
protect applications from a regional disaster, a disaster recovery strategy for the
applications should be in place.
approach that is designed to assist an organization in executing the recovery processes
in response to a disaster, and to protect or minimize IT services disruption and promote
recovery.

This document provides details on protecting SAP workloads from a large-scale
catastrophe by implementing a structured DR approach. The details in this document are
presented at an abstract level, based on different Azure services and SAP components.
The exact DR strategy and the order of recovery for your SAP workload must be tested,
documented, and fine-tuned regularly. Also, the document focuses on the Azure-to-
Azure DR strategy for SAP workloads.

General disaster recovery plan considerations


SAP workload on Azure runs on virtual machines in combination with different Azure
services to deploy different layers (central services, application servers, database server)
of a typical SAP NetWeaver application. In general, a DR strategy should be planned for
the entire IT landscape running on Azure, which means to take into account non-SAP
applications as well. The business solution running in SAP systems might not run as
whole, if the dependent services or assets aren't recovered on the DR site. So you need
to come up with a well-defined comprehensive DR plan considering all the components
and systems.

For DR on Azure, organizations should consider different scenarios that might trigger
failover.

SAP application or business process availability.
Azure services (like virtual machines, storage, load balancer etc.) unavailability
within a region due to widespread failure.
Potential threats and vulnerabilities to the application (for example, Application
layer DDoS attack)
Business compliance required operational tasks to test DR strategy (for example,
DR failure exercise to be performed every year as per compliance).

To achieve the recovery goal for different scenarios, organizations must outline a Recovery
Time Objective (RTO) and a Recovery Point Objective (RPO) for their workload based on
the business requirements. RTO describes the amount of time an application can be down,
typically measured in hours, minutes, or seconds. RPO describes the amount of
transactional data that the business can accept losing in order for normal operations
to resume. Identifying the RTO and RPO of your business is crucial, as it helps you
design your DR strategy optimally. The components (compute, storage, database etc.)
involved in SAP workload are replicated to the DR region using different techniques
(Azure native services, native DB replication technology, custom scripts). Each technique
provides different RPO, which must be accounted for when designing a DR strategy. On
Azure, you can use some of the Azure native services like Azure Site Recovery, Azure
Backup that can help you to meet RTO and RPO of your SAP workloads. Refer to SLA of
Azure Site Recovery and Azure Backup to optimally align with your RTO and RPO.

Design considerations for disaster recovery on Azure
There are different elements to consider when designing a disaster recovery solution on
Azure. The principles and concepts that are considered to design on-premises disaster
recovery solutions apply to Azure as well. But in Azure, region selection is a key part in
design strategy for disaster recovery. So, keep the following points in mind when
choosing DR region on Azure.

Business or regulatory compliance requirements could specify a distance
requirement between a primary and disaster recovery site. A distance requirement
helps to provide availability if a natural disaster occurs in a wider geography. In
such case, an organization can choose another Azure region as their disaster
recovery site. Azure regions are often separated by a large distance that might be
hundreds or even thousands of kilometers like in the United States. Because of the
distance, the network roundtrip latency could be higher, which might result in a
higher RPO.

Customers who want to mimic their on-premises metro DR strategy on Azure can
use availability zones for disaster recovery. But a zone-to-zone DR strategy might fall
short of the resilience requirement if there's a geographically widespread natural
disaster.

On Azure, each region is paired with another region within the same geography
(except for Brazil South). This approach allows for platform provided replication of
resources across regions. The benefits of choosing a paired region can be found in the
region pairs document. If an organization chooses to use Azure paired regions,
several additional points for an SAP workload need to be considered:

Not all Azure services offer cross-regional replication in the paired region.

The Azure services and features in paired Azure regions might not be
symmetrical. For example, Azure NetApp Files, VM SKUs like M-Series available
in the Primary region might not be available in the paired region. To check if the
Azure product or services is available in a region, see Azure Products by
Region .

The GRS option is available for storage accounts with the standard storage type,
which replicates data to the paired region. But standard storage isn't suitable for SAP
DBMS disks or virtual data disks.

The Azure backup service used to back up supported solutions can replicate
backups only between paired regions. For all your other data, run your own
replications with native DBMS features like SQL Server Always On, SAP HANA
System Replication, and other services. Use a combination of Azure Site
Recovery, rsync or robocopy, and other third-party software for the SAP
application layer.

Reference SAP workload deployment


After identifying a DR region, it's important that the breadth of Azure core services (like
network, compute, storage) configured in the primary region is available and can be
configured in the DR region. Organizations must develop a DR deployment pattern for their
SAP workload. The deployment pattern varies and must align with the organization's needs.
Deploy production SAP workloads into your primary region and non-production
workloads into the disaster recovery region.
Deploy all SAP workloads (production and non-production) into your primary
region. The disaster recovery region is only used if there's a failover.

The following reference architecture shows a typical SAP NetWeaver system running on
Azure along with high availability in the primary region. The secondary site shown
below is the disaster recovery site where the SAP systems will be restored after a
disaster event. Both primary and disaster recovery regions are part of the same
subscription. To achieve DR for an SAP workload, you need to identify a recovery strategy for
each SAP layer along with the different Azure services that the application uses.

Organizations should plan and design a DR strategy for their entire IT landscape. Usually,
SAP systems running in a production environment are integrated with different services
and interfaces like Active Directory, DNS, third-party applications, and so on. So you must
include the non-SAP systems and other services in your disaster recovery planning as
well. This document focuses on the recovery planning for SAP applications. But you can
expand the size and scope of the DR planning for dependent components to fit your
requirements.


Infrastructure components of DR solution for
SAP workload
An SAP workload running on Azure uses different infrastructure components to run a
business solution. To plan DR for such a solution, it's essential that all infrastructure
components configured in the primary region are available and can be configured in
the DR region as well. The following infrastructure components should be factored in when
designing a DR solution for an SAP workload on Azure:

Network
Compute
Storage

Network
ExpressRoute extends your on-premises network into the Microsoft cloud over a
private connection with the help of a connectivity provider. When designing a disaster
recovery architecture, account for building robust backend network
connectivity by using geo-redundant ExpressRoute circuits. We advise setting up at
least one ExpressRoute circuit from on-premises to the primary region, and
another that connects to the disaster recovery region. Refer to the Designing of
Azure ExpressRoute for disaster recovery article, which describes different
scenarios to design disaster recovery for ExpressRoute.

7 Note

Consider setting up a site-to-site (S2S) VPN as a backup of Azure
ExpressRoute. For more information, see Using S2S VPN as a backup for
Azure ExpressRoute Private Peering.

Virtual network and subnets span all availability zones in a region. For DR across
two regions, you need to configure separate virtual networks and subnets on the
disaster recovery region. Refer to About networking in Azure VM disaster recovery
to learn more on the networking setup on DR region.

Azure Standard Load Balancer provides networking elements for the high-
availability design of your SAP systems. For clustered systems, Standard Load
Balancer provides the virtual IP address for the cluster service, like ASCS/SCS
instances and databases running on VMs. To run highly available SAP system on
the DR site, a separate load balancer must be created and the cluster configuration
should be adjusted accordingly.

Azure Application Gateway is a web traffic load balancer. With its Web Application
Firewall functionality, it's a well-suited service to expose web applications to the
internet with improved security. Azure Application Gateway can serve either
public (internet) or private clients, or both, depending on the configuration. After
failover, to accept similar incoming HTTPs traffic on DR region, a separate Azure
Application Gateway must be configured in the DR region.

As networking components (like virtual network, firewall, and so on) are created
separately in the DR region, you need to make sure that the SAP workload in the DR
region is adapted to the networking changes, like DNS updates, firewall rules, and so on.

Virtual networks in both regions are independent. To establish communication
between the two, you need to enable virtual network peering between the two
regions, as sketched in the example that follows.
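A minimal sketch of global virtual network peering with the Azure CLI, assuming
placeholder resource group and virtual network names; the peering must be created in
both directions:

```bash
# Peer the primary region VNet to the DR region VNet (names are placeholders).
az network vnet peering create \
    --name primary-to-dr \
    --resource-group rg-sap-primary \
    --vnet-name vnet-sap-primary \
    --remote-vnet /subscriptions/<subscription-id>/resourceGroups/rg-sap-dr/providers/Microsoft.Network/virtualNetworks/vnet-sap-dr \
    --allow-vnet-access

# Create the reverse peering from the DR VNet back to the primary VNet.
az network vnet peering create \
    --name dr-to-primary \
    --resource-group rg-sap-dr \
    --vnet-name vnet-sap-dr \
    --remote-vnet /subscriptions/<subscription-id>/resourceGroups/rg-sap-primary/providers/Microsoft.Network/virtualNetworks/vnet-sap-primary \
    --allow-vnet-access
```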

Virtual machines
On Azure, different components of a single SAP system run on virtual machines
with different SKU types. For DR, protection of an application (SAP NetWeaver and
non-SAP) running on Azure VMs can be enabled by replicating components using
Azure Site Recovery to another Azure region or zone. With Azure Site Recovery,
Azure VMs are replicated continuously from primary to disaster recovery site.
Depending on the selected Azure DR region, the VM SKU type might not be
available on the DR site. You need to make sure that the required VM SKU types
are available in the Azure DR region as well. Check Azure Products by Region to
see if the required VM family SKU type is available or not.

) Important

If SAP system is configured with flexible scale set with FD=1, then you need to
use PowerShell to set up Azure Site Recovery for disaster recovery. Currently,
it's the only method available to configure disaster recovery for VMs deployed
in scale set.

For databases running on Azure virtual machines, it's recommended to use native
database replication technology to synchronize data to the disaster recovery site.
The large VMs on which the databases are running might not be available in all
regions. If you're using availability zones for disaster recovery, you should check
that the respective VM SKUs are available in the zone of your disaster recovery site.

7 Note

We don't advise using Azure Site Recovery for databases, as it doesn't
guarantee DB consistency and has data churn limitations.

With production applications running in the primary region at all times, reserved
instances are typically used to economize Azure costs. If you use reserved
instances, you need to sign up for a 1-year or 3-year term commitment, which might
not be cost effective for the DR site. Also, setting up Azure Site Recovery doesn't
guarantee you the capacity of the required VM SKU during your failover. To make
sure that the VM SKU capacity is available, you can consider an option to enable
on-demand capacity reservation. It reserves compute capacity in an Azure region
or an Azure availability zone for any duration of time without commitment. Azure
Site Recovery is integrated with on-demand capacity reservation. With this
integration, you can use the power of capacity reservation with Azure Site Recovery
to reserve compute capacity in the DR site and guarantee your failovers. For more
information, read on-demand capacity reservation limitations and restrictions.

An Azure subscription has quotas for VM families (for example, the Mv2 family) and
other resources. Sometimes organizations want to use a different Azure subscription
for DR. Each subscription (primary and DR) might have different quotas assigned
for each VM family. Make sure that the subscription used for the DR site has
enough compute quota available, as shown in the sketch that follows.
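The quota and capacity checks described above can be scripted with the Azure CLI. A
sketch, with placeholder names and a VM SKU chosen only for illustration (command group
availability depends on your Azure CLI version):

```bash
# Check the current vCPU quota and usage per VM family in the DR region.
az vm list-usage --location westeurope --output table

# Reserve compute capacity for the VM SKU needed after a failover
# (group name, resource group, SKU, and capacity are placeholders).
az capacity reservation group create \
    --name crg-sap-dr --resource-group rg-sap-dr --location westeurope

az capacity reservation create \
    --capacity-reservation-group crg-sap-dr \
    --name cr-sap-app --resource-group rg-sap-dr \
    --sku Standard_E32ds_v5 --capacity 2
```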

Storage
On enabling Azure Site Recovery for a VM to set up DR, the local managed disks
attached to the VMs are replicated to the DR region. During replication, the VM
disk writes are sent to a cache storage account in the source region. Data is sent
from there to the target region, and recovery points are generated from the data.
When you fail over a VM during DR, a recovery point is used to restore the VM in
the target region. But Azure Site Recovery doesn’t support all storages types that
are available in Azure. For more information, see Azure Site Recovery support
matrix for storages.

For SAP system running on Windows with Azure shared disk, you could use Azure
Site Recovery with Azure Shared Disk (preview) . As the feature is in public
preview, we don't recommend implementing the scenario for most critical SAP
production workloads. For more information on supported scenarios for Azure
Shared Disk, see Support matrix for shared disks in Azure VM disaster recovery
(preview)

In addition to Azure managed data disks attached to VMs, different Azure native
storage solutions are used to run SAP applications on Azure. The DR approach for
each Azure storage solution might differ, as not all storage services available in
Azure are supported with Azure Site Recovery. The following table lists the storage
types that are typically used for SAP workloads.

Storage type | DR strategy recommendation
Managed disk | Azure Site Recovery
NFS on Azure Files (LRS or ZRS) | Custom script to replicate data between two sites (for example, rsync)
NFS on Azure NetApp Files | Use cross-region replication of Azure NetApp Files volumes
Azure shared disk (LRS or ZRS) | Azure Site Recovery with Azure shared disk (in preview)
SMB on Azure Files (LRS or ZRS) | Use Robocopy to copy files between two sites
SMB on Azure NetApp Files | Use cross-region replication of Azure NetApp Files volumes

For custom-built storage solutions like an NFS cluster, you need to make sure the
appropriate DR strategy is in place.

Different native Azure storage services (like Azure Files, Azure NetApp Files) might
not be available in all regions. So, to have a similar SAP setup in the DR region
after failover, ensure the respective storage service is offered in the DR site. For more
information, check Azure Products by Region .

If you're using zone-redundant storage (ZRS) for Azure Files and Azure shared
disks in your primary region, and you want to maintain the same ZRS redundancy
option in the DR region as well, refer to Azure Files zone-redundant storage (ZRS)
support for premium file shares and the ZRS for managed disks documentation for
ZRS support in Azure regions.

If you use availability zones for disaster recovery, keep in mind the following points:
Azure NetApp Files isn't zone aware yet. Currently, Azure NetApp Files
isn't deployed in all availability zones in an Azure region. So it might
happen that the Azure NetApp Files service isn't available in the chosen
availability zone for your DR strategy.
Cross-region replication of Azure NetApp Files volumes is only available in fixed
region pairs, not across zones.

If you configure your storage with Active Directory integration, a similar setup
should be done on the DR site storage account as well.

Next steps
Disaster Recovery Guidelines for SAP workload
Azure to Azure disaster recovery architecture using Azure Site Recovery service
Disaster recovery guidelines for SAP
application
Article • 05/08/2024

To configure Disaster Recovery (DR) for an SAP workload on Azure, you need to test,
fine-tune, and update the process regularly. Testing disaster recovery helps in identifying the
sequence of dependent services that are required before you can trigger SAP workload
DR failover or start the system on the secondary site. Organizations usually have their
SAP systems connected to Active Directory (AD) and Domain Name System (DNS)
services to function correctly. When you set up DR for your SAP workload, ensure AD
and DNS services are functioning before you recover SAP and other non-SAP systems,
to ensure the application functions correctly. For guidance on protecting Active
Directory and DNS, learn how to protect Active Directory and DNS. The DR
recommendations for SAP applications described in this document are at an abstract level. You
need to design your DR strategy based on your specific setup and document the end-
to-end scenario.

DR recommendation for SAP workloads


Usually, in distributed SAP NetWeaver systems, the central services, database, and shared
storage (NFS/SMB) are single points of failure (SPOFs). To mitigate the effect of different
SPOFs, it's necessary to set up redundancy for these components. The redundancy of
these SPOF components in the primary region is achieved by configuring high
availability. The high availability setup of a component protects the SAP system from local
failure or catastrophe. But to protect SAP applications from a geographically dispersed
disaster, a DR strategy should be implemented for all the SAP components.

For SAP systems running on virtual machines, you can use Azure Site Recovery to create
a disaster recovery plan. Following is the recommended disaster recovery approach for
each component of an SAP system. Standalone non-NetWeaver SAP engines such as
TREX and non-SAP applications aren't covered in this document.

Components | Recommendation
SAP Web Dispatcher | Replicate VM using Azure Site Recovery
SAP Central Services | Replicate VM using Azure Site Recovery
SAP Application server | Replicate VM using Azure Site Recovery
SAP Database | Use replication method offered by the database
Shared Storage | Replicate content, using appropriate method per storage type

SAP Web Dispatcher


The SAP Web Dispatcher component works as a load balancer for SAP traffic among SAP
application servers. You have different options to achieve high availability of the SAP Web
Dispatcher component in the primary region. For more information about these options,
see High Availability of the SAP Web Dispatcher and SAP Web dispatcher HA setup on
Azure .

Option 1: High availability using a cluster solution.
Option 2: High availability with parallel SAP Web Dispatchers.

To achieve DR for a highly available SAP Web Dispatcher setup in the primary region, you can
use Azure Site Recovery. For parallel web dispatchers (option 2) running in the primary
region, you can configure Azure Site Recovery to achieve DR. But for an SAP Web
Dispatcher configured using option 1 in the primary region, you need to make some
additional changes after failover to have a similar HA setup in the DR region. Because
SAP Web Dispatcher high availability with a cluster solution is configured
in a similar manner to SAP Central Services, follow the same guidelines as mentioned for
SAP Central Services.

SAP Central Services


The SAP central services contain the enqueue and message servers, and they're one of the
SPOFs of your SAP application. In an SAP system, there can be only one such instance, and it
can be configured for high availability. Read High Availability for SAP Central Service to
understand the different high availability solutions for SAP workloads on Azure.

Configuring high availability for SAP Central Services protects resources and processes
from local incidents. To achieve DR for SAP Central Services, you can use Azure Site
Recovery. Azure Site Recovery replicates VMs and the attached managed disks, but
there are additional considerations for the DR strategy. Check the following section for
more information, based on the operating system used for SAP central services.

Windows
For an SAP system, the redundancy of SPOF components in the primary region is
achieved by configuring high availability. To achieve a similar high availability setup in
the disaster recovery region after failover, you need to consider additional points
like cluster reconfiguration and SAP shared directory availability, alongside
replicating VMs and attached managed disks to the DR site using Azure Site Recovery.
On Windows, the high availability of an SAP application can be achieved using
Windows Server Failover Clustering (WSFC). The following diagram shows the different
components involved in configuring high availability of SAP central services with
WSFC. Each component must be evaluated to achieve a similar high availability setup
in the DR site. If you configure SAP Web Dispatcher using WSFC, similar
considerations apply as well.

Load balancer

Azure Site Recovery replicates VMs to the DR site, but it doesn't replicate the Azure
load balancer. You need to create a separate internal load balancer on the DR site
beforehand or after failover. If you create the internal load balancer beforehand, create
an empty backend pool and add the VMs after the failover event.
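A sketch of pre-creating such an internal Standard load balancer with an empty backend
pool, using placeholder names and an illustrative frontend IP address:

```bash
# Create an internal Standard load balancer in the DR virtual network.
# The backend pool starts empty; VMs are added after the failover event.
az network lb create \
    --resource-group rg-sap-dr \
    --name lb-ascs-dr \
    --sku Standard \
    --vnet-name vnet-sap-dr \
    --subnet snet-sap-app \
    --frontend-ip-name ascs-frontend \
    --backend-pool-name ascs-backend \
    --private-ip-address 10.1.0.10
```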

Quorum (cloud witness)

If you configure a cluster with a cloud witness as its quorum mechanism, then you
need to create a separate storage account in the DR region. In the event of a
failover, the quorum setting must be updated with the new storage account name and
access keys.

Windows server failover cluster

If there's a failover, SAP ASCS/ERS VMs configured with WSFC don't work out of
the box. Additional reconfiguration is required to start the SAP system in the DR
region. Based on the type of your deployment (file share or shared disk), refer to the
following blogs to learn more about the additional steps to be performed in the DR
region.

SAP NetWeaver HA deployment with File Share running on Windows failover
to DR Region using Azure Site Recovery .
Disaster Recovery for SAP NetWeaver HA deployment with Azure Shared Disk
on Windows using Azure Site Recovery .

SAP shared directories for Windows

On Windows, the high availability configuration of SAP central services (ASCS and
ERS) is set up with either a file share or a shared disk. Depending on the type of
cluster disk, you need to implement a suitable method to replicate the data on
this disk type to the DR region. The replication methodology for each cluster disk
type is presented at an abstract level. You need to confirm the exact steps to replicate
storage and perform testing.

SAP shared directories | Cross-region replication mechanism
SMB on Azure Files | Robocopy
SMB on Azure NetApp Files | Cross-region replication
Azure shared disk | Azure Site Recovery with shared disks (preview)

7 Note

Azure Site Recovery with shared disk is currently in public preview. So, we don't
recommend implementing the scenario for most critical SAP production
workloads.
SAP Application Servers
In the primary region, the redundancy of the SAP application servers is achieved by
installing instances in multiple VMs. To have DR for SAP application servers, Azure Site
Recovery can be set up for each application server VM. For shared storage (transport
file system, interface data file system) that is attached to the application servers, follow
the appropriate DR practice based on the type of shared storage.

SAP Database Servers


For databases running an SAP workload, use the native DBMS replication technology to
configure DR. Using Azure Site Recovery for databases isn't recommended, as it doesn't
guarantee DB consistency and has data churn limitations. The replication technology for
each database is different, so follow the respective database guidelines. The following table
shows the list of databases used for SAP workloads and the corresponding DR
recommendations.

Database | DR recommendation
SAP HANA | HANA System Replication (HSR)
Oracle | Oracle Data Guard (FarSync)
IBM DB2 | High availability disaster recovery (HADR)
Microsoft SQL | Microsoft SQL Always On
SAP ASE | ASE HADR Always On
SAP MaxDB | Standby Database

For a cost-optimized solution, you can even use the backup and restore option as the
database DR strategy.

Back up and restore


Backup and restore is another solution you can use to achieve disaster recovery for your
SAP workloads if the business RTO and RPO are noncritical. You can use Azure Backup, a
cloud-based backup service, to take copies of different components of your SAP workload
like virtual machines, managed disks, and supported databases. To learn more about the
general support settings and limitations for Azure Backup scenarios and deployments,
see the Azure Backup support matrix.
Services | Component | Azure Backup support
Compute | Azure VMs | Supported
Storage | Azure managed disks, including shared disks | Supported
Storage | Azure file share - SMB (standard or premium) | Supported
Storage | Azure blobs | Supported
Storage | Azure file share - NFS (standard or premium) | Not supported
Storage | Azure NetApp Files | Not supported
Database | SAP HANA database in Azure VMs | Supported
Database | SQL Server in Azure VMs | Supported
Database | Oracle | Supported*
Database | IBM DB2, SAP ASE | Not supported

7 Note

*Azure Backup supports Oracle databases by using Azure VM backup for
database-consistent snapshots.

Azure Backup doesn't support all Azure storage types and databases that are used for
SAP workloads.

Azure Backup stores backups in a Recovery Services vault, which replicates your data based
on the chosen replication type (LRS, ZRS, or GRS). For geo-redundant storage (GRS),
your backup data is replicated to the paired secondary region. With the cross-region restore
feature enabled, you can restore data of the supported management types in the
secondary region.

Backup and restore is a more traditional, cost-optimized approach, but it comes with the
trade-off of a higher RTO, because you need to restore all the applications from backups if
there's a failover to the DR region. So you need to analyze your business needs and
design a DR strategy accordingly.

References
Tutorial: Set up disaster recovery for Azure VMs
Azure Backup service.
Considerations for Azure Virtual
Machines DBMS deployment for SAP
workload
Article • 02/10/2023

This guide is part of the documentation on how to implement and deploy SAP software
on Microsoft Azure. Before you read this guide, read the Planning and implementation
guide and articles the planning guide points you to. This document covers the generic
deployment aspects of SAP-related DBMS systems on Microsoft Azure virtual machines
(VMs) by using the Azure infrastructure as a service (IaaS) capabilities.

The paper complements the SAP installation documentation and SAP Notes, which
represent the primary resources for installations and deployments of SAP software on
given platforms.

This document introduces considerations for running SAP-related DBMS systems in Azure
VMs. There are few references to specific DBMS systems in this document.
Instead, the specific DBMS systems are handled in other database system-specific
documents.

Resources
There are other articles available on SAP workload on Azure. Start with SAP workload on
Azure: Get started and then choose your area of interest.

The following SAP Notes are related to SAP on Azure in regard to the area covered in
this document.

Note number | Title
1928533 | SAP applications on Azure: Supported products and Azure VM types
2015553 | SAP on Microsoft Azure: Support prerequisites
1999351 | Troubleshooting enhanced Azure monitoring for SAP
2178632 | Key monitoring metrics for SAP on Microsoft Azure
1409604 | Virtualization on Windows: Enhanced monitoring
2191498 | SAP on Linux with Azure: Enhanced monitoring
2039619 | SAP applications on Microsoft Azure using the Oracle database: Supported products and versions
2233094 | DB6: SAP applications on Azure using IBM DB2 for Linux, UNIX, and Windows: Additional information
2243692 | Linux on Microsoft Azure (IaaS) VM: SAP license issues
2578899 | SUSE Linux Enterprise Server 15: Installation Note
1984787 | SUSE LINUX Enterprise Server 12: Installation notes
2772999 | Red Hat Enterprise Linux 8.x: Installation and Configuration
2002167 | Red Hat Enterprise Linux 7.x: Installation and upgrade
2069760 | Oracle Linux 7.x SAP installation and upgrade
1597355 | Swap-space recommendation for Linux
2799900 | Central Technical Note for Oracle Database 19c
2171857 | Oracle Database 12c: File system support on Linux
1114181 | Oracle Database 11g: File system support on Linux
2969063 | Microcode Validation Failed in HCMT on Azure
3246210 | Azure - HCMT Fails During Some Disk Performance Tests

For information on all the SAP Notes for Linux, see the SAP community wiki .

You need a working knowledge of Microsoft Azure architecture and how Microsoft
Azure virtual machines are deployed and operated. For more information, see Azure
documentation.

In general, the Windows, Linux, and DBMS installation and configuration are essentially
the same as any virtual machine or bare metal machine you install on-premises. There
are some architecture and system management implementation decisions that are
different when you use Azure IaaS. This document explains the specific architectural and
system management differences to be prepared for when you use Azure IaaS.

Storage structure of a VM for RDBMS deployments
To follow this chapter, read and understand the information presented in:

Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Storage types for SAP workload
What SAP software is supported for Azure deployments
SAP workload on Azure virtual machine supported scenarios

For Azure block storage, the usage of Azure managed disks is mandatory. For details
about Azure managed disks, read the article Introduction to managed disks for Azure
VMs.

In a basic configuration, we usually recommend a deployment structure where the
operating system, DBMS, and eventual SAP binaries are separate from the database files.
We recommend having separate Azure disks for:

The operating system (base VHD or OS VHD)
Database management system executables
SAP executables like /usr/sap
DBMS data files
DBMS redo log files

A configuration that separates these components into five different volumes can result
in higher resiliency since excessive usage on one volume doesn't necessarily interfere
with the usage of other volumes as long as VM storage quota and limits aren't
exceeded.

The DBMS data and transaction/redo log files are stored in Azure supported block
storage or Azure NetApp Files. Azure Files or Azure Premium Files isn't supported as
storage for DBMS data and/or redo log files with SAP workloads. They're stored in
separate disks and attached as logical disks to the original Azure operating system
image VM. For Linux deployments, different recommendations are documented. Read
the article Azure Storage types for SAP workload for the capabilities and the support of
the different storage types for your scenario. Specifically for SAP HANA start with the
article SAP HANA Azure virtual machine storage configurations.

When you plan your disk layout, find the best balance between these items:

The number of data files.
The number of disks that contain the files.
The IOPS quotas of a single disk or NFS share.
The data throughput per disk or NFS share.
The number of additional data disks possible per VM size.
The overall storage or network throughput a VM can provide.
The latency different Azure Storage types can provide.
VM storage IOPS and throughput quotas.
VM network quota in case you're using NFS - traffic to NFS shares counts
against the VM's network quota and NOT the storage quota.
VM SLAs.

Azure enforces an IOPS quota per data disk or NFS share. These quotas are different for
disks hosted on the different Azure block storage solutions or shares. I/O latency also
differs between these storage types.

Each of the different VM types has a limited number of data disks that you can attach.
Another restriction is that only certain VM types can use, for example, premium storage.
Typically, you decide to use a certain VM type based on CPU and memory requirements.
You also need to consider the IOPS, latency, and disk throughput requirements that
usually are scaled with the number of disks or the type of premium storage disks v1. The
number of IOPS and the throughput to be achieved by each disk might dictate disk size,
especially with premium storage v1. With premium storage v2 or Ultra disk, you can
select provisioned IOPS and throughput independent of the disk capacity.
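As an illustration of that decoupled sizing with premium storage v2, the following Azure
CLI sketch creates a data disk where capacity, IOPS, and throughput are chosen
independently (resource names, zone, and values are placeholders):

```bash
# Create a Premium SSD v2 data disk with independently provisioned performance.
az disk create \
    --resource-group rg-sap-db \
    --name hana-data-disk-1 \
    --size-gb 512 \
    --sku PremiumV2_LRS \
    --zone 1 \
    --disk-iops-read-write 8000 \
    --disk-mbps-read-write 300
```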

7 Note

For DBMS deployments, we highly recommend Azure premium storage (v1 and v2),
Ultra disk or Azure NetApp Files based NFS shares for any data, transaction log, or
redo files. It doesn't matter whether you want to deploy production or
nonproduction systems. Latency of Azure standard HDD or SSD isn't acceptable for
any type of production system.

7 Note

To maximize Azure's single VM SLA , all disks that are attached must be Azure
premium storage (v1 or v2) or Azure Ultra disk type, which includes the base VHD
(Azure premium storage).

7 Note

Hosting main database files, such as data and log files, of SAP databases on storage
hardware that's located in co-located third-party data centers adjacent to Azure
data centers isn't supported. Storage provided through software appliances hosted
in Azure VMs, are also not supported for this use case. For SAP DBMS workloads,
only storage that's represented as native Azure service is supported for the data
and transaction log files of SAP databases in general. Different DBMS might
support different Azure storage types. For more details check the article Azure
Storage types for SAP workload

The placement of the database files and the log and redo files, and the type of Azure
Storage you use, is defined by IOPS, latency, and throughput requirements. Specifically,
for Azure premium storage v1, to achieve enough IOPS, you might be forced to use
multiple disks or use a larger premium storage disk. If you use multiple disks, build a
software stripe across the disks that contain the data files or the log and redo files. In
such cases, the IOPS and the disk throughput SLAs of the underlying premium storage
disks or the maximum achievable IOPS of standard storage disks are accumulative for
the resulting stripe set.

If your IOPS requirement exceeds what a single VHD can provide, balance the IOPS that
is needed for the database files across a number of VHDs. The easiest way to distribute
the IOPS load across disks is to build a software stripe over the different disks. Then
place a number of data files of the SAP DBMS on the LUNs carved out of the software
stripe. The number of disks in the stripe is driven by IOPS demands, disk throughput
demands, and volume demands.

Windows

We recommend that you use Windows Storage Spaces to create stripe sets across
multiple Azure VHDs. Use at least Windows Server 2012 R2 or Windows Server 2016.

Linux

Only MDADM and Logical Volume Manager (LVM) are supported to build a software
RAID on Linux. For more information, see:

Configure software RAID on Linux using MDADM
Configure LVM on a Linux VM in Azure

For Azure premium storage v2 and Ultra disk, striping may not be necessary since you
can define IOPS and disk throughput independent of the size of the disk.

Note

Because Azure Storage keeps three images of the VHDs, it doesn't make sense to
configure redundancy when you stripe. You only need to configure striping so
that the I/Os are distributed over the different VHDs.

Managed or nonmanaged disks


An Azure storage account is an administrative construct and also subject to limitations.
For information on capabilities and limitations, see Azure Storage scalability and
performance targets. For standard storage, remember that there's a limit on the IOPS
per storage account. See the row that contains Total Request Rate in the article Azure
Storage scalability and performance targets. There's also an initial limit on the number
of storage accounts per Azure subscription. In 2017, Azure introduced the concept
of Azure Managed Disks, which relieves you of taking care of any storage account
administration. Using Azure managed disks is the default for deploying SAP workload in
Azure.

Important

Given the advantages of Azure Managed Disks, it is mandatory that you use Azure
Managed Disks for your DBMS deployments and SAP deployments in general.

If you happen to have SAP workload that isn't yet using managed disks, to convert from
unmanaged to managed disks, see:

Convert a Windows virtual machine from unmanaged disks to managed disks.


Convert a Linux virtual machine from unmanaged disks to managed disks.

Caching for VMs and data disks


When you mount disks to VMs, you can choose whether the I/O traffic between the VM
and those disks located in Azure storage is cached.

The following recommendations assume these I/O characteristics for standard DBMS:

It's mostly a read workload against data files of a database. These reads are
performance critical for the DBMS system.
Writing against the data files occurs in bursts based on checkpoints or a constant
stream. Averaged over a day, there are fewer writes than reads. In contrast to reads
from data files, these writes are asynchronous and don't hold up any user
transactions.
There are hardly any reads from the transaction log or redo files. Exceptions are
large I/Os when you perform transaction log backups.
The main load against transaction or redo log files is writes. Depending on the
nature of the workload, you can have I/Os as small as 4 KB or, in other cases, I/O
sizes of 1 MB or more.
All writes must be persisted on disk in a reliable fashion.

For Azure premium storage v1, the following caching options exist:

None
Read
Read/write
None + Write Accelerator, which is only for Azure M-Series VMs
Read + Write Accelerator, which is only for Azure M-Series VMs

For premium storage v1, we recommend that you use Read caching for data files of the
SAP database and choose No caching for the disks of log file(s).

For M-Series deployments, we recommend that you use Azure Write Accelerator only
for the disks of your log files. For details, restrictions, and deployment of Azure Write
Accelerator, see Enable Write Accelerator.

For premium storage v2, Ultra disk and Azure NetApp Files, no caching options are
offered.
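
As a compact reference for these recommendations, here's a small Python sketch that
maps volume roles to the premium storage v1 cache settings described above; treat it as
a summary aid, not an Azure API.

```python
# Summary of the premium storage v1 caching recommendations above.
# The M-series entry reflects the Write Accelerator option that is
# only available on Azure M-Series VMs.
RECOMMENDED_CACHING = {
    "data files": "Read",
    "transaction/redo log": "None",
    "transaction/redo log on M-series": "None + Write Accelerator",
}

for volume, cache in RECOMMENDED_CACHING.items():
    print(f"{volume}: {cache}")
```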

Azure nonpersistent disks


Azure VMs offer nonpersistent disks after a VM is deployed. If a VM reboots, all content
on those drives can be wiped out. It's a given that data files and log and redo files of
databases should under no circumstances be located on those nonpersisted drives.
There might be exceptions for some databases, where these nonpersisted drives could
be suitable for tempdb and temp tablespaces.

For more information, see Understand the temporary drive on Windows VMs in Azure.

Windows

Drive D in an Azure VM is a nonpersisted drive, which is backed by some local disks
on the Azure compute node. Because it's nonpersisted, any changes made to the
content on drive D are lost when the VM is rebooted. Changes include files that
were stored, directories that were created, and applications that were installed.

Linux

Linux Azure VMs automatically mount a drive at /mnt/resource that's a nonpersisted
drive backed by local disks on the Azure compute node. Because it's nonpersisted,
any changes made to content in /mnt/resource are lost when the VM is rebooted.
Changes include files that were stored, directories that were created, and
applications that were installed.

Microsoft Azure Storage resiliency


Microsoft Azure Storage stores the base VHD, with OS and attached disks or blobs, on
at least three separate storage nodes. This type of storage is called locally redundant
storage (LRS). LRS is the default for all types of storage in Azure.

There are other redundancy methods. For more information, see Azure Storage
replication.

Note

Azure premium storage v1 and v2, Ultra disk, and Azure NetApp Files are the
recommended types of storage for DBMS VMs and disks that store database and
log and redo files. With the exception of premium storage v1, the only available
redundancy method for these storage types is LRS. As a result, you need to
configure database methods to enable database data replication into another
Azure region or availability zone. Database methods include SQL Server Always On,
Oracle Data Guard, and HANA System Replication.

VM node resiliency
Azure offers several different SLAs for VMs. For more information, see the most recent
release of SLA for Virtual Machines . Because the DBMS layer is critical to availability in
an SAP system, you need to understand availability sets, Availability Zones, and
maintenance events. For more information on these concepts, see Manage the
availability of Windows virtual machines in Azure and Manage the availability of Linux
virtual machines in Azure.

The minimum recommendation for production DBMS scenarios with an SAP workload is
to:

Deploy two VMs in a separate availability set in the same Azure region.
Run these two VMs in the same Azure virtual network, with NICs attached to the
same subnets.
Use database methods to keep a hot standby with the second VM. Methods can
be SQL Server Always On, Oracle Data Guard, or HANA System Replication.

You also can deploy a third VM in another Azure region and use the same database
methods to supply an asynchronous replica in another Azure region.

For information on how to set up Azure availability sets, see this tutorial.

Azure network considerations


In large-scale SAP deployments, use the blueprint of Azure Virtual Datacenter. Use it for
your virtual network configuration and permissions and role assignments to different
parts of your organization.

These best practices are the result of thousands of customer deployments:

The virtual networks the SAP application is deployed into don't have access to the
internet.
The database VMs run in the same virtual network as the application layer,
separated in a different subnet from the SAP application layer.
The VMs within the virtual network have a static allocation of the private IP
address. For more information, see IP address types and allocation methods in
Azure.
Routing restrictions to and from the DBMS VMs aren't set with firewalls installed
on the local DBMS VMs. Instead, traffic routing is defined with network security
groups (NSGs).
To separate and isolate traffic to the DBMS VM, assign different NICs to the VM.
Every NIC gets a different IP address, and every NIC is assigned to a different
virtual network subnet. Every subnet has different NSG rules. The isolation or
separation of network traffic is a measure for routing. It's not used to set quotas
for network throughput.

Note

Assigning static IP addresses through Azure means assigning them to individual
virtual NICs. Don't assign static IP addresses within the guest OS to a virtual NIC.
Some Azure services like Azure Backup rely on the fact that at least the primary
virtual NIC in the guest OS is set to DHCP and not to static IP addresses. For more
information, see Troubleshoot Azure virtual machine backup. To assign multiple
static IP addresses to a VM, assign multiple virtual NICs to a VM.
Warning

Configuring network virtual appliances in the communication path between the
SAP application and the DBMS layer of a SAP NetWeaver-, Hybris-, or S/4HANA-
based SAP system isn't supported. This restriction is for functionality and
performance reasons. The communication path between the SAP application layer
and the DBMS layer must be a direct one. The restriction doesn't include application
security group (ASG) and NSG rules if those ASG and NSG rules allow a direct
communication path. This also includes traffic to NFS shares that host DBMS data
and redo log files.

Other scenarios where network virtual appliances aren't supported are in:

Communication paths between Azure VMs that represent Linux Pacemaker
cluster nodes and SBD devices as described in High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP
Applications.
Communication paths between Azure VMs and Windows Server Scale-Out File
Server (SOFS) set up as described in Cluster an SAP ASCS/SCS instance on a
Windows failover cluster by using a file share in Azure.

Network virtual appliances in communication paths can easily double the network
latency between two communication partners. They also can restrict throughput in
critical paths between the SAP application layer and the DBMS layer. In some
customer scenarios, network virtual appliances can cause Pacemaker Linux clusters
to fail. These are cases where the Linux Pacemaker cluster nodes communicate
with their SBD device through a network virtual appliance.

Important

Another design that's not supported is the segregation of the SAP application layer
and the DBMS layer into different Azure virtual networks that aren't peered with
each other. We recommend that you segregate the SAP application layer and
DBMS layer by using subnets within an Azure virtual network instead of by using
different Azure virtual networks.

If you decide not to follow the recommendation and instead segregate the two
layers into different virtual networks, the two virtual networks must be peered.

Be aware that network traffic between two peered Azure virtual networks is subject
to transfer costs. Huge data volumes, often many terabytes, are exchanged
between the SAP application layer and the DBMS layer. You can accumulate
substantial costs if the SAP application layer and DBMS layer are segregated
between two peered Azure virtual networks.

Use two VMs for your production DBMS deployment within an Azure availability set or
between two Azure Availability Zones. Also use separate routing for the SAP application
layer and the management and operations traffic to the two DBMS VMs. See the
following image:

Use Azure Load Balancer to redirect traffic


The use of private virtual IP addresses in functionalities like SQL Server Always On
or HANA System Replication requires the configuration of an Azure load balancer. The
load balancer uses probe ports to determine the active DBMS node and route the traffic
exclusively to that active database node.

If there's a failover of the database node, there's no need for the SAP application to
reconfigure. Instead, the most common SAP application architectures reconnect against
the private virtual IP address. Meanwhile, the load balancer reacts to the node failover
by redirecting the traffic against the private virtual IP address to the second node.

Azure offers two different load balancer SKUs: a basic SKU and a standard SKU. Based
on the advantages in setup and functionality, you should use the Standard SKU of the
Azure load balancer. One of the major advantages of the Standard version of the load
balancer is that the data traffic isn't routed through the load balancer itself.

An example of how you can configure an internal load balancer can be found in the article
Tutorial: Configure a SQL Server availability group on Azure Virtual Machines manually.
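
To illustrate the probe mechanism, here's a minimal Python sketch of a TCP listener on a
probe port. In real deployments, the probe endpoint is provided by the cluster stack or
the DBMS itself (for example, the Pacemaker azure-lb resource agent), not by a
hand-written script, and the port number below is a placeholder that must match the
load balancer's probe configuration.

```python
# Minimal illustration of a load balancer probe responder: the load
# balancer periodically opens a TCP connection to the probe port; a
# successful connect marks this node as the active/healthy one.
import socket

PROBE_PORT = 62500  # placeholder; must match the configured probe port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PROBE_PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()  # the connect itself is the health signal
        conn.close()
```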

Note

There are differences in behavior of the basic and standard SKU related to the
access of public IP addresses. How to work around the restrictions of the
Standard SKU to access public IP addresses is described in the document Public
endpoint connectivity for Virtual Machines using Azure Standard Load Balancer
in SAP high-availability scenarios.

Deployment of host monitoring


For production use of SAP applications in Azure virtual machines, SAP requires the
ability to get host monitoring data from the physical hosts that run the Azure virtual
machines. A specific SAP Host Agent patch level is required that enables this capability
in SAPOSCOL and SAP Host Agent. The exact patch level is documented in SAP Note
1409604 .

For more information on the deployment of components that deliver host data to
SAPOSCOL and SAP Host Agent and the life-cycle management of those components,
start with the article Implement the Azure VM extension for SAP solutions.

Next steps
For more information on a particular DBMS, see:

SQL Server Azure Virtual Machines DBMS deployment for SAP workload

Oracle Azure Virtual Machines DBMS deployment for SAP workload

IBM DB2 Azure Virtual Machines DBMS deployment for SAP workload

High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
with Pacemaker

High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server

SAP ASE Azure Virtual Machines DBMS deployment for SAP workload

SAP maxDB, Live Cache, and Content Server deployment on Azure

SAP HANA on Azure operations guide

SAP HANA Azure virtual machine storage configurations

SAP HANA high availability for Azure virtual machines

Backup guide for SAP HANA on Azure virtual machines

SAP BW NLS implementation guide with SAP IQ on Azure


Installation of SAP HANA on Azure
virtual machines
Article • 05/03/2023

Introduction
This document points you to the right resources for deploying HANA on
Azure virtual machines, including documents that you need to check before installing
SAP HANA on Azure VMs. The aim is to ensure that you perform the right steps
to achieve a supported configuration of SAP HANA on Azure.

Note

This guide describes deployments of SAP HANA into Azure VMs. For information
on how to deploy SAP HANA on HANA large instances, see How to install and
configure SAP HANA (Large Instances) on Azure.

Prerequisites
This guide also assumes that you're familiar with:

SAP HANA and SAP NetWeaver and how to install them on-premises.
How to install and operate SAP HANA and SAP application instances on Azure.
The concepts and procedures documented in:
Planning for SAP deployment on Azure, which includes Azure Virtual Network
planning and Azure Storage usage. See SAP NetWeaver on Azure Virtual
Machines - Planning and implementation guide
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual
Machines deployment for SAP
High availability concepts for SAP HANA as documented in SAP HANA high
availability for Azure virtual machines

Step-by-step before deploying


This section lists the different steps that you need to perform before starting
with the installation of SAP HANA in an Azure virtual machine. The steps are enumerated
and should be followed in the order listed:
1. Although technically possible, some deployment scenarios will not be supported
on Azure. Therefore, you should check the document SAP workload on Azure
virtual machine supported scenarios for the scenario you have in mind with your
SAP HANA deployment. If the scenario is not listed, you need to assume that it has
not been tested and, as a result, is not supported.
2. Assuming that you have a rough idea of the memory requirement for your SAP
HANA deployment, you need to find a suitable Azure VM. Not all the VMs that are
certified for SAP NetWeaver, as documented in SAP support note #1928533 , are
SAP HANA certified. The source of truth for SAP HANA certified Azure VMs is the
website SAP HANA hardware directory . The units starting with S are HANA Large
Instances units and not Azure VMs.
3. Different Azure VM types have different minimum operating system releases for
SUSE Linux or Red Hat Linux. On the website SAP HANA hardware directory , you
need to click on an entry in the list of SAP HANA certified units to get detailed
data of this unit. Besides the supported HANA workload, the OS releases that are
supported with those units for SAP HANA are listed.
4. Regarding operating system releases, you need to consider certain minimum kernel
releases. These minimum releases are documented in these SAP support notes:

SAP support note #2814271 SAP HANA Backup fails on Azure with Checksum
Error
SAP support note #2753418 Potential Performance Degradation Due to Timer
Fallback
SAP support note #2791572 Performance Degradation Because of Missing
VDSO Support For Hyper-V in Azure

5. Based on the OS release that is supported for the virtual machine type of choice,
you need to check whether your desired SAP HANA release is supported with that
operating system release. Read SAP support note #2235581 for a support matrix
of SAP HANA releases with the different Operating System releases.
6. When you have found a valid combination of Azure VM type, operating system
release and SAP HANA release, you will need to check the SAP Product Availability
Matrix. In the SAP Availability Matrix, you can verify whether the SAP product you
want to run against your SAP HANA database is supported.
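
Because these checks are lookups against published matrices, they lend themselves to
simple scripting. The following Python sketch only illustrates the shape of such a
validation; the entries are placeholders, and the SAP HANA hardware directory and the
SAP notes above remain the source of truth.

```python
# Illustrative pre-deployment check: is the planned VM/OS combination
# in your (manually maintained) extract of the certified matrix?
# The data below is a placeholder, NOT the real certification list.
CERTIFIED_VMS = {
    "M128s": {"SLES 15 SP4", "RHEL 8.6"},  # placeholder entries
}

def check_deployment(vm_sku: str, os_release: str) -> list[str]:
    """Return a list of problems found for the planned combination."""
    problems = []
    if vm_sku not in CERTIFIED_VMS:
        problems.append(f"{vm_sku} is not in the certified VM list")
    elif os_release not in CERTIFIED_VMS[vm_sku]:
        problems.append(f"{os_release} is not listed for {vm_sku}")
    return problems

print(check_deployment("M128s", "SLES 15 SP4"))  # -> []
```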

Step-by-step VM deployment and guest OS considerations
In this phase, you go through the steps of deploying the VM(s) to install HANA and, after
the installation, optimize the chosen operating system.
1. Choose the base image from the Azure gallery. If you want to build your own
operating system image for SAP HANA, you need to know all the different
packages that are necessary for a successful SAP HANA installation. Otherwise, we
recommend using the SUSE and Red Hat images for SAP or SAP HANA out of
the gallery. These images include the packages necessary for a successful HANA
installation. Based on your support contract with the operating system provider,
you need to choose an image where you bring your own license, or choose an OS
image that includes support.

2. If you choose a guest OS image that requires you to bring your own license, you
will need to register this OS image with your subscription to enable you to
download and apply the latest patches. This step is going to require public internet
access, unless you set up your private instance of, for example, an SMT server in
Azure.

3. Decide the network configuration of the VM. You can get more information in the
document SAP HANA infrastructure configurations and operations on Azure. Keep
in mind that there are no network throughput quotas you can assign to virtual
network cards in Azure. As a result, the only purpose of directing traffic through
different vNICs is based on security considerations. We trust you to find a
supportable compromise between complexity of traffic routing through multiple
vNICs and the requirements enforced by security aspects.

4. Apply the latest patches to the operating system once the VM is deployed and
registered. Register either with your own subscription; or, if you chose an image
that includes operating system support, the VM should already have access to the
patches.

5. Apply the tunings necessary for SAP HANA. These tunings are listed in the
following SAP support notes:

SAP support note #2694118 - Red Hat Enterprise Linux HA Add-On on Azure
SAP support note #1984787 - SUSE LINUX Enterprise Server 12: Installation
notes
SAP support note #2578899 - SUSE Linux Enterprise Server 15: Installation
Note
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and
Upgrade
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for
RHEL 7
SAP support note #2772999 - Red Hat Enterprise Linux 8.x: Installation and
Configuration
SAP support note #2777782 - SAP HANA DB: Recommended OS Settings for
RHEL 8
SAP support note #2455582 - Linux: Running SAP applications compiled with
GCC 6.x
SAP support note #2382421 - Optimizing the Network Configuration on
HANA- and OS-Level

6. Select the Azure storage type and storage layout for the SAP HANA installation.
You are going to use either attached Azure disks or native Azure NFS shares. The
Azure storage types that are supported and the combinations of different Azure
storage types that can be used are documented in SAP HANA Azure virtual
machine storage configurations. Take the configurations documented as a starting
point. For non-production systems, you might be able to configure lower
throughput or IOPS. For production systems, you might need to increase the
throughput and IOPS.

7. Make sure you have configured Azure Write Accelerator for your volumes that
contain the DBMS transaction logs or redo logs when using M-Series or Mv2-
Series VMs. Be aware of the limitations for Write Accelerator as documented.

8. Check whether Azure Accelerated Networking is enabled on the VMs deployed.

Note

Not all the commands in the different saptune profiles or as described in the notes
might run successfully on Azure. Commands that would manipulate the power
mode of VMs usually return an error since the power mode of the underlying
Azure host hardware cannot be manipulated.
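
For step 8, one way to check whether Accelerated Networking is enabled is through the
Azure SDK. A sketch, assuming the azure-identity and azure-mgmt-network Python
packages are installed; the subscription, resource group, and NIC names are
placeholders.

```python
# Sketch: verify Accelerated Networking on a deployed NIC via the
# Azure Python SDK. All resource identifiers are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
nic = client.network_interfaces.get("<resource-group>", "<hana-vm-nic>")
print("Accelerated networking enabled:", nic.enable_accelerated_networking)
```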

Step-by-step preparations specific to Azure virtual machines
One of the Azure-specific preparations is the installation of an Azure VM extension that
delivers monitoring data for the SAP Host Agent. The details about the installation of
this monitoring extension are documented in:

SAP Note 2191498 discusses SAP enhanced monitoring with Linux VMs on Azure
SAP Note 1102124 discusses information about SAPOSCOL on Linux
SAP Note 2178632 discusses key monitoring metrics for SAP on Microsoft Azure
Azure Virtual Machines deployment for SAP NetWeaver

SAP HANA installation


With the Azure virtual machines deployed and the operating systems registered and
configured, you can install SAP HANA according to the SAP install instructions. A good
starting point is this SAP website: HANA resources

For SAP HANA scale-out configurations using direct attached disks of Azure Premium
Storage or Ultra disk, read the specifics in the document SAP HANA infrastructure
configurations and operations on Azure

Additional resources for SAP HANA backup


For information on how to back up SAP HANA databases on Azure VMs, see:

Backup guide for SAP HANA on Azure Virtual Machines


SAP HANA Azure Backup on file level

Next steps
Read the documentation:

SAP HANA infrastructure configurations and operations on Azure


SAP HANA Azure virtual machine storage configurations
SAP HANA infrastructure configurations
and operations on Azure
Article • 11/09/2023

This document provides guidance for configuring Azure infrastructure and operating
SAP HANA systems that are deployed on Azure native virtual machines (VMs). The
document also includes configuration information for SAP HANA scale-out for the
M128s VM SKU. This document isn't intended to replace the standard SAP
documentation, which includes the following content:

SAP administration guide
SAP installation guides
SAP notes

Prerequisites
To use this guide, you need basic knowledge of the following Azure components:

Azure virtual machines
Azure networking and virtual networks
Azure Storage

To learn more about SAP NetWeaver and other SAP components on Azure, see the SAP
on Azure section of the Azure documentation.

Basic setup considerations


The following sections describe basic setup considerations for deploying SAP HANA
systems on Azure VMs.

Connect into Azure virtual machines


As documented in the Azure virtual machines planning guide, there are two basic
methods for connecting into Azure VMs:

Connect through the internet and public endpoints on a Jump VM or on the VM
that is running SAP HANA.
Connect through a VPN or Azure ExpressRoute.

Site-to-site connectivity via VPN or ExpressRoute is necessary for production scenarios.
This type of connection is also needed for non-production scenarios that feed into
production scenarios where SAP software is being used. The following image shows an
example of cross-site connectivity:

Choose Azure VM types


SAP lists the Azure VM types that you can use for production scenarios. For
non-production scenarios, a wider variety of native Azure VM types is available.

Note

For non-production scenarios, use the VM types that are listed in the SAP note
#1928533 . For the usage of Azure VMs for production scenarios, check for SAP
HANA certified VMs in the SAP published Certified IaaS Platforms list .

Deploy the VMs in Azure by using:

The Azure portal.
Azure PowerShell cmdlets.
The Azure CLI.

You also can deploy a completely installed SAP HANA platform on the Azure VM services
through the SAP Cloud Platform. The installation process is described in Deploy SAP
S/4HANA or BW/4HANA on Azure.
Important

In order to use M208xx_v2 VMs, you need to be careful selecting your Linux image.
For more information, see Memory optimized virtual machine sizes.

Storage configuration for SAP HANA


For storage configurations and storage types to be used with SAP HANA in Azure, read
the document SAP HANA Azure virtual machine storage configurations

Set up Azure virtual networks


When you have site-to-site connectivity into Azure via VPN or ExpressRoute, you must
have at least one Azure virtual network that is connected through a Virtual Gateway to
the VPN or ExpressRoute circuit. In simple deployments, the Virtual Gateway can be
deployed in a subnet of the Azure virtual network (VNet) that hosts the SAP HANA
instances as well. To install SAP HANA, you create two more subnets within the Azure
virtual network. One subnet hosts the VMs to run the SAP HANA instances. The other
subnet runs Jumpbox or Management VMs to host SAP HANA Studio, other
management software, or your application software.

Important

For functionality reasons, but more importantly for performance reasons, it isn't
supported to configure Azure Network Virtual Appliances in the communication
path between the SAP application and the DBMS layer of a SAP NetWeaver, Hybris,
or S/4HANA based SAP system. The communication between the SAP application
layer and the DBMS layer needs to be a direct one. The restriction does not include
Azure ASG and NSG rules as long as those ASG and NSG rules allow a direct
communication. Further scenarios where NVAs are not supported are in
communication paths between Azure VMs that represent Linux Pacemaker cluster
nodes and SBD devices as described in High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise Server for SAP applications. Or in
communication paths between Azure VMs and Windows Server SOFS set up as
described in Cluster an SAP ASCS/SCS instance on a Windows failover cluster by
using a file share in Azure. NVAs in communication paths can easily double the
network latency between two communication partners and can restrict throughput
in critical paths between the SAP application layer and the DBMS layer. In some
scenarios observed with customers, NVAs can cause Pacemaker Linux clusters to fail
in cases where the Linux Pacemaker cluster nodes need to communicate with their
SBD device through an NVA.

Important

Another design that is NOT supported is the segregation of the SAP application
layer and the DBMS layer into different Azure virtual networks that are not peered
with each other. It is recommended to segregate the SAP application layer and
DBMS layer using subnets within an Azure virtual network instead of using different
Azure virtual networks. If you decide not to follow the recommendation, and
instead segregate the two layers into different virtual networks, the two virtual
networks need to be peered. Be aware that network traffic between two peered
Azure virtual networks is subject to transfer costs. With the huge data volume, often
many terabytes, exchanged between the SAP application layer and DBMS layer,
substantial costs can be accumulated if the SAP application layer and DBMS layer
are segregated between two peered Azure virtual networks.

If you deployed Jumpbox or management VMs in a separate subnet, you can define
multiple virtual network interface cards (vNICs) for the HANA VM, with each vNIC
assigned to a different subnet. With the ability to have multiple vNICs, you can set up
network traffic separation, if necessary. For example, client traffic can be routed through
the primary vNIC and admin traffic through a second vNIC. You also can assign static
private IP addresses to both virtual NICs.

Note

You should assign static IP addresses through Azure means to individual vNICs. You
should not assign static IP addresses within the guest OS to a vNIC. Some Azure
services like Azure Backup Service rely on the fact that at least the primary vNIC is
set to DHCP and not to static IP addresses. See also the document Troubleshoot
Azure virtual machine backup. If you need to assign multiple static IP addresses to
a VM, you need to assign multiple vNICs to a VM.

However, for deployments that are enduring, you need to create a virtual datacenter
network architecture in Azure. This architecture recommends the separation of the
Azure VNet Gateway that connects to on-premises into a separate Azure VNet. This
separate VNet should host all the traffic that leaves either to on-premises or to the
internet. This approach allows you to deploy software for auditing and logging traffic
that enters the virtual datacenter in Azure in this separate hub VNet. So you have one
VNet that hosts all the software and configurations that relate to in- and outgoing traffic
to your Azure deployment.

The articles Azure Virtual Datacenter: A Network Perspective and Azure Virtual
Datacenter and the Enterprise Control Plane give more information on the virtual
datacenter approach and related Azure VNet design.

Note

Traffic that flows between a hub VNet and a spoke VNet using Azure VNet peering is
subject to additional costs. Based on those costs, you might need to consider
making compromises between running a strict hub and spoke network design and
running multiple Azure ExpressRoute Gateways that you connect to 'spokes' in
order to bypass VNet peering. However, Azure ExpressRoute Gateways introduce
additional costs as well. You also may encounter additional costs for third-party
software you use for network traffic logging, auditing, and monitoring. Depending
on the costs for data exchange through VNet peering on the one side, and the costs
created by additional Azure ExpressRoute Gateways and additional software
licenses on the other, you may decide on micro-segmentation within one VNet by
using subnets as the isolation unit instead of VNets.

For an overview of the different methods for assigning IP addresses, see IP address
types and allocation methods in Azure.

For VMs running SAP HANA, you should work with assigned static IP addresses. The
reason is that some configuration attributes for HANA reference IP addresses.

Azure Network Security Groups (NSGs) are used to direct traffic that's routed to the SAP
HANA instance or the jumpbox. The NSGs and, where needed, Application Security
Groups are associated with the SAP HANA subnet and the Management subnet.

To deploy SAP HANA in Azure without a site-to-site connection, you still want to shield
the SAP HANA instance from the public internet and hide it behind a forward proxy. In
this basic scenario, the deployment relies on Azure built-in DNS services to resolve
hostnames. In a more complex deployment where public-facing IP addresses are used,
Azure built-in DNS services are especially important. Use Azure NSGs and Azure NVAs
to control and monitor the routing from the internet into your Azure VNet architecture in
Azure. The following image shows a rough schema for deploying SAP HANA without a
site-to-site connection in a hub and spoke VNet architecture:
Another description of how to use Azure NVAs to control and monitor access from the
internet without the hub and spoke VNet architecture can be found in the article Deploy
highly available network virtual appliances.

Configuring Azure infrastructure for SAP HANA scale-out
In order to find out the Azure VM types that are certified for either OLAP scale-out or
S/4HANA scale-out, check the SAP HANA hardware directory . A checkmark in the
column 'Clustering' indicates scale-out support. Application type indicates whether
OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in
scale-out, review the entry for a specific VM SKU listed in the SAP HANA hardware
directory.

For the minimum OS releases for deploying scale-out configurations in Azure VMs, check
the details of the entries of the particular VM SKU listed in the SAP HANA hardware
directory. In an n-node OLAP scale-out configuration, one node functions as the main
node. The other nodes, up to the limit of the certification, act as worker nodes.
Additional standby nodes don't count toward the number of certified nodes.

Note

Azure VM scale-out deployments of SAP HANA with standby node are only
possible using the Azure NetApp Files storage. No other SAP HANA certified
Azure storage allows the configuration of SAP HANA standby nodes.

For /hana/shared, we recommend the usage of Azure NetApp Files or Azure Files.
A typical basic design for a single node in a scale-out configuration, with /hana/shared
deployed on Azure NetApp Files, looks like:

The basic configuration of a VM node for SAP HANA scale-out looks like:

For /hana/shared, you use the native NFS service provided through Azure NetApp
Files or Azure Files.
All other disk volumes aren't shared among the different nodes and aren't based
on NFS. Installation configurations and steps for scale-out HANA installations with
non-shared /hana/data and /hana/log are provided later in this document.
For HANA certified storage that can be used, check the article SAP HANA Azure
virtual machine storage configurations.

For sizing the volumes or disks, you need to check the document SAP HANA TDI Storage
Requirements for the size required, dependent on the number of worker nodes. The
document provides a formula you need to apply to get the required capacity of the
volume.

The other design criterion that is displayed in the graphics of the single node
configuration for a scale-out SAP HANA VM is the VNet, or better, the subnet
configuration. SAP highly recommends a separation of the client/application facing
traffic from the communications between the HANA nodes. As shown in the graphics,
this goal is achieved by having two different vNICs attached to the VM. Both vNICs are
in different subnets and have two different IP addresses. You then control the flow of
traffic with routing rules using NSGs or user-defined routes.

In Azure, there are no means to enforce quality of service and quotas on specific vNICs.
As a result, the separation of client/application facing and intra-node communication
doesn't open any opportunities to prioritize one traffic stream over the other. Instead,
the separation remains a security measure in shielding the intra-node communications
of the scale-out configurations.

Note

SAP recommends separating network traffic to the client/application side and intra-
node traffic as described in this document. Therefore, putting an architecture in
place as shown in the last graphics is recommended. Also consult your security and
compliance team for requirements that deviate from the recommendation.

From a networking point of view the minimum required network architecture would
look like:

Installing SAP HANA scale-out in Azure

To install a scale-out SAP configuration, you need to perform the following rough steps:

Deploy new or adapt an existing Azure VNet infrastructure
Deploy the new VMs using Azure Managed Premium Storage, Ultra disk
volumes, and/or NFS volumes based on ANF
Adapt network routing to make sure that, for example, intra-node
communication between VMs isn't routed through an NVA
Install the SAP HANA main node
Adapt configuration parameters of the SAP HANA main node
Continue with the installation of the SAP HANA worker nodes

Installation of SAP HANA in scale-out configuration


When your Azure VM infrastructure is deployed, and all other preparations are done, you
need to install the SAP HANA scale-out configuration in these steps:

Install the SAP HANA main node according to SAP's documentation
When using Azure Premium Storage or Ultra disk storage with non-shared disks of
/hana/data and /hana/log, add the parameter basepath_shared = no to the
global.ini file, as shown in the fragment after this list. This parameter enables SAP
HANA to run in scale-out without shared /hana/data and /hana/log volumes
between the nodes. Details are documented in SAP Note #2080991. If you're using
NFS volumes based on ANF for /hana/data and /hana/log, you don't need to make
this change
After the eventual change in the global.ini parameter, restart the SAP HANA
instance
Add more worker nodes. For more information, see Add Hosts Using the
Command-Line Interface. Specify the internal network for SAP HANA inter-node
communication during the installation or afterwards using, for example, the local
hdblcm. For more detailed documentation, see SAP Note #2183363.
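
For illustration, the basepath_shared parameter mentioned above would appear in
global.ini roughly as follows. The paths are placeholders, and the exact section
placement should be verified against SAP Note #2080991:

```
[persistence]
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_shared = no
```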

To set up an SAP HANA scale-out system with a standby node, see the SUSE Linux
deployment instructions or the Red Hat deployment instructions.

SAP HANA Dynamic Tiering 2.0 for Azure virtual machines
In addition to the SAP HANA certifications on Azure M-series VMs, SAP HANA Dynamic
Tiering 2.0 is also supported on Microsoft Azure. For more information, see Links to DT
2.0 documentation. There's no difference in installing or operating the product. For
example, you can install SAP HANA Cockpit inside an Azure VM. However, there are
some mandatory requirements, as described in the following section, for official support
on Azure. Throughout the article, the abbreviation "DT 2.0" is going to be used instead
of the full name Dynamic Tiering 2.0.

SAP HANA Dynamic Tiering 2.0 isn't supported by SAP BW or S/4HANA. Main use cases
right now are native HANA applications.

Overview
The picture below gives an overview of DT 2.0 support on Microsoft Azure.
There's a set of mandatory requirements, which have to be followed to comply with the
official certification:
DT 2.0 must be installed on a dedicated Azure VM. It may not run on the same VM
where SAP HANA runs
SAP HANA and DT 2.0 VMs must be deployed within the same Azure VNet
The SAP HANA and DT 2.0 VMs must be deployed with Azure accelerated
networking enabled
Storage type for the DT 2.0 VMs must be Azure Premium Storage
Multiple Azure disks must be attached to the DT 2.0 VM
It's required to create a software raid / striped volume (either via lvm or mdadm)
using striping across the Azure disks

More details are going to be explained in the following sections.

Dedicated Azure VM for SAP HANA DT 2.0


On Azure IaaS, DT 2.0 is only supported on a dedicated VM. It isn't allowed to run DT 2.0
on the same Azure VM where the HANA instance is running. Initially two VM types can
be used to run SAP HANA DT 2.0:

M64-32ms
E32sv3

For more information on the VM type description, see Azure VM sizes - Memory.

Given the basic idea of DT 2.0, which is about offloading "warm" data in order to save
costs, it makes sense to use corresponding VM sizes. There's no strict rule though
regarding the possible combinations. It depends on the specific customer workload.

Recommended configurations would be:

SAP HANA VM type    DT 2.0 VM type
M128ms              M64-32ms
M128s               M64-32ms
M64ms               E32sv3
M64s                E32sv3

All combinations of SAP HANA-certified M-series VMs with supported DT 2.0 VMs (M64-
32ms and E32sv3) are possible.

Azure networking and SAP HANA DT 2.0


Installing DT 2.0 on a dedicated VM requires network throughput between the DT 2.0
VM and the SAP HANA VM of 10 Gbit/sec minimum. Therefore, it's mandatory to place
all VMs within the same Azure VNet and enable Azure accelerated networking.

For additional information about Azure accelerated networking, see Create an Azure VM
with Accelerated Networking using Azure CLI.

VM Storage for SAP HANA DT 2.0


According to DT 2.0 best practice guidance, the disk IO throughput should be a
minimum of 50 MB/sec per physical core.

According to the specifications for the two Azure VM types, which are supported for DT
2.0, the maximum disk IO throughput limit for the VM looks like:

E32sv3: 768 MB/sec (uncached) which means a ratio of 48 MB/sec per physical
core
M64-32ms: 1000 MB/sec (uncached) which means a ratio of 62.5 MB/sec per
physical core

It's required to attach multiple Azure disks to the DT 2.0 VM and create a software raid
(striping) on OS level to achieve the max limit of disk throughput per VM. A single Azure
disk can't provide the throughput to reach the max VM limit in this regard. Azure
Premium storage is mandatory to run DT 2.0.

Details about available Azure disk types can be found on the Select a disk type for
Azure IaaS VMs - managed disks page
Details about creating software raid via mdadm can be found on the Configure
software RAID on a Linux VM page
Details about configuring LVM to create a striped volume for max throughput can
be found on the Configure LVM on a virtual machine running Linux page

Depending on size requirements, there are different options to reach the max
throughput of a VM. Here are possible data volume disk configurations for every DT 2.0
VM type to achieve the upper VM throughput limit. The E32sv3 VM should be
considered as an entry level for smaller workloads. In case it turns out that it's not
fast enough, it might be necessary to resize the VM to M64-32ms. As the M64-32ms VM
has a lot of memory, the IO load might not reach the limit, especially for read-intensive
workloads. Therefore, fewer disks in the stripe set might be sufficient depending on the
customer-specific workload. But to be on the safe side, the disk configurations below
were chosen to guarantee the maximum throughput:

VM SKU      Disk Config 1      Disk Config 2     Disk Config 3     Disk Config 4       Disk Config 5
M64-32ms    4 x P50 -> 16 TB   4 x P40 -> 8 TB   5 x P30 -> 5 TB   7 x P20 -> 3.5 TB   8 x P15 -> 2 TB
E32sv3      3 x P50 -> 12 TB   3 x P40 -> 6 TB   4 x P30 -> 4 TB   5 x P20 -> 2.5 TB   6 x P15 -> 1.5 TB

Especially if the workload is read-intensive, turning on the Azure host cache "read-only",
as recommended for the data volumes of database software, could boost IO
performance. For the transaction log, however, the Azure host disk cache must be
"none".

Regarding the size of the log volume, a recommended starting point is a heuristic of 15%
of the data size. The log volume can be created by using different Azure disk types,
depending on cost and throughput requirements. For the log volume, high I/O
throughput is required.

When using the VM type M64-32ms, it's mandatory to enable Write Accelerator. Azure
Write Accelerator provides optimal disk write latency for the transaction log (only
available for M-series). There are some items to consider, though, like the maximum
number of disks per VM type. Details about Write Accelerator can be found on the
Azure Write Accelerator page.

Here are a few examples about sizing the log volume:

data volume size and disk type    log volume and disk type config 1    log volume and disk type config 2
4 x P50 -> 16 TB                  5 x P20 -> 2.5 TB                    3 x P30 -> 3 TB
6 x P15 -> 1.5 TB                 4 x P6 -> 256 GB                     1 x P15 -> 256 GB
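
A quick sanity check of the 15% heuristic against the first row of the table, as a
Python sketch:

```python
# The 15% log-size heuristic applied to the 16 TB data volume row.
data_tb = 16.0
log_tb = 0.15 * data_tb
print(f"recommended log volume: ~{log_tb:.1f} TB")  # ~2.4 TB
# Both table options satisfy this: 5 x P20 -> 2.5 TB, 3 x P30 -> 3 TB.
```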

Like for SAP HANA scale-out, the /hana/shared directory has to be shared between the
SAP HANA VM and the DT 2.0 VM. The same architecture as for SAP HANA scale-out,
using dedicated VMs that act as a highly available NFS server, is recommended. In
order to provide a shared backup volume, the identical design can be used. But it's up
to the customer whether HA is necessary or whether it's sufficient to just use a dedicated
VM with enough storage capacity to act as a backup server.

Links to DT 2.0 documentation


SAP HANA Dynamic Tiering installation and update guide
SAP HANA Dynamic Tiering tutorials and resources
SAP HANA Dynamic Tiering PoC
SAP HANA 2.0 SPS 02 dynamic tiering enhancements

Operations for deploying SAP HANA on Azure VMs
The following sections describe some of the operations related to deploying SAP HANA
systems on Azure VMs.

Back up and restore operations on Azure VMs


The following documents describe how to back up and restore your SAP HANA
deployment:

SAP HANA backup overview
SAP HANA file-level backup
SAP HANA storage snapshot benchmark

Start and restart VMs that contain SAP HANA
A prominent feature of the Azure public cloud is that you're charged only for your
computing minutes. For example, when you shut down a VM that is running SAP HANA,
you're billed only for the storage costs during that time. Another feature is available
when you specify static IP addresses for your VMs in your initial deployment. When you
restart a VM that has SAP HANA, the VM restarts with its prior IP addresses.

Use SAProuter for SAP remote support


If you have a site-to-site connection between your on-premises locations and Azure,
and you're running SAP components, then you're probably already running SAProuter.
In this case, complete the following items for remote support:

Maintain the private and static IP address of the VM that hosts SAP HANA in the
SAProuter configuration.
Configure the NSG of the subnet that hosts the HANA VM to allow traffic through
TCP/IP port 3299.

If you're connecting to Azure through the internet, and you don't have an SAProuter for
the VM with SAP HANA, then you need to install the component. Install SAProuter in a
separate VM in the Management subnet. The following image shows a rough schema
for deploying SAP HANA without a site-to-site connection and with SAProuter:
Be sure to install SAProuter in a separate VM and not in your Jumpbox VM. The separate
VM must have a static IP address. To connect your SAProuter to the SAProuter that is
hosted by SAP, contact SAP for an IP address. (The SAProuter that is hosted by SAP is
the counterpart of the SAProuter instance that you install on your VM.) Use the IP
address from SAP to configure your SAProuter instance. In the configuration settings,
the only necessary port is TCP port 3299.
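
For illustration, a permit entry in your SAProuter's saprouttab route table might look
like the following sketch. Both addresses and the target port are placeholders; follow
SAP's SAProuter documentation for the exact entries SAP support requires.

```
# saprouttab sketch - all values are placeholders
# P(ermit) <source> <destination> <service/port>
P <ip-of-saprouter-at-sap> <ip-of-hana-vm> 3200
```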

For more information on how to set up and maintain remote support connections
through SAProuter, see the SAP documentation .

High-availability with SAP HANA on Azure native VMs


If you're running SUSE Linux Enterprise Server or Red Hat, you can establish a Pacemaker
cluster with fencing devices. You can use the devices to set up an SAP HANA
configuration that uses synchronous replication with HANA System Replication and
automatic failover. For more information, see the articles listed in the 'Next Steps' section.

Next Steps
Get familiar with the articles as listed

SAP HANA Azure virtual machine storage configurations


Deploy a SAP HANA scale-out system with standby node on Azure VMs by using
Azure NetApp Files on SUSE Linux Enterprise Server
Deploy a SAP HANA scale-out system with standby node on Azure VMs by using
Azure NetApp Files on Red Hat Enterprise Linux
Deploy a SAP HANA scale-out system with HSR and Pacemaker on Azure VMs on
SUSE Linux Enterprise Server
Deploy a SAP HANA scale-out system with HSR and Pacemaker on Azure VMs on
Red Hat Enterprise Linux
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SQL Server Azure Virtual Machines
DBMS deployment for SAP NetWeaver
Article • 02/10/2023

This document covers several different areas to consider when deploying SQL Server for
SAP workload in Azure IaaS. As a precondition to this document, you should have read
the document Considerations for Azure Virtual Machines DBMS deployment for SAP
workload and other guides in the SAP workload on Azure documentation.

Important

The scope of this document is the Windows version on SQL Server. SAP is not
supporting the Linux version of SQL Server with any of the SAP software. The
document is not discussing Microsoft Azure SQL Database, which is a Platform as a
Service offer of the Microsoft Azure Platform. The discussion in this paper is about
running the SQL Server product as it's known for on-premises deployments in
Azure Virtual Machines, leveraging the Infrastructure as a Service capability of
Azure. Database capabilities and functionality between these two offers are
different and should not be mixed up with each other. For more information, see
Azure SQL Database .

In general, you should consider using the most recent SQL Server releases to run SAP
workload in Azure IaaS. The latest SQL Server releases offer better integration into some
of the Azure services and functionality. Or have changes that optimize operations in an
Azure IaaS infrastructure.

General documentation about SQL Server running in Azure VMs can be found in these
articles:

SQL Server on Azure Virtual Machines (Windows)


Automate management with the Windows SQL Server IaaS Agent extension
Configure Azure Key Vault integration for SQL Server on Azure VMs (Resource
Manager)
Checklist: Best practices for SQL Server on Azure VMs
Storage: Performance best practices for SQL Server on Azure VMs
HADR configuration best practices (SQL Server on Azure VMs)

Not all the content and statements made in the general SQL Server in Azure VM
documentation apply to SAP workload. But the documentation gives a good
impression of the principles. An example of functionality not supported for SAP
workload is the usage of FCI clustering.

There's some SQL Server in IaaS specific information you should know before
continuing:

SQL Version Support: Even with SAP Note #1928533 stating that the minimum
supported SQL Server release is SQL Server 2008 R2, the window of supported SQL
Server versions on Azure is also dictated by SQL Server's lifecycle. SQL Server 2012
extended maintenance ended mid of 2022. As a result, the current minimum
release for newly deployed systems should be SQL Server 2014. The more recent,
the better. The latest SQL Server releases offer better integration into some of the
Azure services and functionality. Or have changes that optimize operations in an
Azure IaaS infrastructure.
Using Images from Azure Marketplace: The fastest way to deploy a new Microsoft
Azure VM is to use an image from the Azure Marketplace. There are images in the
Azure Marketplace, which contain the most recent SQL Server releases. The images
where SQL Server already is installed can't be immediately used for SAP NetWeaver
applications. The reason is the default SQL Server collation is installed within those
images and not the collation required by SAP NetWeaver systems. In order to use
such images, check the steps documented in chapter Using a SQL Server image
out of the Microsoft Azure Marketplace.
SQL Server multi-instance support within a single Azure VM: This deployment
method is supported. However, be aware of resource limitations, especially around
network and storage bandwidth of the VM type that you're using. Detailed
information is available in article Sizes for virtual machines in Azure. These quota
limitations might prevent you to implement the same multi-instance architecture
as you can implement on-premises. As of the configuration and interference of
sharing the resources available within a single VM, the same considerations as on-
premises need to be taken into account.
Multiple SAP databases in one single SQL Server instance in a single VM:
Configurations like these are supported. Considerations of multiple SAP databases
sharing the shared resources of a single SQL Server instance are the same as for
on-premises deployments. Keep other limits like number of disks that can be
attached to a specific VM type in mind. Or network and storage quota limits of
specific VM types as detailed Sizes for virtual machines in Azure.

Recommendations on VM/VHD structure for SAP-related SQL Server deployments
In accordance with the general description, the operating system, the SQL Server
executables, and the SAP executables should be installed on separate Azure disks.
Typically, most of the SQL Server system databases aren't utilized at a high level by SAP
NetWeaver workload. Nevertheless, the system databases of SQL Server should be,
together with the other SQL Server directories, on a separate Azure disk. SQL Server
tempdb should be either located on the nonpersisted D:\ drive or on a separate disk.

With all SAP certified VM types (see SAP Note #1928533 ), tempdb data, and log
files can be placed on the non-persisted D:\ drive.
With SQL Server releases, where SQL Server installs tempdb with one data file by
default, it's recommended to use multiple tempdb data files. Be aware D:\ drive
volumes are different in size and capabilities based on the VM type. For exact sizes
of the D:\ drive of the different VMs, check the article Sizes for Windows virtual
machines in Azure.

These configurations enable tempdb to consume more space and, more important, more
I/O operations per second (IOPS) and storage bandwidth than the system drive is able
to provide. The nonpersistent D:\ drive also offers better I/O latency and throughput. In
order to determine the proper tempdb size, you can check the tempdb sizes on existing
systems.

Note

In case you place tempdb data files and log file into a folder on the D:\ drive that you
created, you need to make sure that the folder exists after a VM reboot. Since
the D:\ drive can be freshly initialized after a VM reboot, all file and directory
structures could be wiped out. A possibility to recreate eventual directory structures
on the D:\ drive before the start of the SQL Server service is documented in this
article.
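
The linked article documents the supported approach (typically a startup task that
recreates the folder and then starts the SQL Server service). Purely to illustrate the
idea, here's a minimal Python sketch; the folder path is a placeholder, and
MSSQLSERVER is the service name of the default instance.

```python
# Recreate the tempdb folder on the nonpersisted D:\ drive after a
# reboot, then start the SQL Server service. Illustrative only; the
# supported approach is documented in the article referenced above.
import os
import subprocess

TEMPDB_DIR = r"D:\tempdb"  # placeholder location for tempdb files

os.makedirs(TEMPDB_DIR, exist_ok=True)
# 'net start' returns nonzero if the service is already running,
# hence check=False.
subprocess.run(["net", "start", "MSSQLSERVER"], check=False)
```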

A VM configuration that runs SQL Server with an SAP database, where tempdb
data and tempdb log file are placed on the D:\ drive and on Azure premium storage v1
or v2, would look like:
The diagram displays a simple case. As alluded to in the article Considerations for Azure
Virtual Machines DBMS deployment for SAP workload, the Azure storage type, number,
and size of disks depend on different factors. But in general we recommend:

For smaller and mid-range deployments, use one large volume, which contains
the SQL Server data files. The reason behind this configuration is that it's easier to
deal with different I/O workloads in case the SQL Server data files don't have the
same free space. Whereas in large deployments, especially deployments where the
customer moved with a heterogeneous database migration to SQL Server in Azure,
we used separate disks and then distributed the data files across those disks. Such
an architecture is only successful when each disk has the same number of data
files, all the data files are the same size, and roughly have the same free space.
Use the D:\drive for tempdb as long as performance is good enough. If the overall
workload is limited in performance by tempdb located on the D:\ drive, you need
to move tempdb to Azure premium storage v1 or v2, or Ultra disk as
recommended in this article.

The SQL Server proportional fill mechanism distributes reads and writes to all datafiles
evenly, provided all SQL Server data files are the same size and have the same free
space. SAP on SQL Server delivers the best performance when reads and writes are
distributed evenly across all available datafiles. If a database has too few datafiles, or the
existing data files are highly unbalanced, the best method to correct this is an R3load
export and import. An R3load export and import involves downtime and should only be
done if there's an obvious performance problem that needs to be resolved. If the
datafiles are only moderately different in size, increase all datafiles to the same size, and
SQL Server will rebalance data over time. SQL Server automatically grows datafiles
evenly if trace flag 1117 is set or if SQL Server 2016 or higher is used.

Special for M-Series VMs


For Azure M-Series VMs, the latency writing into the transaction log can be reduced,
compared to Azure premium storage v1 performance, when using Azure Write
Accelerator. If the latency provided by premium storage v1 is limiting scalability of the
SAP workload, the disk that stores the SQL Server transaction log file can be enabled for
Write Accelerator. Details can be read in the document Write Accelerator. Azure Write
Accelerator doesn't work with Azure premium storage v2 and Ultra disk. In both cases,
the latency is better than what Azure premium storage v1 delivers.

Formatting the disks


For SQL Server, the NTFS block size for disks containing SQL Server data and log files
should be 64 KB. There's no need to format the D:\ drive. This drive comes pre-
formatted.

To avoid having the restore or creation of databases initialize the data files by zeroing
out their content, make sure that the user context the SQL Server service is
running in has the user right Perform volume maintenance tasks. For more
information, see Database instant file initialization.
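Whether instant file initialization is effective for the running instance can also be queried from SQL Server itself; a minimal sketch, assuming the SqlServer PowerShell module (the column shown is available from SQL Server 2016 SP1 on):

```powershell
# Minimal sketch: show the SQL Server service account and whether instant
# file initialization is enabled for it.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
"@
```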

SQL Server 2014 and more recent - Storing Database Files directly on Azure Blob Storage

SQL Server 2014 and later releases open the possibility to store database files directly on
Azure Blob storage without the 'wrapper' of a VHD around them. This functionality was
meant to address shortcomings of Azure block storage years back. These days, this
deployment method isn't recommended. Instead, choose Azure premium storage v1,
premium storage v2, or Ultra disk, dependent on the requirements.

SQL Server 2014 Buffer Pool Extension


SQL Server 2014 introduced a new feature called Buffer Pool Extension. This
functionality, though tested under SAP workload on Azure, didn't provide an improvement
in hosting the workload. Therefore, it shouldn't be considered.
Backup/Recovery considerations for SQL Server
Deploying SQL Server into Azure, you need to review your backup architecture. Even if
the system isn't a production system, the SAP database hosted by SQL Server must be
backed up periodically. Since Azure Storage keeps three images, a backup is now less
important in respect to compensating for a storage crash. The primary reason for
maintaining a proper backup and recovery plan is that you must be able to compensate for
logical/manual errors by providing point-in-time recovery capabilities. The goal is to
either use backups to restore the database back to a certain point in time, or to use the
backups in Azure to seed another system by copying the existing database.

There are several ways to back up and restore SQL Server databases in Azure. To get the
best overview and details, read the document Backup and restore for SQL Server on
Azure VMs. The article covers several different possibilities.
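One of those possibilities is SQL Server backup to URL, which writes the backup directly to Azure Blob storage. The following is a minimal sketch, assuming the SqlServer PowerShell module, a hypothetical database PRD, an existing blob container, and a SQL Server credential for the storage account already created as described in the linked article.

```powershell
# Minimal sketch: full database backup directly to Azure Blob storage.
# Storage account, container, and database name are assumptions.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
BACKUP DATABASE PRD
TO URL = 'https://<storageaccount>.blob.core.windows.net/backups/PRD_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
"@
```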

Using a SQL Server image out of the Microsoft Azure Marketplace
Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL
Server. For SAP customers who require licenses for SQL Server and Windows, using
these images might be an opportunity to cover the need for licenses by spinning up
VMs with SQL Server already installed. In order to use such images for SAP, the
following considerations need to be made:

The SQL Server non-evaluation versions incur higher costs than a 'Windows-
only' VM deployed from Azure Marketplace. To compare prices, see Windows
Virtual Machines Pricing and SQL Server Enterprise Virtual Machines Pricing .
You can only use SQL Server releases that are supported by SAP.
The collation of the SQL Server instance that is installed in the VMs offered in
the Azure Marketplace isn't the collation SAP NetWeaver requires the SQL Server
instance to run with. You can change the collation, though, with the directions in the
following section.

Changing the SQL Server Collation of a Microsoft


Windows/SQL Server VM
Since the SQL Server images in the Azure Marketplace aren't set up to use the collation
that is required by SAP NetWeaver applications, the collation needs to be changed immediately
after the deployment. For SQL Server, this change of collation can be done with the
following steps as soon as the VM has been deployed and an administrator is able to
log into the deployed VM:

Open a Windows Command Window as administrator.

Change the directory to C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012.

Execute the command: Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=<local_admin_account_name> /SQLCOLLATION=SQL_Latin1_General_Cp850_BIN2

<local_admin_account_name> is the account that was defined as the administrator account when deploying the VM for the first time through the gallery.

The process should only take a few minutes. To make sure that the step
ended up with the correct result, perform the following steps:

Open SQL Server Management Studio.


Open a Query Window.
Execute the command sp_helpsort in the SQL Server master database.

The desired result should look like:

Output

Latin1-General, binary code point comparison sort for Unicode Data, SQL
Server Sort Order 40 on Code Page 850 for non-Unicode Data

If the result is different, STOP any deployment and investigate why the setup command
didn't work as expected. Deployment of SAP NetWeaver applications onto a SQL Server
instance with a different SQL Server collation/codepage than the one mentioned is NOT
supported.
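As an alternative to the SSMS-based check, the server collation can also be verified from the command line; a minimal sketch, assuming the SqlServer PowerShell module:

```powershell
# Minimal sketch: verify the server collation without opening SSMS.
# The expected value for SAP NetWeaver is SQL_Latin1_General_CP850_BIN2.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'localhost' `
    -Query "SELECT SERVERPROPERTY('Collation') AS server_collation;"
```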

SQL Server High-Availability for SAP in Azure


Using SQL Server in Azure IaaS deployments for SAP, you have several different
possibilities to deploy the DBMS layer in a highly available manner. Azure provides different
up-time SLAs for a single VM using different Azure block storages, for a pair of VMs
deployed in an Azure availability set, and for a pair of VMs deployed across Azure Availability
Zones. For production systems, we expect you to deploy a pair of VMs within an
availability set or across two Availability Zones. One VM runs the active SQL Server
instance. The other VM runs the passive instance.
SQL Server Clustering using Windows Scale-out File
Server or Azure shared disk
With Windows Server 2016, Microsoft introduced Storage Spaces Direct. Based on a
Storage Spaces Direct deployment, SQL Server FCI clustering is supported in general.
Azure also offers Azure shared disks that could be used for Windows clustering. For SAP
workload, these HA options aren't supported.

SQL Server Log Shipping


One high availability functionality is SQL Server log shipping. If the VMs participating in
the HA configuration have working name resolution, there's no problem. The setup in
Azure doesn't differ from any setup that is done on-premises related to setting up log
shipping and the principles around log shipping. Details of SQL Server log shipping can
be found in the article About Log Shipping (SQL Server).

The SQL Server log shipping functionality was hardly used in Azure to achieve high
availability within one Azure region. However, in the following scenarios SAP customers
were using log shipping successfully with Azure:

Disaster Recovery scenarios from one Azure region into another Azure region
Disaster Recovery configuration from on-premises into an Azure region
Cut-over scenarios from on-premises to Azure. In those cases, log shipping is used
to synchronize the new DBMS deployment in Azure with the ongoing production
system on-premises. At the time of cutting over, production is shut down and it's
made sure that the last and latest transaction log backups got transferred to the
Azure DBMS deployment. Then the Azure DBMS deployment is opened up for
production.

SQL Server Always On


As Always On is supported for SAP on-premises (see SAP Note #1772688 ), it's
supported in combination with SAP in Azure. There are some special considerations
around deploying the SQL Server Availability Group Listener (not to be confused with
the Azure Availability Set). Therefore, some different installation steps are necessary.

Some considerations using an Availability Group Listener are:

Using an Availability Group Listener is only possible with Windows Server 2012 or
higher as guest OS of the VM. For Windows Server 2012, ensure that the update to
enable SQL Server Availability Group Listeners on Windows Server 2008 R2 and
Windows Server 2012-based Microsoft Azure virtual machines has been applied.
For Windows Server 2008 R2, this patch doesn't exist. In this case, Always On
would need to be used in the same manner as Database Mirroring, by specifying a
failover partner in the connection string (done through the SAP default.pfl
parameter dbs/mss/server - see SAP Note #965908 ).
Using an Availability Group Listener, you need to connect the database VMs to a
dedicated load balancer. You should assign static IP addresses to the network
interfaces of those VMs in the Always On configuration (defining a static IP address
is described in this article; see also the sketch after this list). Compared to DHCP,
static IP addresses prevent the assignment of new IP addresses in case both VMs
are stopped.
There are special steps required when building the WSFC cluster configuration
where the cluster needs a special IP address assigned, because Azure with its
current functionality would assign the cluster name the same IP address as the
node the cluster is created on. This behavior means a manual step must be
performed to assign a different IP address to the cluster.
The Availability Group Listener is going to be created in Azure with TCP/IP
endpoints, which are assigned to the VMs running the primary and secondary
replicas of the Availability group.
There might be a need to secure these endpoints with ACLs.
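For the static IP address assignment mentioned above, a minimal sketch using the Az PowerShell module could look like the following; resource group and NIC names are assumptions.

```powershell
# Minimal sketch: switch the private IP address of a database VM's network
# interface from dynamic to static, so the address is kept even if both
# Always On VMs are deallocated.
$nic = Get-AzNetworkInterface -ResourceGroupName 'sap-prod-rg' -Name 'sqldb-vm1-nic'
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
Set-AzNetworkInterface -NetworkInterface $nic
```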

Detailed documentation on deploying Always On with SQL Server in Azure VMs is available in these articles:

Introducing SQL Server Always On availability groups on Azure virtual machines.


Configure an Always On availability group on Azure virtual machines in different
regions.
Configure a load balancer for an Always On availability group in Azure.
HADR configuration best practices (SQL Server on Azure VMs)

Note

Reading Introducing SQL Server Always On availability groups on Azure virtual
machines, you're going to read about SQL Server's distributed network name (DNN)
listener. This functionality got introduced with SQL Server 2019 CU8. It makes the
usage of an Azure load balancer handling the virtual IP address of the Availability
Group Listener obsolete.

SQL Server Always On is the most commonly used high availability and disaster recovery
functionality in Azure for SAP workload deployments. Most customers use Always
On for high availability within a single Azure region. If the deployment is restricted to
two nodes only, you have two choices for connectivity:
Using the Availability Group Listener. With the Availability Group Listener, you're
required to deploy an Azure load balancer.
With SQL Server 2016 SP3, SQL Server 2017 CU25, or SQL Server 2019 CU8 or
more recent SQL Server releases on Windows Server 2016 or later, you can use the
distributed network name (DNN) listener instead of an Azure load balancer. DNN
eliminates the requirement to use an Azure load balancer.

Using the connectivity parameters of SQL Server Database Mirroring should only be
considered as a fallback for investigating issues with the other two methods. In this case,
you need to configure the connectivity of the SAP applications in a way where both
node names are named. Exact details of such an SAP-side configuration are documented
in SAP Note #965908 . By using this option, you would have no need to configure an
Availability Group Listener, and with that no Azure load balancer, and could therefore
rule out issues with those components. But recall, this option only works if you restrict
your Availability Group to span two instances.

Most customers are using the SQL Server Always On functionality for disaster recovery
functionality between Azure regions. Several customers also use the ability to perform
backups from a secondary replica.

SQL Server Transparent Data Encryption


Many customers are using SQL Server Transparent Data Encryption (TDE) when
deploying their SAP SQL Server databases in Azure. The SQL Server TDE functionality is
fully supported by SAP (see SAP Note #1380493 ).

Applying SQL Server TDE


In cases where you perform a heterogeneous migration from another DBMS, running
on-premises, to Windows/SQL Server running in Azure, you should create your empty
target database in SQL Server ahead of time. As the next step, you would apply the SQL Server
TDE functionality to this empty database. The reason to perform the steps in this
sequence is that the process of encrypting the empty database can take quite a while.
The SAP import processes would then import the data into the encrypted database
during the downtime phase. The overhead of importing into an encrypted database has
a much lower time impact than encrypting the database after the export phase within the
downtime phase. Negative experiences were made when trying to apply TDE with SAP
workload running on top of the database. Therefore, the recommendation is to treat the
deployment of TDE as an activity that needs to be done with no or low SAP workload on
the particular database. From SQL Server 2016 on, you can stop and resume the TDE
scan that performs the initial encryption. The document Transparent Data Encryption
(TDE) describes the command and details.
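As an illustration, enabling TDE on the empty target database could look like the following minimal sketch, assuming the SqlServer PowerShell module, a hypothetical database PRD, and the simple certificate-based key management variant (Azure Key Vault integration is covered later in this chapter).

```powershell
# Minimal sketch: encrypt the empty target database before the R3load
# import, so that only an empty database has to be encrypted.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate for PRD';

USE PRD;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE PRD SET ENCRYPTION ON;
"@
```

In a real deployment, back up the certificate and its private key right away, because restoring the encrypted database on another server requires them.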

In cases where you move SAP SQL Server databases from on-premises into Azure, we
recommend testing on which infrastructure you can get the encryption applied fastest.
For this case, keep these facts in mind:

You can't define how many threads are used to apply data encryption to the
database. The number of threads is majorly dependent on the number of disk
volumes the SQL Server data and log files are distributed over. This means the more
distinct volumes (drive letters), the more threads are engaged in parallel to
perform the encryption. Such a configuration contradicts a bit the earlier disk
configuration suggestion of building one or a smaller number of storage spaces
for the SQL Server database files in Azure VMs. A configuration with a few volumes
would lead to a few threads executing the encryption. A single encrypting thread
reads 64-KB extents, encrypts them, and then writes a record into the transaction log
file, telling that the extent got encrypted. As a result, the load on the transaction
log is moderate.
In older SQL Server releases, backup compression wasn't efficient anymore
when you encrypted your SQL Server database. This behavior could develop into
an issue when your plan was to encrypt your SQL Server database on-premises and
then copy a backup into Azure to restore the database in Azure. SQL Server
backup compression can achieve a compression ratio of factor 4.
With SQL Server 2016, SQL Server introduced new functionality that allows
compressing backup of encrypted databases as well in an efficient manner. See
this blog for some details.

Using Azure Key Vault


Azure offers the Key Vault service to store encryption keys. SQL Server, on the
other side, offers a connector to use Azure Key Vault as the store for the TDE certificates.

More details on using Azure Key Vault for SQL Server TDE are available in these articles:

Configure Azure Key Vault integration for SQL Server on Azure VMs (Resource
Manager).
More Questions From Customers About SQL Server Transparent Data Encryption –
TDE + Azure Key Vault.

Important
Using SQL Server TDE, especially with Azure Key Vault, it's recommended to use the
latest patches of SQL Server 2014, SQL Server 2016, and SQL Server 2017. The reason is
that, based on customer feedback, optimizations and fixes got applied to the code.
As an example, check KBA #4058175 .

Minimum deployment configurations


In this section, we suggest a set of minimum configurations for different sizes of
databases under SAP workload. It's too difficult to assess whether these sizes fit your
specific workload. In some cases, we might be generous on memory compared to the
database size. On the other side, the disk sizing might be too low for some of the
workloads. Therefore, these configurations should be treated as what they are:
starting points that you fine-tune to your specific workload and cost efficiency
requirements.

An example of a configuration for a little SQL Server instance with a database size
between 50 GB – 250 GB could look like

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | E4s_v3/v4/v5 (4 vCPU/32 GiB RAM) | |
| Accelerated Networking | Enable | |
| SQL Server version | SQL Server 2019 or more recent | |
| # of data files | 4 | |
| # of log files | 1 | |
| # of temp data files | 4 or default since SQL Server 2016 | |
| Operating system | Windows Server 2019 or more recent | |
| Disk aggregation | Storage Spaces if desired | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 2 x P10 (RAID0); Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Cache = Read Only for premium storage v1 |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance |

An example of a configuration for a small SQL Server instance with a database size
between 250 GB – 750 GB, such as a smaller SAP Business Suite system, could look like

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | E16s_v3/v4/v5 (16 vCPU/128 GiB RAM) | |
| Accelerated Networking | Enable | |
| SQL Server version | SQL Server 2019 or more recent | |
| # of data files | 8 | |
| # of log files | 1 | |
| # of temp data files | 8 or default since SQL Server 2016 | |
| Operating system | Windows Server 2019 or more recent | |
| Disk aggregation | Storage Spaces if desired | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 4 x P20 (RAID0); Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Cache = Read Only for premium storage v1 |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance |
An example of a configuration for a medium SQL Server instance with a database size
between 750 GB – 2,000 GB, such as a medium SAP Business Suite system, could look
like

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | E64s_v3/v4/v5 (64 vCPU/432 GiB RAM) | |
| Accelerated Networking | Enable | |
| SQL Server version | SQL Server 2019 or more recent | |
| # of data devices | 16 | |
| # of log devices | 1 | |
| # of temp data files | 8 or default since SQL Server 2016 | |
| Operating system | Windows Server 2019 or more recent | |
| Disk aggregation | Storage Spaces if desired | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance |

An example of a configuration for a larger SQL Server instance with a database size
between 2,000 GB and 4,000 GB, such as a larger SAP Business Suite system, could look
like

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | E96(d)s_v5 (96 vCPU/672 GiB RAM) | |
| Accelerated Networking | Enable | |
| SQL Server version | SQL Server 2019 or more recent | |
| # of data devices | 24 | |
| # of log devices | 1 | |
| # of temp data files | 8 or default since SQL Server 2016 | |
| Operating system | Windows Server 2019 or more recent | |
| Disk aggregation | Storage Spaces if desired | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 500 GiB - 800 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance |

An example of a configuration for a large SQL Server instance with a database size of 4
TB+, such as a large globally used SAP Business Suite system, could look like

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | M-Series (1.0 to 4.0 TB RAM) | |
| Accelerated Networking | Enable | |
| SQL Server version | SQL Server 2019 or more recent | |
| # of data devices | 32 | |
| # of log devices | 1 | |
| # of temp data files | 8 or default since SQL Server 2016 | |
| Operating system | Windows Server 2019 or more recent | |
| Disk aggregation | Storage Spaces if desired | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 4+ x P40 (RAID0); Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 4,500 IOPS and 125 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
| # and type of log disks | Premium storage v1: 1 x P30; Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Cache = NONE |
| SQL Server max memory parameter | 95% of Physical RAM | Assuming single instance |

As an example, this configuration is the DBMS VM configuration of an SAP Business
Suite on SQL Server. This VM hosts the 30 TB database of the single global SAP Business
Suite instance of a global company with over $200B annual revenue and over 200K full-time
employees. The system runs all the financial processing, sales and distribution
processing, and many more business processes out of different areas, including North
American payroll. The system has been running in Azure since the beginning of 2018, using
Azure M-series VMs as DBMS VMs. For high availability, the system uses Always On
with one synchronous replica in another Availability Zone of the same Azure region and
another asynchronous replica in another Azure region. The NetWeaver application layer
is deployed on Ev4 VMs.

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| VM Type | M192dms_v2 (192 vCPU/4,196 GiB RAM) | |
| Accelerated Networking | Enabled | |
| SQL Server version | SQL Server 2019 | |
| # of data files | 32 | |
| # of log files | 1 | |
| # of temp data files | 8 | |
| Operating system | Windows Server 2019 | |
| Disk aggregation | Storage Spaces | |
| File system | NTFS | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 16 x P40 | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P60 | Using Write Accelerator |
| # and type of tempdb disks | Premium storage v1: 1 x P30 | No caching |
| SQL Server max memory parameter | 95% of Physical RAM | |

General SQL Server for SAP on Azure Summary


There are many recommendations in this guide and we recommend you read it more
than once before planning your Azure deployment. In general, though, be sure to follow
the top general DBMS on Azure-specific recommendations:

1. Use the latest DBMS release, like SQL Server 2019, that has the most advantages in
Azure.
2. Carefully plan your SAP system landscape in Azure to balance the data file layout
and Azure restrictions:

Don't have too many disks, but have enough to ensure you can reach your
required IOPS.
Only stripe across disks if you need to achieve a higher throughput.

3. Never install software or put any files that require persistence on the D:\ drive as
it's non-permanent and anything on this drive can be lost at a Windows reboot or
VM restart.
4. Use your DBMS vendor's HA/DR solution to replicate database data.
5. Always use Name Resolution, don't rely on IP addresses.
6. Using SQL Server TDE, apply the latest SQL Server patches.
7. Be careful using SQL Server images from the Azure Marketplace. If you use a SQL
Server image, you must change the instance collation before installing any SAP
NetWeaver system on it.
8. Install and configure the SAP Host Monitoring for Azure as described in
Deployment Guide.

Next steps
Read the article

Considerations for Azure Virtual Machines DBMS deployment for SAP workload
Azure Virtual Machines Oracle database
deployment for SAP workload
Article • 04/21/2024

This document covers several different areas to consider when deploying Oracle
Database for SAP workload in Azure IaaS. Before you read this document, we
recommend you read Considerations for Azure Virtual Machines DBMS deployment for
SAP workload. We also recommend that you read other guides in the SAP workload on
Azure documentation.

You can find information about Oracle versions and corresponding OS versions that are
supported for running SAP on Oracle on Azure in SAP Note 2039619 .

General information about running SAP Business Suite on Oracle can be found at SAP
on Oracle . Oracle supports running Oracle databases on Microsoft Azure. For more
information about general support for Windows Hyper-V and Azure, check the Oracle
and Microsoft Azure FAQ .

The following SAP notes are relevant for an Oracle installation

| Note number | Note title |
| --- | --- |
| 1738053 | SAPinst for Oracle ASM installation SAP ONE Support Launchpad |
| 2896926 | ASM disk group compatibility NetWeaver SAP ONE Support Launchpad |
| 1550133 | Using Oracle Automatic Storage Management (ASM) with SAP NetWeaver based Products SAP ONE Support Launchpad |
| 888626 | Redo log layout for high-end systems SAP ONE Support Launchpad |
| 105047 | Support for Oracle functions in the SAP environment SAP ONE Support Launchpad |
| 2799920 | Patches for 19c: Database SAP ONE Support Launchpad |
| 974876 | Oracle Transparent Data Encryption (TDE) SAP ONE Support Launchpad |
| 2936683 | Oracle Linux 8: SAP Installation and Upgrade SAP ONE Support Launchpad |
| 1672954 | Oracle 11g, 12c, 18c and 19c: Usage of hugepages on Linux |
| 1171650 | Automated Oracle DB parameter check |

Specifics for Oracle Database on Oracle Linux


Oracle supports running its database instances on Microsoft Azure with Oracle Linux as
the guest OS. For more information about general support for Windows Hyper-V and
Azure, see the Azure and Oracle FAQ .

The specific scenario of SAP applications using Oracle Databases is supported as well.
Details are discussed in the next part of the document.

General Recommendations for running SAP on Oracle on Azure

When installing or migrating existing SAP on Oracle systems to Azure, the following
deployment pattern should be followed:

1. Use the most recent Oracle Linux version available (Oracle Linux 8.6 or higher).
2. Use the most recent Oracle Database version available with the latest SAP Bundle
Patch (SBP) (Oracle 19 Patch 15 or higher) 2799920 - Patches for 19c: Database .
3. Use Automatic Storage Management (ASM) for small, medium, and large sized
databases on block storage.
4. Azure Premium Storage SSD should be used. Don't use Standard or other storage
types.
5. ASM removes the requirement for Mirror Log. Follow the guidance from Oracle in
Note 888626 - Redo log layout for high-end systems .
6. Use ASMLib and don't use udev.
7. Azure NetApp Files deployments should use Oracle dNFS (Oracle’s own high
performance Direct NFS solution).
8. Large Oracle databases benefit greatly from large System Global Area (SGA) sizes.
Large customers should deploy on Azure M-series with 4 TB or more RAM size

Set Linux Huge Pages to 75% of Physical RAM size


Set System Global Area (SGA) to 90% of Huge Page size
Set the Oracle parameter USE_LARGE_PAGES = ONLY - The value ONLY is
preferred over the value TRUE as the value ONLY is supposed to deliver more
consistent and predictable performance. The value TRUE may allocate both
large 2MB and standard 4K pages. The value ONLY is going to always force
large 2MB pages. If the number of available huge pages isn't sufficient or not
correctly configured, the database instance is going to fail to start with error
code: ora-27102 : out of memory Linux_x86_64 Error 12 : can't allocate
memory. If there's insufficient contiguous memory, Oracle Linux may need to
be restarted and/or the Operating System Huge Page parameters
reconfigured.
9. Oracle Home should be located outside of the "root" volume or disk. Use a
separate disk or ANF volume. The disk holding the Oracle Home should be 64
Gigabyte in size or larger.
10. The size of the boot disk for large high performance Oracle database servers is
important. As a minimum a P10 disk should be used for M-series or E-series. Don't
use small disks such as P4 or P6. A small disk can cause performance issues.
11. Accelerated Networking must be enabled on all Virtual Machines. Upgrade to the
latest Oracle Linux release if there are any problems enabling Accelerated
Networking.
12. Check for updates in this documentation and SAP note 2039619 - SAP Applications
on Microsoft Azure using the Oracle Database: Supported Products and Versions -
SAP ONE Support Launchpad .

For information about which Oracle versions and corresponding OS versions are
supported for running SAP on Oracle on Azure Virtual Machines, see SAP
Note 2039619 .

General information about running SAP Business Suite on Oracle can be found in
the SAP on Oracle community page . SAP on Oracle on Azure is only supported on
Oracle Linux (and not Suse or Red Hat) for application and database servers. ASCS/ERS
servers can use RHEL/SUSE because Oracle client isn't installed or used on these VMs.
Application Servers (PAS/AAS) shouldn't be installed on these VMs. Refer to SAP Note
3074643 - OLNX: FAQ: if Pacemaker for Oracle Linux is supported in SAP Environment .
Oracle Real Application Cluster (RAC) isn't supported on Azure because RAC would
require Multicast networking.

Storage configuration
There are two recommended storage deployment patterns for SAP on Oracle on Azure:

1. Oracle Automatic Storage Management (ASM)


2. Azure NetApp Files (ANF) with Oracle dNFS (Direct NFS)

Customers currently running Oracle databases on EXT4 or XFS file systems with Logical
Volume Manager (LVM) are encouraged to move to ASM. There are considerable
performance, administration, and reliability advantages to running on ASM compared to
LVM. ASM reduces complexity, improves supportability, and makes administration tasks
simpler. This documentation contains links for Oracle Database Administrators (DBAs) to
learn how to install and manage ASM.

Azure provides multiple storage solutions. The following table details the support status:

| Storage type | Oracle support | Sector size | Oracle Linux 8.x or higher | Windows Server 2019 |
| --- | --- | --- | --- | --- |
| Block storage types | | | | |
| Premium SSD | Supported | 512e | ASM recommended; LVM supported | No support for ASM on Windows |
| Premium SSD v2 | Supported | 4K Native or 512e¹ | ASM recommended; LVM supported | No support for ASM on Windows. Change log file disks from 4K Native to 512e |
| Standard SSD | Not supported | | | |
| Standard HDD | Not supported | | | |
| Ultra disk | Supported | 4K Native | ASM recommended; LVM supported | No support for ASM on Windows. Change log file disks from 4K Native to 512e |
| Network storage types | | | | |
| Azure NetApp Files (ANF) | Supported | - | Oracle dNFS required | Not supported |
| Azure Files NFS | Not supported | | | |
| Azure Files SMB | Not supported | | | |

¹ 512e is supported on Premium SSD v2 for Windows systems. 512e configurations aren't
recommended for Linux customers. Migrate to 4K Native using the procedure in MOS
512/512e sector size to 4K Native Review (Doc ID 1133713.1)
Other considerations that apply:

1. No support for DIRECTIO with 4K Native sector size. Recommended settings for
FILESYSTEMIO_OPTIONS for LVM configurations:

LVM - If disks with 512/512e geometry are used, FILESYSTEMIO_OPTIONS =


SETALL
LVM - If disks with 4K Native geometry are used, FILESYSTEMIO_OPTIONS =
ASYNC

2. Oracle 19c and higher fully supports 4K Native sector size with both ASM and LVM
3. Oracle 19c and higher on Linux – when moving from 512e storage to 4K Native
storage Log sector sizes must be changed
4. To migrate from 512/512e sector size to 4K Native Review (Doc ID 1133713.1) – see
section "Offline Migration to 4KB Sector Disks"
5. SAPInst writes to the pfile during installation. If the $ORACLE_HOME/dbs is on a 4K
disk set filesystemio_options=asynch and see the Section "Datafile Support of 4kB
Sector Disks" in MOS Supporting 4K Sector Disks (Doc ID 1133713.1)
6. No support for ASM on Windows platforms
7. No support for 4K Native sector size for Log volume on Windows platforms. SSDv2
and Ultra Disk must be changed to 512e via the "Edit Disk" pencil icon in the Azure
Portal
8. 4K Native sector size is supported only on Data volumes for Windows platforms.
4K isn't supported for Log volumes on Windows
9. We recommend reviewing these MOS articles:

Oracle Linux: File System's Buffer Cache versus Direct I/O (Doc ID 462072.1)
Supporting 4K Sector Disks (Doc ID 1133713.1)
Using 4k Redo Logs on Flash, 4k-Disk and SSD-based Storage (Doc ID
1681266.1)
Things To Consider For Setting filesystemio_options And disk_asynch_io (Doc
ID 1987437.1)

We recommend using Oracle ASM on Linux with ASMLib. Performance, administration,
support, and configuration are optimized with this deployment pattern. Oracle ASM and
Oracle dNFS set the correct parameters or bypass parameters (such as
FILESYSTEMIO_OPTIONS) and therefore deliver better performance and reliability.

Oracle Automatic Storage Management (ASM)


Checklist for Oracle Automatic Storage Management:
1. All SAP on Oracle on Azure systems are running ASM, including Development,
Quality Assurance, and Production, and including small, medium, and large databases
2. ASMLib is used and not UDEV. UDEV is required for multiple SANs, a scenario
that doesn't exist on Azure
3. ASM should be configured for External Redundancy. Azure Premium SSD storage
provides triple redundancy. Azure Premium SSD matches the reliability and
integrity of any other storage solution. For optional safety, customers can consider
Normal Redundancy for the Log Disk Group
4. Mirroring Redo Log files is optional for ASM 888626 - Redo log layout for high-
end systems
5. ASM Disk Groups configured as per Variant 1, 2 or 3 below
6. ASM Allocation Unit size = 4MB (default). Very Large Databases (VLDB) OLAP
systems such as BW may benefit from larger ASM Allocation Unit size. Change only
after confirming with Oracle support
7. ASM Sector Size and Logical Sector Size = default (UDEV isn't recommended but
requires 4k)
8. If the COMPATIBLE.ASM disk group attribute is set to 11.2 or greater for a disk
group, you can create, copy, or move an Oracle ASM SPFILE into ACFS file system.
Review the Oracle documentation on moving pfile into ACFS. SAPInst isn't creating
the pfile in ACFS by default
9. Appropriate ASM Variant is used. Production systems should use Variant 2 or 3

Oracle Automatic Storage Management Disk Groups


Part II of the official Oracle Guide describes the installation and the management of
ASM:

Oracle Automatic Storage Management Administrator's Guide, 19c


Oracle Grid Infrastructure Grid Infrastructure Installation and Upgrade Guide, 19c
for Linux

The following ASM limits exist for Oracle Database 12c or later:

511 disk groups, 10,000 ASM disks in a Disk Group, 65,530 ASM disks in a storage
system, 1 million files for each Disk Group. More info here: Performance and Scalability
Considerations for Disk Groups (oracle.com)

Review the ASM documentation in the relevant SAP Installation Guide for Oracle
available from https://help.sap.com/viewer/nwguidefinder
Variant 1 – small to medium data volumes up to 3 TB,
restore time not critical
Customer has small or medium sized databases where backup and/or restore +
Recovery of all databases can be accomplished using RMAN in a timely fashion.
Example: When a complete Oracle ASM disk group, with data files, from one or more
databases is broken and all data files from all databases need to be restored to a newly
created Oracle ASM disk group using RMAN.

Oracle ASM disk group recommendation:

| ASM Disk Group Name | Stores | Azure Storage |
| --- | --- | --- |
| +DATA | All data files; control file (first copy); online redo logs (first copy) | 3-6 x P30 (1 TiB); to increase database size, add extra P30 disks |
| +ARCH | Control file (second copy); archived redo logs | 2 x P20 (512 GiB) |
| +RECO | Control file (third copy); RMAN backups (optional); recovery area (optional) | 2 x P20 (512 GiB) |

Variant 2 – medium to large data volumes between 3 TB and 12 TB, restore time important

Customer has medium to large sized databases where backup and/or restore +
recovery of all databases can't be accomplished in a timely fashion.

Usually customers are using RMAN, Azure Backup for Oracle, and/or disk snapshot
techniques in combination.

Major differences to Variant 1 are:

1. Separate Oracle ASM Disk Group for each database
2. <DBNAME> plus "_" is used as a prefix for the name of the DATA disk group
3. The number of the DATA disk group is appended if the database spans over more
than one DATA disk group
4. No online redo logs are located in the "data" disk groups. Instead an extra disk
group is used for the first member of each online redo log group.

| ASM Disk Group Name | Stores | Azure Storage |
| --- | --- | --- |
| +<DBNAME>_DATA[#] | All data files; all temp files; control file (first copy) | 3-12 x P30 (1 TiB); to increase database size, add extra P30 disks |
| +OLOG | Online redo logs (first copy) | 3 x P20 (512 GiB) |
| +ARCH | Control file (second copy); archived redo logs | 3 x P20 (512 GiB) |
| +RECO | Control file (third copy); RMAN backups (optional); fast recovery area (optional) | 3 x P20 (512 GiB) |

Variant 3 – huge data and data change volumes more than 5 TB, restore time crucial

Customer has a huge database where backup and/or restore + recovery of a single
database can't be accomplished in a timely fashion.

Usually customers are using RMAN, Azure Backup for Oracle, and/or disk snapshot
techniques in combination. In this variant, each relevant database file type is separated
to different Oracle ASM disk groups.

| ASM Disk Group Name | Stores | Azure Storage |
| --- | --- | --- |
| +<DBNAME>_DATA[#] | All data files; all temp files; control file (first copy) | 5-30 or more x P30 (1 TiB) or P40 (2 TiB); to increase database size, add extra P30 disks |
| +OLOG | Online redo logs (first copy) | 3-8 x P20 (512 GiB) or P30 (1 TiB); for more safety, "Normal Redundancy" can be selected for this ASM Disk Group |
| +ARCH | Control file (second copy); archived redo logs | 3-8 x P20 (512 GiB) or P30 (1 TiB) |
| +RECO | Control file (third copy); RMAN backups (optional); fast recovery area (optional) | 3 x P30 (1 TiB), P40 (2 TiB) or P50 (4 TiB) |

Note

Azure Host Disk Cache for the DATA ASM Disk Group can be set to either Read
Only or None. All other ASM Disk Groups should be set to None. On BW or SCM a
separate ASM Disk Group for TEMP can be considered for large or busy systems.

Adding Space to ASM + Azure Disks


Oracle ASM Disk Groups can either be extended by adding extra disks or by extending
current disks. We recommend adding extra disks rather than extending existing disks.
Review MOS Notes 1684112.1 and 2176737.1
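On the Azure side, attaching the extra disk can be scripted; the following is a minimal sketch with the Az PowerShell module, with all resource names, sizes, and the LUN as assumptions. The new disk still needs to be partitioned and stamped for ASM inside the VM afterwards.

```powershell
# Minimal sketch: create and attach an extra Premium SSD data disk to a
# database VM before adding it to an ASM disk group.
$rg = 'sap-prod-rg'
$vm = Get-AzVM -ResourceGroupName $rg -Name 'oradb-vm1'

$diskConfig = New-AzDiskConfig -Location $vm.Location -SkuName 'Premium_LRS' `
    -CreateOption Empty -DiskSizeGB 1024
$disk = New-AzDisk -ResourceGroupName $rg -DiskName 'oradb-vm1-data05' -Disk $diskConfig

# Host caching: None for log disk groups; Read Only is an option for +DATA.
$vm = Add-AzVMDataDisk -VM $vm -Name $disk.Name -ManagedDiskId $disk.Id `
    -Lun 5 -CreateOption Attach -Caching None
Update-AzVM -ResourceGroupName $rg -VM $vm
```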

ASM adds a disk to the disk group: asmca -silent -addDisk -diskGroupName DATA -disk
'/dev/sdd1'

ASM automatically rebalances the data. To check the rebalancing, run this command:

ps -ef | grep rbal


oraasm 4288 1 0 Jul28 ? 00:04:36 asm_rbal_oradb1

Documentation is available with:

How to Resize ASM Disk Groups Between Multiple Zones (aemcorp.com)


RESIZING - Altering Disk Groups (oracle.com)

Monitoring SAP on Oracle ASM Systems on Azure


Run an Oracle AWR report as the first step when troubleshooting a performance
problem. Disk performance metrics are detailed in the AWR report.

Disk performance can be monitored from inside Oracle Enterprise Manager and via
external tools. Documentation that might help is available here:

Using Views to Display Oracle ASM Information


ASMCMD Disk Group Management Commands (oracle.com)

OS level monitoring tools can't monitor ASM disks as there's no recognizable file
system. Freespace monitoring must be done from within Oracle.

Training Resources on Oracle Automatic Storage Management (ASM)

Oracle DBAs that aren't familiar with Oracle ASM can follow the training materials and
resources here:

SAP on Oracle with ASM on Microsoft Azure - Part1 - Microsoft Tech Community
Oracle19c DB [ ASM ] installation on [ Oracle Linux 8.3 ] [ Grid | ASM | UDEV | OEL
8.3 ] [ VMware ] - YouTube
ASM Administrator's Guide (oracle.com)
Oracle for SAP Development Update (May 2022)
Performance and Scalability Considerations for Disk Groups (oracle.com)
Migrating to Oracle ASM with Oracle Enterprise Manager
Using RMAN to migrate to ASM | The Oracle Mentor (wordpress.com)
What is Oracle ASM to Azure IaaS? - Simple Talk (red-gate.com)
ASM Command-Line Utility (ASMCMD) (oracle.com)
Useful asmcmd commands - DBACLASS DBACLASS
Installing and Configuring Oracle ASMLIB Software
Azure NetApp Files (ANF) with Oracle dNFS
(Direct NFS)
The combination of Azure VMs and ANF is a robust and proven combination
implemented by many customers on an exceptionally large scale.

Databases of 100+ TB are already running productively on this combination. To start, we
wrote a detailed blog on how to set up this combination:

Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files - Microsoft Tech
Community

More general information

Solution architectures using Azure NetApp Files | Oracle


Solution architectures using Azure NetApp Files | SAP on anyDB

Mirror Log is required on dNFS ANF Production systems.

Even though the ANF is highly redundant, Oracle still requires a mirrored redo-logfile
volume. The recommendation is to create two separate volumes and configure origlogA
together with mirrlogB and origlogB together with mirrlogA. In this case, you make use
of a distributed load balancing of the redo-logfiles.

The mount option "nconnect" isn't recommended when the dNFS client is configured.
dNFS manages the IO channel and makes use of multiple sessions, so this option is
obsolete and can cause manifold issues. The dNFS client is going to ignore the mount
options and is going to handle the IO directly.

Both NFS versions (v3 and v4.1) with ANF are supported for the Oracle binaries, data-
and log-files.

We highly recommend using the Oracle dNFS client for all Oracle volumes.

Recommended mount options are:

| NFS Version | Mount Options |
| --- | --- |
| NFSv3 | rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime |
| NFSv4.1 | rw,vers=4.1,rsize=262144,wsize=262144,hard,timeo=600,noatime |
ANF Backup
With ANF, some key features are available, like consistent snapshot-based backups, low
latency, and remarkably high performance. From version 6 of the Azure
Application Consistent Snapshot tool (AzAcSnap) for ANF on, Oracle databases can be
configured for consistent database snapshots.

Those snapshots remain on the actual data volume and must be copied away using ANF
CRR (Cross Region Replication) Cross-region replication of ANF or other backup tools.
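A minimal sketch of taking such a snapshot with AzAcSnap, assuming the tool (version 6 or later) is already installed and configured for the Oracle database; the prefix and retention values are examples only:

```powershell
# Minimal sketch: application-consistent snapshot of the Oracle data
# volumes via AzAcSnap.
azacsnap -c backup --volume data --prefix oracle_daily --retention 7
```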

SAP on Oracle on Azure with LVM


ASM is the default recommendation from Oracle for all SAP systems of any size on
Azure. Performance, reliability, and support are better for customers using ASM. Oracle
provides documentation and training for DBAs to transition to ASM. In cases where the
Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft, and SAP
to use ASM the following LVM configuration should be used.

Note that when creating the LVM, the "-i" option must be used to evenly distribute data
across the number of disks in the LVM group.

Mirror Log is required when running LVM.

Minimum configuration Linux:

| Component | Disk | Host Cache | Striping¹ |
| --- | --- | --- | --- |
| /oracle/<SID>/origlogA & mirrlogB | Premium | None | Not needed |
| /oracle/<SID>/origlogB & mirrlogA | Premium | None | Not needed |
| /oracle/<SID>/sapdata1...n | Premium | Read-only² | Recommended |
| /oracle/<SID>/oraarch³ | Premium | None | Not needed |
| Oracle Home, saptrace, ... | Premium | None | None |

1. Striping: LVM stripe using RAID0


2. During R3Load migrations, the Host Cache option for SAPDATA should be set to
None
3. oraarch: LVM is optional
The disk selection for hosting Oracle's online redo logs is driven by IOPS requirements.
It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as
the volume, IOPS, and throughput satisfy the requirements.

Performance configuration Linux:

| Component | Disk | Host Cache | Striping¹ |
| --- | --- | --- | --- |
| /oracle/<SID>/origlogA | Premium | None | Can be used |
| /oracle/<SID>/origlogB | Premium | None | Can be used |
| /oracle/<SID>/mirrlogAB | Premium | None | Can be used |
| /oracle/<SID>/mirrlogBA | Premium | None | Can be used |
| /oracle/<SID>/sapdata1...n | Premium | Read-only² | Recommended |
| /oracle/<SID>/oraarch³ | Premium | None | Not needed |
| Oracle Home, saptrace, ... | Premium | None | None |

1. Striping: LVM stripe using RAID0


2. During R3load migrations, the Host Cache option for SAPDATA should be set to
None
3. oraarch: LVM is optional

Azure Infra: Virtual machine Throughput Limits & Azure Disk Storage Options

Oracle Automatic Storage Management (ASM) deployments can evaluate these storage technologies:
1. Azure Premium Storage – currently the default choice
2. Managed Disk Bursting - Managed disk bursting - Azure Virtual Machines |
Microsoft Docs
3. Azure Write Accelerator
4. Online disk extension for Azure Premium SSD storage is still in progress

Log write times can be improved on Azure M-Series VMs by enabling Write Accelerator.
Enable Azure Write Accelerator for the Azure Premium Storage disks used by the ASM
Disk Group for online redo log files. For more information, see Write Accelerator.

Using Write Accelerator is optional but can be enabled if the AWR report indicates
higher than expected log write times.

Azure Virtual Machine Throughput Limits


Each Azure Virtual machine (VM) type has limits for CPU, Disk, Network, and RAM. These
limits are documented in the links below

The following recommendations should be followed when selecting a VM type:

1. Ensure the disk throughput and IOPS are sufficient for the workload and at least
equal to the aggregate throughput of the disks (see the sketch after this list)
2. Consider enabling paid bursting especially for Redo Log disk(s)
3. For ANF, the Network throughput is important as all storage traffic is counted as
"Network" rather than Disk throughput
4. Review this blog for Network tuning for M-series Optimizing Network Throughput
on Azure M-series VMs HCMT (microsoft.com)
5. Review this link that describes how to use an AWR report to select the correct
Azure VM
6. Azure Intel Ev5 Edv5 and Edsv5-series - Azure Virtual Machines |Microsoft Docs
7. Azure AMD Eadsv5 Easv5 and Eadsv5-series - Azure Virtual Machines |Microsoft
Docs
8. Azure M-series/Msv2-series M-series - Azure Virtual Machines |Microsoft Docs and
Msv2/Mdsv2 Medium Memory Series - Azure Virtual Machines | Microsoft Docs
9. Azure Mv2 Mv2-series - Azure Virtual Machines | Microsoft Docs
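As a starting point for the check in the first item of this list, the documented uncached disk limits of a VM size can be queried; a minimal sketch with the Az PowerShell module, with region and VM size as assumptions, and relying on the capability names exposed by the compute resource SKU API:

```powershell
# Minimal sketch: show the uncached disk IOPS and throughput capabilities
# documented for a given VM size in a given region.
Get-AzComputeResourceSku -Location 'westeurope' |
    Where-Object { $_.ResourceType -eq 'virtualMachines' -and $_.Name -eq 'Standard_M64ms' } |
    Select-Object -ExpandProperty Capabilities |
    Where-Object { $_.Name -like '*UncachedDisk*' }
```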

Backup/restore
For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same
way as they are on bare metal and Hyper-V. Oracle Recovery Manager (RMAN) is also
supported for backups to disk and restores from disk.

For more information about how you can use Azure Backup and Recovery services for
Oracle databases, see:

Back up and recover an Oracle Database 12c database on an Azure Linux virtual
machine
Azure Backup service is also supporting Oracle backups as described in the
article Back up and recover an Oracle Database 19c database on an Azure Linux
VM using Azure Backup.
High availability
Oracle Data Guard is supported for high availability and disaster recovery purposes. To
achieve automatic failover in Data Guard, you need to use Fast-Start Failover (FSFA). The
Observer functionality (FSFA) triggers the failover. If you don't use FSFA, you can only
use a manual failover configuration. For more information, see Implement Oracle Data
Guard on an Azure Linux virtual machine.

Disaster Recovery aspects for Oracle databases in Azure are presented in the
article Disaster recovery for an Oracle Database 12c database in an Azure environment.

Another good resource is the Oracle whitepaper Setting up Oracle 12c Data Guard for SAP Customers

Huge Pages & Large Oracle SGA Configurations


VLDB SAP on Oracle on Azure deployments apply SGA sizes in excess of 3TB. Modern
versions of Oracle handle large SGA sizes well and significantly reduce IO. Review the
AWR report and increase the SGA size to reduce read IO.

As general guidance, Linux Huge Pages should be configured to approximately 75% of
the VM RAM size. The SGA size can be set to 90% of the Huge Page size. As an
approximate example: an M192ms VM with 4 TB of RAM would have Huge
Pages set to approximately 3 TB. The SGA can be set to a value a little less, such as 2.95 TB.

Large SAP customers running on High Memory Azure VMs greatly benefit from
HugePages as described in this article

On NUMA systems, vm.min_free_kbytes should be set to 524288 * <# of NUMA nodes>. See
Oracle Linux: Recommended Value of vm.min_free_kbytes Kernel Tuning Parameter
(Doc ID 2501269.1)

Links & other Oracle Linux Utilities


Oracle Linux provides a useful GUI management utility:

Oracle web console Oracle Linux: Install Cockpit Web Console on Oracle Linux
Upstream Cockpit Project — Cockpit Project (cockpit-project.org)

Oracle Linux has a new package management tool – DNF

Oracle Linux 8: Package Management made easy with free videos | Oracle Linux Blog
Oracle® Linux 8 Managing Software on Oracle Linux - Chapter 1 Yum DNF

Memory and NUMA configurations can be tested and benchmarked with a useful tool -
Oracle Real Application Testing (RAT)

Oracle Real Application Testing: What Is It and How Do You Use It? (aemcorp.com)

Information on UDEV Log Corruption issue Oracle Redolog corruption on Azure | Oracle
in the field (wordpress.com)

Oracle ASM in Azure corruption - follow up (dbaharrison.blogspot.com)

Data corruption on Hyper-V or Azure when running Oracle ASM - Red Hat Customer
Portal

Set up Oracle ASM on an Azure Linux virtual machine - Azure Virtual Machines |
Microsoft Docs

Oracle Configuration guidelines for SAP installations in Azure VMs on Windows

SAP on Oracle on Azure also supports Windows. The recommendations for Windows
deployments are summarized below:

1. The following Windows releases are recommended:
Windows Server 2022 (only from Oracle Database 19.13.0 on)
Windows Server 2019 (only from Oracle Database 19.5.0 on)
2. There's no support for ASM on Windows. Windows Storage Spaces should be used
to aggregate disks for optimal performance
3. Install the Oracle Home on a dedicated independent disk (don't install Oracle
Home on the C: Drive)
4. All disks must be formatted NTFS
5. Follow the Windows Tuning guide from Oracle and enable large pages, lock pages
in memory and other Windows specific settings

At the time of writing, ASM for Windows customers on Azure isn't supported. The SAP
Software Provisioning Manager (SWPM) for Windows doesn't support ASM currently.

Storage Configurations for SAP on Oracle on Windows

Minimum configuration Windows:


| Component | Disk | Host Cache | Striping¹ |
| --- | --- | --- | --- |
| E:\oracle\<SID>\origlogA & mirrlogB | Premium | None | Not needed |
| F:\oracle\<SID>\origlogB & mirrlogA | Premium | None | Not needed |
| G:\oracle\<SID>\sapdata1...n | Premium | Read-only² | Recommended |
| H:\oracle\<SID>\oraarch³ | Premium | None | Not needed |
| I:\Oracle Home, saptrace, ... | Premium | None | None |

1. Striping: Windows Storage Spaces


2. During R3load migrations, the Host Cache option for SAPDATA should be set to
None
3. oraarch: Windows Storage Spaces is optional

The disk selection for hosting Oracle's online redo logs is driven by IOPS requirements.
It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as
the volume, IOPS, and throughput satisfy the requirements.

Performance configuration Windows:

| Component | Disk | Host Cache | Striping¹ |
| --- | --- | --- | --- |
| E:\oracle\<SID>\origlogA | Premium | None | Can be used |
| F:\oracle\<SID>\origlogB | Premium | None | Can be used |
| G:\oracle\<SID>\mirrlogAB | Premium | None | Can be used |
| H:\oracle\<SID>\mirrlogBA | Premium | None | Can be used |
| I:\oracle\<SID>\sapdata1...n | Premium | Read-only² | Recommended |
| J:\oracle\<SID>\oraarch³ | Premium | None | Not needed |
| K:\Oracle Home, saptrace, ... | Premium | None | None |

1. Striping: Windows Storage Spaces


2. During R3load migrations, the Host Cache option for SAPDATA should be set to
None
3. oraarch: Windows Storage Spaces is optional
Links for Oracle on Windows
Overview of Windows Tuning (oracle.com)
Postinstallation Configuration Tasks on Windows (oracle.com)
SAP on Windows Presentation (oracle.com)
2823030 - Oracle on MS WINDOWS Large Pages

Next steps
Read the article

Considerations for Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS
deployment for SAP workload
Article • 03/08/2024

With Microsoft Azure, you can migrate your existing SAP application running on IBM Db2 for Linux, UNIX, and
Windows (LUW) to Azure virtual machines. With SAP on IBM Db2 for LUW, administrators and developers can
still use the same development and administration tools, which are available on-premises. General information
about running SAP Business Suite on IBM Db2 for LUW is available via the SAP Community Network (SCN) in
SAP on IBM Db2 for Linux, UNIX, and Windows .

For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note 2233094 .

There are various articles for SAP workload on Azure. We recommend beginning with Get started with SAP on
Azure VMs and then read about other areas of interest.

The following SAP Notes are related to SAP on Azure regarding the area covered in this document:

| Note number | Title |
| --- | --- |
| 1928533 | SAP Applications on Azure: Supported Products and Azure VM types |
| 2015553 | SAP on Microsoft Azure: Support Prerequisites |
| 1999351 | Troubleshooting Enhanced Azure Monitoring for SAP |
| 2178632 | Key Monitoring Metrics for SAP on Microsoft Azure |
| 1409604 | Virtualization on Windows: Enhanced Monitoring |
| 2191498 | SAP on Linux with Azure: Enhanced Monitoring |
| 2233094 | DB6: SAP Applications on Azure Using IBM DB2 for Linux, UNIX, and Windows - Additional Information |
| 2243692 | Linux on Microsoft Azure (IaaS) VM: SAP license issues |
| 1984787 | SUSE LINUX Enterprise Server 12: Installation notes |
| 2002167 | Red Hat Enterprise Linux 7.x: Installation and Upgrade |
| 1597355 | Swap-space recommendation for Linux |
As a preread to this document, review Considerations for Azure Virtual Machines DBMS deployment for SAP
workload. Review other guides in the SAP workload on Azure.

IBM Db2 for Linux, UNIX, and Windows Version Support


SAP on IBM Db2 for LUW on Microsoft Azure Virtual Machine Services is supported as of Db2 version 10.5.

For information about supported SAP products and Azure VM types, refer to SAP Note
1928533 .
IBM Db2 for Linux, UNIX, and Windows Configuration Guidelines for SAP Installations in Azure VMs

Storage Configuration
For an overview of Azure storage types for SAP workload, consult the article Azure Storage types for SAP
workload. All database files must be stored on mounted disks of Azure block storage (Windows: NTFS, Linux:
xfs, supported as of Db2 11.1, or ext3).

Remote shared volumes like the Azure services in the listed scenarios are NOT supported for Db2 database
files:

Microsoft Azure File Service for all guest OS.

Azure NetApp Files for Db2 running in Windows guest OS.

Remote shared volumes like the Azure services in the listed scenarios are supported for Db2 database files:

Hosting Linux guest OS based Db2 data and log files on NFS shares hosted on Azure NetApp Files is
supported!

If you're using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload apply to deployments with the
Db2 DBMS as well.

As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The
exact quotas depend on the VM type used. A list of VM types with their quotas can be found here
(Linux) and here (Windows).

As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single
mounted disk. However, you should always separate the data files and transaction log files on different
disks/VHDs.

For performance considerations, also refer to chapter 'Data Safety and Performance Considerations for
Database Directories' in SAP installation guides.

Alternatively, you can use Windows Storage Pools, which are only available in Windows Server 2012 and higher
as described Considerations for Azure Virtual Machines DBMS deployment for SAP workload. On Linux you can
use LVM or mdadm to create one large logical device over multiple disks.

For Azure M-series VMs, you can reduce the latency of writes into the transaction logs by factors, compared to
Azure Premium storage performance, by using Azure Write Accelerator. Therefore, you should deploy Azure
Write Accelerator for one or more VHDs that form the volume for the Db2 transaction logs. Details can be read
in the document Write Accelerator.

IBM Db2 LUW 11.5 released support for 4-KB sector size. Though you need to enable the usage of 4-KB sector
size with 11.5 by the configuration setting db2set DB2_4K_DEVICE_SUPPORT=ON as documented in:

Db2 11.5 performance variable

Db2 registry and environment variables

For older Db2 versions, a 512-byte sector size must be used. Premium SSD disks are 4-KB native and have
512-byte emulation. Ultra disk uses a 4-KB sector size by default. You can enable a 512-byte sector size during
creation of an Ultra disk. Details are available in Using Azure ultra disks. This 512-byte sector size is a
prerequisite for IBM Db2 LUW versions lower than 11.5.

On Windows, when using Storage Pools for the Db2 storage paths for the log_dir, sapdata, and saptmp directories,
you must specify a physical disk sector size of 512 bytes. When using Windows Storage Pools, you must create the
storage pools manually via the command-line interface, using the parameter -LogicalSectorSizeDefault . For more
information, see New-StoragePool.

Recommendation on VM and disk structure for IBM Db2 deployment
IBM Db2 for SAP NetWeaver Applications is supported on any VM type listed in SAP support note 1928533 .
Recommended VM families for running IBM Db2 database are Esd_v4/Eas_v4/Es_v3 and M/M_v2-series for
large multi-terabyte databases. The IBM Db2 transaction log disk write performance can be improved by
enabling the M-series Write Accelerator.

The following is a baseline configuration for various sizes and uses of SAP on Db2 deployments, from small to
large. The list is based on Azure premium storage. However, Azure Ultra disk is fully supported with Db2 as
well and can be used instead. Use the values for capacity, burst throughput, and burst IOPS to define the Ultra
disk configuration. You can limit the IOPS for /db2/<SID>/log_dir to around 5,000 IOPS, as shown in the Azure
CLI sketch below.
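As an illustration, creating an Ultra disk for the log volume capped at 5,000 IOPS with the Azure CLI could look
like the following sketch; the resource group, disk name, region, zone, and size values are placeholder
assumptions:

az disk create \
  --resource-group rg-sap-db2 \
  --name db2-log-ultra \
  --location westeurope \
  --zone 1 \
  --sku UltraSSD_LRS \
  --size-gb 512 \
  --disk-iops-read-write 5000 \
  --disk-mbps-read-write 400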

Extra small SAP system: database size 50 - 200 GB: example Solution Manager

| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E4ds_v4 (4 vCPU, 32 GiB RAM) | /db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | | |
| | /db2/<SID>/sapdata | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 256 KB | ReadOnly |
| | /db2/<SID>/saptmp | P6 | 1 | 240 | 50 | 128 | 3,500 | 170 | | |
| | /db2/<SID>/log_dir | P6 | 2 | 480 | 100 | 128 | 7,000 | 340 | 64 KB | |
| | /db2/<SID>/offline_log_dir | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | | |

Small SAP system: database size 200 - 750 GB: small Business Suite

| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E16ds_v4 (16 vCPU, 128 GiB RAM) | /db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | | |
| | /db2/<SID>/sapdata | P15 | 4 | 4,400 | 500 | 1,024 | 14,000 | 680 | 256 KB | ReadOnly |
| | /db2/<SID>/saptmp | P6 | 2 | 480 | 100 | 128 | 7,000 | 340 | 128 KB | |
| | /db2/<SID>/log_dir | P15 | 2 | 2,200 | 250 | 512 | 7,000 | 340 | 64 KB | |
| | /db2/<SID>/offline_log_dir | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | | |

Medium SAP system: database size 500 - 1000 GB: small Business Suite

| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E32ds_v4 (32 vCPU, 256 GiB RAM) | /db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | | |
| | /db2/<SID>/sapdata | P30 | 2 | 10,000 | 400 | 2,048 | 10,000 | 400 | 256 KB | ReadOnly |
| | /db2/<SID>/saptmp | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 128 KB | |
| | /db2/<SID>/log_dir | P20 | 2 | 4,600 | 300 | 1,024 | 7,000 | 340 | 64 KB | |
| | /db2/<SID>/offline_log_dir | P15 | 1 | 1,100 | 125 | 256 | 3,500 | 170 | | |

Large SAP system: database size 750 - 2000 GB: Business Suite

| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E64ds_v4 (64 vCPU, 504 GiB RAM) | /db2 | P6 | 1 | 240 | 50 | 64 | 3,500 | 170 | | |
| | /db2/<SID>/sapdata | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 256 KB | ReadOnly |
| | /db2/<SID>/saptmp | P15 | 2 | 2,200 | 250 | 512 | 7,000 | 340 | 128 KB | |
| | /db2/<SID>/log_dir | P20 | 4 | 9,200 | 600 | 2,048 | 14,000 | 680 | 64 KB | |
| | /db2/<SID>/offline_log_dir | P20 | 1 | 2,300 | 150 | 512 | 3,500 | 170 | | |

Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| M128s (128 vCPU, 2,048 GiB RAM) | /db2 | P10 | 1 | 500 | 100 | 128 | 3,500 | 170 | | |
| | /db2/<SID>/sapdata | P40 | 4 | 30,000 | 1,000 | 8,192 | 30,000 | 1,000 | 256 KB | ReadOnly |
| | /db2/<SID>/saptmp | P20 | 2 | 4,600 | 300 | 1,024 | 7,000 | 340 | 128 KB | |
| | /db2/<SID>/log_dir | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 64 KB | WriteAccelerator |
| | /db2/<SID>/offline_log_dir | P30 | 1 | 5,000 | 200 | 1,024 | 5,000 | 200 | | |

Using Azure NetApp Files

The usage of NFS v4.1 volumes based on Azure NetApp Files (ANF) is supported with IBM Db2, hosted in a SUSE
or Red Hat Linux guest OS. You should create at least four different volumes, as in the following list:

Shared volume for saptmp1, sapmnt, usr_sap, <sid> _home, db2 <sid> _home, db2_software
One data volume for sapdata1 to sapdatan
One log volume for the redo log directory
One volume for the log archives and backups

A fifth potential volume could be an ANF volume that you use for longer-term backups, which you snapshot and
store in Azure Blob storage.

The configuration could look like the following:


The performance tier and the size of the ANF-hosted volumes must be chosen based on the performance
requirements. However, we recommend the Ultra performance level for the data and the log volume. Mixing
block storage and shared storage types for the data and log volume isn't supported.
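For illustration, creating one of the volumes on the Ultra service level with the Azure CLI could look like the
following sketch; the resource group, account, pool, volume, network, and size values are placeholder
assumptions:

az netappfiles volume create \
  --resource-group rg-sap-db2 \
  --account-name anf-account \
  --pool-name ultra-pool \
  --name db2data \
  --location westeurope \
  --service-level Ultra \
  --usage-threshold 4096 \
  --file-path db2data \
  --vnet sap-vnet \
  --subnet anf-subnet \
  --protocol-types NFSv4.1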

As for mount options, mounting those volumes could look like the following (replace <SID> and <sid> with the
SID of your SAP system):

vi /etc/idmapd.conf
# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2shared /mnt
mkdir -p /db2/Software /db2/AN1/saptmp /usr/sap/<SID> /sapmnt/<SID> /home/<sid>adm /db2/db2<sid> /db2/<SID>/db2_software
mkdir -p /mnt/Software /mnt/saptmp /mnt/usr_sap /mnt/sapmnt /mnt/<sid>_home /mnt/db2_software /mnt/db2<sid>
umount /mnt

mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2data /mnt
mkdir -p /db2/AN1/sapdata/sapdata1 /db2/AN1/sapdata/sapdata2 /db2/AN1/sapdata/sapdata3 /db2/AN1/sapdata/sapdata4
mkdir -p /mnt/sapdata1 /mnt/sapdata2 /mnt/sapdata3 /mnt/sapdata4
umount /mnt

mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2log /mnt
mkdir /db2/AN1/log_dir
mkdir /mnt/log_dir
umount /mnt

mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2backup /mnt
mkdir /db2/AN1/backup
mkdir /mnt/backup
mkdir /db2/AN1/offline_log_dir /db2/AN1/db2dump
mkdir /mnt/offline_log_dir /mnt/db2dump
umount /mnt

Note

The mount options hard and sync are required.

Backup/Restore
The backup/restore functionality for IBM Db2 for LUW is supported in the same way as on standard Windows
Server Operating Systems and Hyper-V.

Make sure that you have a valid database backup strategy in place.

As in bare-metal deployments, backup/restore performance depends on how many volumes can be read in
parallel and what the throughput of those volumes might be. In addition, the CPU consumption used by
backup compression may play a significant role on VMs with up to eight CPU threads. Therefore, one can
assume:

The fewer the number of disks used to store the database devices, the smaller the overall throughput in
reading
The smaller the number of CPU threads in the VM, the more severe the impact of backup compression
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the throughput

To increase the number of targets to write to, two options can be used or combined depending on your needs (see the sketch after this list):

Striping the backup target volume over multiple disks to improve the IOPS throughput on that striped
volume
Using more than one target directory to write the backup to
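For example, a Db2 backup that writes to two target directories with backup compression enabled could look like
this minimal sketch; the database alias <SID> and the target paths are placeholders:

db2 backup database <SID> to /backup/target1, /backup/target2 compress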

Note

Db2 on Windows doesn't support the Windows VSS technology. As a result, the application-consistent VM
backup of the Azure Backup service can't be used for VMs in which the Db2 DBMS is deployed.

High Availability and Disaster Recovery

Linux Pacemaker

Important

For Db2 versions 11.5.6 and higher, we highly recommend the integrated solution using Pacemaker from IBM.

Integrated solution using Pacemaker


Alternate or additional configurations available on Microsoft Azure: Db2 high availability disaster
recovery (HADR) with Pacemaker is supported. Both SLES and RHEL operating systems are
supported. This configuration enables high availability of IBM Db2 for SAP. Deployment guides:

SLES: High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker
RHEL: High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server

Windows Cluster Server


Microsoft Cluster Server (MSCS) isn't supported.

Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration
have working name resolution, the setup in Azure doesn't differ from any setup that is done on-premises. It
isn't recommended to rely on IP resolution only.

Don't use Geo-Replication for the storage accounts that store the database disks. For more information, see
the document Considerations for Azure Virtual Machines DBMS deployment for SAP workload.

Accelerated Networking
For Db2 deployments on Windows, we highly recommend using the Azure functionality of Accelerated
Networking as described in the document Azure Accelerated Networking . Also consider recommendations
made in Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
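A sketch of enabling accelerated networking on an existing NIC with the Azure CLI follows; the resource group
and NIC names are placeholder assumptions, and the VM must be stopped/deallocated or of a size that supports
the change while running:

az network nic update \
  --resource-group rg-sap-db2 \
  --name db2-vm-nic \
  --accelerated-networking true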

Specifics for Linux deployments


As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single
disk. Whereas you always should separate the data files and transaction log files on different disks.

If the IOPS or I/O throughput of a single Azure VHD isn't sufficient, you can use LVM (Logical Volume
Manager) or mdadm, as described in the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload, to create one large logical device over multiple disks. For the disks containing
the Db2 storage paths for your sapdata and saptmp directories, you must specify a physical disk sector size of
512 bytes.

Other
All other general areas, like Azure availability sets or SAP monitoring, apply to deployments of VMs with the
IBM Db2 database as well. We describe these general areas in Considerations for Azure Virtual Machines DBMS
deployment for SAP workload.

Next steps
Read the article:

Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP ASE Azure Virtual Machines DBMS
deployment for SAP workload
Article • 03/27/2023

This document covers several different areas to consider when deploying SAP ASE in
Azure IaaS. As a precondition to this document, you should have read the document
Considerations for Azure Virtual Machines DBMS deployment for SAP workload and
other guides in the SAP workload on Azure documentation. This document covers SAP
ASE running on Linux and on Windows operating systems. The minimum supported
release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). It's recommended to
deploy the latest version of SAP ASE and the latest patch level. As a minimum, SAP ASE
16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is recommended. The most recent
version of SAP ASE can be found in Targeted ASE 16.0 Release Schedule and CR list
Information .

Additional information about release support with SAP applications or installation media
location can be found, besides in the SAP Product Availability Matrix, in these locations:

SAP support note #2134316


SAP support note #1941500
SAP support note #1590719
SAP support note #1973241

Remark: Throughout documentation within and outside the SAP world, the name of the
product is referenced as Sybase ASE or SAP ASE or in some cases both. In order to stay
consistent, we use the name SAP ASE in this documentation.

Operating system support


The SAP Product Availability Matrix contains the supported Operating System and SAP
Kernel combinations for each SAP application. Linux distributions SLES 12.x, SLES 15.x,
RHEL 7.x and RHEL 8.x are fully supported. Oracle Linux as operating system for SAP ASE
isn't supported. It's recommended to use the most recent Linux releases available.
Windows customers should use Windows Server 2016 or Windows Server 2019 releases.
Older releases of Windows such as Windows 2012 are technically supported but the
latest Windows version is always recommended.

Specifics to SAP ASE on Windows


Starting with Microsoft Azure, you can migrate your existing SAP ASE applications to
Azure Virtual Machines. SAP ASE in an Azure Virtual Machine enables you to reduce the
total cost of ownership of deployment, management, and maintenance of enterprise
breadth applications by easily migrating these applications to Microsoft Azure. With SAP
ASE in an Azure Virtual Machine, administrators and developers can still use the same
development and administration tools that are available on-premises.

Microsoft Azure offers numerous different virtual machine types that allow you to run
smallest SAP systems and landscapes up to large SAP systems and landscapes with
thousands of users. SAP sizing SAPS numbers of the different SAP certified VM SKUs is
provided in SAP support note #1928533 .

Documentation to install SAP ASE on Windows can be found in the SAP ASE Installation
Guide for Windows

Lock Pages in Memory is a setting that will prevent the SAP ASE database buffer from
being paged out. This setting is useful for large busy systems with a high memory
demand. Contact BC-DB-SYB for more information.

Linux operating system specific settings


On SLES VMs, run saptune with profile SAP-ASE. Tune RHEL VMs as described in
69988 .
Linux Huge Pages should be enabled by default and can be verified with the command

cat /proc/meminfo

The page size is typically 2048 KB. For details, see the article Huge Pages on Linux
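For example, to show just the huge-page counters (a quick check; exact values vary by distribution and VM size):

grep -i huge /proc/meminfo    # Hugepagesize is typically 2048 kB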

Recommendations on VM and disk structure for SAP ASE deployments

SAP ASE for SAP NetWeaver Applications is supported on any VM type listed in SAP
support note #1928533 . Typical VM types used for medium-size SAP ASE database
servers include Esv3. Large multi-terabyte databases can use M-series VM types.

The SAP ASE transaction log disk write performance may be improved by enabling the
M-series Write Accelerator. Write Accelerator should be tested carefully with SAP ASE
due to the way that SAP ASE performs log writes. Review SAP support note #2816580
and consider running a performance test.
Write Accelerator is designed for the transaction log disk only. The disk-level cache should
be set to NONE. Don't be surprised if Azure Write Accelerator doesn't show improvements
similar to other DBMSs. Based on the way SAP ASE writes into the transaction log, there
might be little to no acceleration by Azure Write Accelerator.

Separate disks are recommended for data devices and log devices. The system
databases sybsecurity and saptools don't require dedicated disks and can be placed on
the disks containing the SAP database data and log devices.

File systems, stripe size & IO balancing


SAP ASE writes data sequentially into disk storage devices unless configured otherwise.
This means an empty SAP ASE database with four devices will write data into the first
device only. The other disk devices will only be written to when the first device is full.
The amount of READ and WRITE IO to each SAP ASE device is likely to be different. To
balance disk IO across all available Azure disks, either Windows Storage Spaces or Linux
LVM2 needs to be used. On Linux, it's recommended to use XFS file system to format
the disks. The LVM stripe size should be tested with a performance test. 128 KB stripe
size is a good starting point. On Windows, the NTFS Allocation Unit Size (AUS) should
be tested. 64 KB can be used as a starting value.

It's recommended to configure Automatic Database Expansion as described in the
article Configuring Automatic Database Space Expansion in SAP Adaptive Server
Enterprise and SAP support note #1815695 .
Sample SAP ASE on Azure virtual machine, disk and file
system configurations
The templates below show sample configurations for both Linux and Windows. Before
confirming the virtual machine and disk configuration, ensure that the network and
storage bandwidth quotas of the individual VM are sufficient to meet the business
requirement. Also keep in mind that different Azure VM types have different maximum
numbers of disks that can be attached to the VM. For example, an E4s_v3 VM has a limit
of 48 MB/sec storage I/O throughput. If the storage throughput required by database
backup activity demands more than 48 MB/sec, then a larger VM type with more storage
bandwidth throughput is unavoidable. When configuring Azure storage, you also need
to keep in mind that, especially with Azure premium storage, the throughput and IOPS
per GB of capacity do change. See more on this topic in the article What disk types are
available in Azure?. The quotas for specific Azure VM types are documented in the
article Memory optimized virtual machine sizes and the articles linked to it.

Note

If a DBMS system is being moved from on-premises to Azure, it's recommended to
perform monitoring on the VM and assess the CPU, memory, IOPS, and storage
throughput. Compare the peak values observed with the VM quota limits
documented in the articles mentioned above.

The examples given below are for illustrative purposes and can be modified based on
individual needs. Due to the design of SAP ASE, the number of data devices isn't as
critical as with other databases. The number of data devices detailed in this document is
a guide only. The suggested configurations should be treated as what they are: starting
points that need some fine-tuning to your workload and cost efficiencies.

An example of a configuration for an extra small SAP ASE DB server with a database size
between 50 GB – 250 GB could look like this:

| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM Type | E4s_v3/v4/v5 (4 vCPU/32 GB RAM) | E4s_v3/v4/v5 (4 vCPU/32 GB RAM) | --- |
| Accelerated Networking | Enable | Enable | --- |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | --- |
| # of data devices | 4 | 4 | --- |
| # of log devices | 1 | 1 | --- |
| # of temp devices | 1 | 1 | More for SAP BW workload |
| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | --- |
| Disk aggregation | Storage Spaces | LVM2 | --- |
| File system | NTFS | XFS | --- |
| Format block size | Needs workload testing | Needs workload testing | --- |
| # and type of data disks | Premium storage v1: 2 x P10 (RAID0); Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Premium storage v1: 2 x P10 (RAID0); Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Premium storage v1: 1 x P20; Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Cache = NONE |
| ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | Assuming single instance |
| # of backup devices | 4 | 4 | --- |
| # and type of backup disks | 1 | 1 | --- |

An example of a configuration for a small SAP ASE DB server with a database size
between 250 GB – 750 GB, such as a smaller SAP Business Suite system, could look like this:

| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM Type | E16s_v3/v4/v5 (16 vCPU/128 GB RAM) | E16s_v3/v4/v5 (16 vCPU/128 GB RAM) | --- |
| Accelerated Networking | Enable | Enable | --- |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | --- |
| # of data devices | 8 | 8 | --- |
| # of log devices | 1 | 1 | --- |
| # of temp devices | 1 | 1 | More for SAP BW workload |
| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | --- |
| Disk aggregation | Storage Spaces | LVM2 | --- |
| File system | NTFS | XFS | --- |
| Format block size | Needs workload testing | Needs workload testing | --- |
| # and type of data disks | Premium storage v1: 4 x P20 (RAID0); Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Premium storage v1: 4 x P20 (RAID0); Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Premium storage v1: 1 x P20; Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Cache = NONE |
| ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | Assuming single instance |
| # of backup devices | 4 | 4 | --- |
| # and type of backup disks | 1 | 1 | --- |

An example of a configuration for a medium SAP ASE DB server with a database size
between 750 GB – 2,000 GB, such as a larger SAP Business Suite system, could look like this:

| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM Type | E64s_v3/v4/v5 (64 vCPU/432 GB RAM) | E64s_v3/v4/v5 (64 vCPU/432 GB RAM) | --- |
| Accelerated Networking | Enable | Enable | --- |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | --- |
| # of data devices | 16 | 16 | --- |
| # of log devices | 1 | 1 | --- |
| # of temp devices | 1 | 1 | More for SAP BW workload |
| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | --- |
| Disk aggregation | Storage Spaces | LVM2 | --- |
| File system | NTFS | XFS | --- |
| Format block size | Needs workload testing | Needs workload testing | --- |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput | Cache = NONE |
| ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | Assuming single instance |
| # of backup devices | 4 | 4 | --- |
| # and type of backup disks | 1 | 1 | --- |

An example of a configuration for a larger SAP ASE DB server with a database size
between 2,000 GB – 4,000 GB, such as a larger SAP Business Suite system, could look like this:

| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM Type | E96(d)s_v5 (96 vCPU/672 GiB RAM) | E96(d)s_v5 (96 vCPU/672 GiB RAM) | --- |
| Accelerated Networking | Enable | Enable | --- |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | --- |
| # of data devices | 16 | 16 | --- |
| # of log devices | 1 | 1 | --- |
| # of temp devices | 1 | 1 | More for SAP BW workload |
| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | --- |
| Disk aggregation | Storage Spaces | LVM2 | --- |
| File system | NTFS | XFS | --- |
| Format block size | Needs workload testing | Needs workload testing | --- |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 500 GiB - 1,000 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 500 GiB - 1,000 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput | Premium storage v1: 1 x P20; Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput | Cache = NONE |
| ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | Assuming single instance |
| # of backup devices | 4 | 4 | --- |
| # and type of backup disks | 1 | 1 | --- |

An example of a configuration for a large SAP ASE DB server with a database size of 4
TB+, such as a larger globally used SAP Business Suite system, could look like this:

| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM Type | M-Series (1.0 to 4.0 TB RAM) | M-Series (1.0 to 4.0 TB RAM) | --- |
| Accelerated Networking | Enable | Enable | --- |
| SAP ASE version | 16.0.03.07 or higher | 16.0.03.07 or higher | --- |
| # of data devices | 32 | 32 | --- |
| # of log devices | 1 | 1 | --- |
| # of temp devices | 1 | 1 | More for SAP BW workload |
| Operating system | Windows Server 2019 | SLES 12 SP5, 15 SP1 or later, or RHEL 7.9, 8.1/8.2/8.4 | --- |
| Disk aggregation | Storage Spaces | LVM2 | --- |
| File system | NTFS | XFS | --- |
| Format block size | Needs workload testing | Needs workload testing | --- |
| # and type of data disks | Premium storage v1: 4+ x P30 (RAID0); Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 3,000 IOPS and 125 MB/sec throughput per disk | Premium storage v1: 4+ x P30 (RAID0); Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 3,000 IOPS and 125 MB/sec throughput per disk | Cache = Read Only; consider Azure Ultra disk |
| # and type of log disks | Premium storage v1: 1 x P30; Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Premium storage v1: 1 x P30; Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Consider Write Accelerator or Azure Ultra disk |
| ASE MaxMemory parameter | 90% of Physical RAM | 90% of Physical RAM | Assuming single instance |
| # of backup devices | 16 | 16 | --- |
| # and type of backup disks | 4 | 4 | Use LVM2/Storage Spaces |

NFS v4.1 volumes hosted on Azure NetApp Files are another alternative to use for SAP ASE
database storage. The principal structure of such a configuration should look like this:

In the example, the SID of the database was A11. The sizes and the performance tiers of
the Azure NetApp Files based volumes depend on the database volume and the
IOPS and throughput you require. For sapdata and saplog, we recommend starting with
the Ultra performance tier to be able to provide enough bandwidth. For many non-
production deployments, the Premium performance tier can be sufficient. For more
details on specific sizing and limitations of Azure NetApp Files for database usage, read
the chapter Sizing for HANA database on Azure NetApp Files in NFS v4.1 volumes on
Azure NetApp Files for SAP HANA.

Backup & restore considerations for SAP ASE on Azure

Increasing the number of data and backup devices increases backup and restore
performance. It's recommended to stripe the Azure disks that are hosting the SAP ASE
backup device, as shown in the tables earlier. Care should be taken to balance the
number of backup devices and disks and to ensure that backup throughput doesn't
exceed 40%-50% of total VM throughput quota. It's recommended to use SAP backup
compression as a default. More details can be found in the articles:

SAP support note #1588316


SAP support note #1801984
SAP support note #1585981

Don't use drive D:\ or /temp space as database or log dump destination.

Impact of database compression

In configurations where I/O bandwidth can become a limiting factor, measures that
reduce IOPS might help to stretch the workload that you can run in an IaaS scenario like
Azure. Therefore, it's recommended to make sure that SAP ASE compression is used
before uploading an existing SAP database to Azure.

The recommendation to apply compression before uploading to Azure is given for
several reasons:

The amount of data to be uploaded to Azure is lower

The duration of the compression execution is shorter, assuming that one can use
stronger hardware with more CPUs, higher I/O bandwidth, or less I/O latency on-
premises

Smaller database sizes might lead to less cost for disk allocation

Data and LOB compression work in a VM hosted in Azure Virtual Machines as they do
on-premises. For more details on how to check whether compression is already in use in an
existing SAP ASE database, check SAP support note 1750510 . For more details on SAP
ASE database compression, check SAP support note #2121797

High availability of SAP ASE on Azure


The HADR Users Guide details the setup and configuration of a two-node SAP ASE
“Always-on” solution. In addition, a third disaster recovery node is also supported. SAP
ASE supports many High Availability configurations including shared disk and native
OS clustering (such as Pacemaker and Windows Server Failover Cluster). There are two
supported High Availability configurations for SAP ASE on Azure:

HA Aware with Fault Manager - The SAP Kernel is an "HA Aware" application and
knows about the primary and secondary SAP ASE servers. There's no close
integration between the SAP ASE "HA Aware" solution and Azure; the Azure
internal load balancer isn't used. The solution is documented in the SAP ASE HADR
Users Guide
Floating IP with Fault Manager – This solution can be used for SAP Business Suite
and non-SAP Business Suite applications. This solution utilizes the Azure ILB and
the SAP ASE database engine provides a Probe Port. The Fault Manager will call
SAPHostAgent to start or stop a secondary Floating IP on the ASE hosts. This
solution is documented in SAP note #3086679 - SYB: Fault Manager: floating IP
address on Microsoft Azure

Note
The failover times and other characteristics of either the HA Aware or Floating IP
solution are similar. When deciding between these two solutions, customers should
perform their own testing and evaluation, including factors such as planned and
unplanned failover times and other operational procedures.

Third node for disaster recovery


Beyond using SAP ASE Always-On for local high availability, you might want to extend
the configuration to an asynchronously replicated node in another Azure region. For
more information, see Installation Procedure for Sybase 16.3 Patch Level 3 Always-on +
DR on Suse 12.3 .

SAP ASE database encryption & SSL


SAP Software Provisioning Manager (SWPM) provides an option to encrypt the database
during installation. If you want to use encryption, it's recommended to use SAP Full
Database Encryption. See the details documented in:

SAP support note #2556658


SAP support note #2224138
SAP support note #2401066
SAP support note #2593925

Note

If an SAP ASE database is encrypted, then backup dump compression doesn't work.
See also SAP support note #2680905

SAP ASE on Azure deployment checklist


Deploy SAP ASE 16.0.03.07 or higher
Update to latest version and patches of FaultManager and SAPHostAgent
Deploy on latest certified OS available such as Windows 2019, SLES 15 or RHEL 8
Use SAP Certified VMs – high memory Azure VM SKUs such as Es_v3 or for x-large
systems M-Series VM SKUs are recommended
Match the disk IOPS and total VM aggregate throughput quota of the VM with the
disk design. Deploy sufficient number of disks
Aggregate disks using Windows Storage Spaces or Linux LVM2 with correct stripe
size and file system
Create sufficient number of devices for data, log, temp, and backup purposes
Consider using UltraDisk for x-large systems
Run saptune with profile SAP-ASE on SLES. Tune RHEL VMs per 69988 .
Secure the database with DB Encryption – manually store keys in Azure Key Vault
Complete the SAP on Azure Checklist
Configure log backup and full backup
Test HA/DR, backup and restore and perform stress & volume test
Confirm Automatic Database Expansion is working

Using DBACockpit to monitor database instances

For SAP systems that use SAP ASE as the database platform, the DBACockpit is
accessible as an embedded browser window in transaction DBACockpit or as Web Dynpro.
However, the full functionality for monitoring and administering the database is
available only in the Web Dynpro implementation of the DBACockpit.

As with on-premises systems, several steps are required to enable all the SAP NetWeaver
functionality used by the Web Dynpro implementation of the DBACockpit. Follow SAP
support note #1245200 to enable the usage of Web Dynpros and generate the
required ones. When following the instructions in the above note, you also configure
the Internet Communication Manager ( ICM ) along with the ports to be used for http and
https connections. The default setting for http looks like:

icm/server_port_0 = PROT=HTTP,PORT=8000,PROCTIMEOUT=600,TIMEOUT=600

icm/server_port_1 = PROT=HTTPS,PORT=443$$,PROCTIMEOUT=600,TIMEOUT=600

and the links generated in transaction DBACockpit look similar to:

https://<fullyqualifiedhostname>:44300/sap/bc/webdynpro/sap/dba_cockpit

http://<fullyqualifiedhostname>:8000/sap/bc/webdynpro/sap/dba_cockpit

Depending on how the Azure Virtual Machine hosting the SAP system is connected to
your AD and DNS, you need to make sure that ICM is using a fully qualified hostname
that can be resolved on the machine where you're opening the DBACockpit from. See
SAP support note #773830 to understand how ICM determines the fully qualified host
name based on profile parameters and set parameter icm/host_name_full explicitly if
necessary.
If you deployed the VM in a Cloud-Only scenario without cross-premises connectivity
between on-premises and Azure, you need to define a public IP address and a
domainlabel . The format of the public DNS name of the VM looks like:

<custom domainlabel >. <azure region >.cloudapp.azure.com

When you set the SAP profile parameter icm/host_name_full to the DNS name of the Azure VM,
the link might look similar to:
https://mydomainlabel.westeurope.cloudapp.net:44300/sap/bc/webdynpro/sap/dba
_cockpit

http://mydomainlabel.westeurope.cloudapp.net:8000/sap/bc/webdynpro/sap/dba_c
ockpit

In this case, you need to make sure to do the following (an Azure CLI sketch for the NSG rule follows the list):

Add Inbound rules to the Network Security Group in the Azure portal for the
TCP/IP ports used to communicate with ICM
Add Inbound rules to the Windows Firewall configuration for the TCP/IP ports used
to communicate with the ICM
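For the Network Security Group rule, a minimal Azure CLI sketch could look like the following; the resource
group, NSG name, rule name, and priority are placeholder assumptions, and the ports match the ICM defaults
shown above:

az network nsg rule create \
  --resource-group rg-sap-ase \
  --nsg-name ase-vm-nsg \
  --name allow-icm-http-https \
  --priority 110 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 8000 44300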

For an automated import of all available corrections, it's recommended to periodically
apply the correction collection SAP Note applicable to your SAP version:

SAP support note #1558958


SAP support note #1619967
SAP support note #1882376

Further information about DBA Cockpit for SAP ASE can be found in the following SAP
Notes:

SAP support note #1605680


SAP support note #1757924
SAP support note #1757928
SAP support note #1758182
SAP support note #1758496
SAP support note #1814258
SAP support note #1922555
SAP support note #1956005

Useful links, notes & whitepapers for SAP ASE

The starting page for SAP ASE 16.0.03.07 Documentation gives links to various
documents, of which these are helpful:

SAP ASE Learning Journey - Administration & Monitoring

SAP ASE Learning Journey - Installation & Upgrade

Another useful document is SAP Applications on SAP Adaptive Server
Enterprise Best Practices for Migration and Runtime .

Other helpful SAP support notes are:

SAP support note #2134316


SAP support note #1748888
SAP support note #2588660
SAP support note #1680803
SAP support note #1724091
SAP support note #1775764
SAP support note #2162183
SAP support note #1928533
SAP support note #2015553
SAP support note #1750510
SAP support note #1752266
SAP support note #2162183
SAP support note #1588316

Other information is published on

SAP Applications on SAP Adaptive Server Enterprise


SAP ASE infocenter
SAP ASE Always-on with 3rd DR Node Setup

A monthly newsletter is published through SAP support note #2381575

Next steps
Check the article SAP workloads on Azure: planning and deployment checklist
SAP MaxDB, liveCache, and Content
Server deployment on Azure VMs
Article • 02/10/2023

This document covers several different areas to consider when deploying MaxDB,
liveCache, and Content Server in Azure IaaS. As a precondition to this document, you
should have read the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.

Specifics for the SAP MaxDB deployments on Windows

SAP MaxDB Version Support on Azure


SAP currently supports SAP MaxDB version 7.9 or higher for use with SAP NetWeaver-
based products in Azure. All updates for SAP MaxDB server, or JDBC and ODBC drivers
to be used with SAP NetWeaver-based products are provided solely through the SAP
Service Marketplace . For more information about running SAP NetWeaver on SAP
MaxDB, see SAP MaxDB .

Supported Microsoft Windows versions and Azure VM types for SAP MaxDB DBMS
To find the supported Microsoft Windows version for SAP MaxDB DBMS on Azure, see:

SAP Product Availability Matrix (PAM)


SAP Note 1928533

It is highly recommended to use the newest version of the operating system Microsoft
Windows, which is Microsoft Windows 2016.

Available SAP MaxDB documentation

You can find the updated list of SAP MaxDB documentation in SAP Note
767598 .
SAP MaxDB Configuration Guidelines for SAP
Installations in Azure VMs

Storage configuration
Azure storage best practices for SAP MaxDB follow the general recommendations
mentioned in chapter Storage structure of a VM for RDBMS Deployments.

) Important

Like other databases, SAP MaxDB also has data and log files. However, in SAP
MaxDB terminology the correct term is "volume" (not "file"). For example, there are
SAP MaxDB data volumes and log volumes. Do not confuse these with OS disk
volumes.

In short you have to:

If you use Azure Storage accounts, set the Azure storage account that holds the
SAP MaxDB data and log volumes (data and log files) to Local Redundant Storage
(LRS) as specified in Considerations for Azure Virtual Machines DBMS deployment
for SAP workload.
Separate the IO path for SAP MaxDB data volumes (data files) from the IO path for
log volumes (log files). It means that SAP MaxDB data volumes (data files) have to
be installed on one logical drive and SAP MaxDB log volumes (log files) have to be
installed on another logical drive.
Set the proper caching type for each disk, depending on whether you use it for
SAP MaxDB data or log volumes (data and log files), and whether you use Azure
Standard or Azure Premium Storage, as described in Considerations for Azure
Virtual Machines DBMS deployment for SAP workload.
As long as the current IOPS quota per disk satisfies the requirements, it is possible
to store all the data volumes on a single mounted disk, and also store all database
log volumes on another single mounted disk.
If more IOPS and/or space are required, it is recommended to use Microsoft
Windows Storage Pools (only available in Microsoft Windows Server 2012 and
higher) to create one large logical device over multiple mounted disks. For more
details, see also Considerations for Azure Virtual Machines DBMS deployment for
SAP workload. This approach simplifies the administration overhead to manage the
disk space and avoids the effort of manually distributing files across multiple
mounted disks.
It is highly recommended to use Azure Premium Storage for MaxDB deployments.
Backup and Restore

When deploying SAP MaxDB into Azure, you must review your backup methodology.
Even if the system isn't a productive system, the SAP database hosted by SAP MaxDB
must be backed up periodically. Since Azure Storage keeps three images, a backup is
now less important in terms of protecting your system against storage failure, and more
important for protecting against operational or administrative failures. The primary
reason for maintaining a proper backup and restore plan is so that you can compensate for
logical or manual errors by providing point-in-time recovery capabilities. So the goal is to either use
backups to restore the database to a certain point in time, or to use the backups in
Azure to seed another system by copying the existing database.

Backing up and restoring a database in Azure works the same way as it does for on-
premises systems, so you can use standard SAP MaxDB backup/restore tools, which are
described in one of the SAP MaxDB documentation documents listed in SAP Note
767598 .

Performance Considerations for Backup and Restore


As in bare-metal deployments, backup and restore performance are dependent on how
many volumes can be read in parallel and the throughput of those volumes. Therefore,
one can assume:
The fewer the number of disks used to store the database devices, the lower the
overall read throughput
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the
throughput

To increase the number of targets to write to, there are two options that you can use,
possibly in combination, depending on your needs; both involve dedicating separate
volumes for backup:

Striping the backup target volume over multiple mounted disks in order to
improve the IOPS throughput on that striped disk volume
Having separate dedicated logical disk devices for:
  SAP MaxDB backup volumes (that is, files)
  SAP MaxDB data volumes (that is, files)
  SAP MaxDB log volumes (that is, files)

Striping a volume over multiple mounted disks has been discussed earlier in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload.

Other considerations
All other general areas, such as Azure availability sets or SAP monitoring, also apply to
deployments of VMs with the SAP MaxDB database, as described in Considerations for
Azure Virtual Machines DBMS deployment for SAP workload. Other SAP MaxDB-
specific settings are transparent to Azure VMs and are described in different documents
listed in SAP Note 767598 and in these SAP Notes:

826037
1139904
1173395

Specifics for SAP liveCache deployments on Windows

SAP liveCache Version Support


Minimal version of SAP liveCache supported in Azure Virtual Machines is SAP
LC/LCAPPS 10.0 SP 25 including liveCache 7.9.08.31 and LCA-Build 25, released for EhP
2 for SAP SCM 7.0 and later releases.
Supported Microsoft Windows Versions and Azure VM
types for SAP liveCache DBMS
To find the supported Microsoft Windows version for SAP liveCache on Azure, see:

SAP Product Availability Matrix (PAM)


SAP Note 1928533

It is highly recommended to use the newest version of the operating system Microsoft
Windows Server.

SAP liveCache Configuration Guidelines for SAP Installations in Azure VMs

Recommended Azure VM Types for liveCache

As SAP liveCache is an application that performs huge calculations, the amount and
speed of RAM and CPU has a major influence on SAP liveCache performance.

For the Azure VM types supported by SAP (SAP Note 1928533 ), all virtual CPU
resources allocated to the VM are backed by dedicated physical CPU resources of the
hypervisor. No overprovisioning (and therefore no competition for CPU resources) takes
place.

Similarly, for all Azure VM instance types supported by SAP, the VM memory is 100%
mapped to the physical memory - over-provisioning (over-commitment), for example, is
not used.

From this perspective, it is highly recommended to use the most recent Dv2, Dv3, Ev3,
and M-series VMs. The choice of the different VM types depends on the memory you
need for liveCache and the CPU resources you need. As with all other DBMS
deployments it is advisable to leverage Azure Premium Storage for performance critical
volumes.

Storage Configuration for liveCache in Azure

As SAP liveCache is based on SAP MaxDB technology, all the Azure storage best practice
recommendations mentioned for SAP MaxDB described in this document are also valid
for SAP liveCache.

Dedicated Azure VM for liveCache scenario


As SAP liveCache intensively uses computational power, for productive usage it is highly
recommended to deploy on a dedicated Azure Virtual Machine.

Backup and Restore for liveCache in Azure

Backup and restore, including performance considerations, are already described in the
relevant SAP MaxDB chapters of this document.

Other considerations
All other general areas are already described in the relevant SAP MaxDB chapter.

Specifics for the SAP Content Server deployment on Windows in Azure
The SAP Content Server is a separate, server-based component that stores content such as
electronic documents in different formats. The SAP Content Server is provided as its own
technology component and is to be used cross-application for any SAP application.
It is installed on a separate system. Typical content is training material and
documentation from Knowledge Warehouse, or technical drawings originating from the
mySAP PLM Document Management System.

SAP Content Server Version Support for Azure VMs


SAP currently supports:
SAP Content Server with version 6.50 (and higher)
SAP MaxDB version 7.9
Microsoft IIS (Internet Information Server) version 8.0 (and higher)

It is highly recommended to use the newest version of SAP Content Server, and the
newest version of Microsoft IIS.

Check the latest supported versions of SAP Content Server and Microsoft IIS in the SAP
Product Availability Matrix (PAM) .

Supported Microsoft Windows and Azure VM types for SAP Content Server

To find out the supported Windows version for SAP Content Server on Azure, see:

SAP Product Availability Matrix (PAM)


SAP Note 1928533

It is highly recommended to use the newest version of Microsoft Windows Server.

SAP Content Server Configuration Guidelines for SAP Installations in Azure VMs

Storage Configuration for Content Server in Azure


If you configure SAP Content Server to store files in the SAP MaxDB database, all Azure
storage best practices recommendation mentioned for SAP MaxDB in this document are
also valid for the SAP Content Server scenario.

If you configure SAP Content Server to store files in the file system, it is recommended
to use a dedicated logical drive. Using Windows Storage Spaces enables you to also
increase logical disk size and IOPS throughput, as described in Considerations for Azure
Virtual Machines DBMS deployment for SAP workload.

SAP Content Server Location


SAP Content Server has to be deployed in the same Azure region and Azure VNET
where the SAP system is deployed. You are free to decide whether you want to deploy
SAP Content Server components on a dedicated Azure VM or on the same VM where
the SAP system is running.
SAP Cache Server Location
The SAP Cache Server is an additional server-based component to provide access to
(cached) documents locally. The SAP Cache Server caches the documents of an SAP
Content Server. This is to optimize network traffic if documents have to be retrieved
more than once from different locations. The general rule is that the SAP Cache Server
has to be physically close to the client that accesses the SAP Cache Server.

Here you have two options:

1. Client is a backend SAP system: If a backend SAP system is configured to access
SAP Content Server, that SAP system is a client. As both the SAP system and SAP
Content Server are deployed in the same Azure region, in the same Azure
datacenter, they are physically close to each other. Therefore, there is no need to
have a dedicated SAP Cache Server. SAP UI clients (SAP GUI or web browser)
access the SAP system directly, and the SAP system retrieves documents from the
SAP Content Server.
2. Client is an on-premises web browser: The SAP Content Server can be configured
to be accessed directly by the web browser. In this case, a web browser running
on-premises is a client of the SAP Content Server. On-premises datacenter and
Azure datacenter are placed in different physical locations (ideally close to each
other). Your on-premises datacenter is connected to Azure via Azure Site-to-Site
VPN or ExpressRoute. Although both options offer secure VPN network connection
to Azure, site-to-site network connection does not offer a network bandwidth and
latency SLA between the on-premises datacenter and the Azure datacenter. To
speed up access to documents, you can do one of the following:
a. Install SAP Cache Server on-premises, close to the on-premises web browser
(option in figure below)
b. Configure Azure ExpressRoute, which offers a high-speed and low-latency
dedicated network connection between on-premises datacenter and Azure
datacenter.

Backup / Restore

If you configure the SAP Content Server to store files in the SAP MaxDB database, the
backup/restore procedure and performance considerations are already described in SAP
MaxDB chapters of this document.

If you configure the SAP Content Server to store files in the file system, one option is to
execute manual backup/restore of the whole file structure where the documents are
located. Similar to SAP MaxDB backup/restore, it is recommended to have a dedicated
disk volume for backup purpose.

Other

Other SAP Content Server-specific settings are transparent to Azure VMs and are
described in various documents and SAP Notes:

SAP NetWeaver
SAP Note 1619726
SAP BW NLS implementation guide with
SAP IQ on Azure
Article • 06/19/2023

Over the years, customers running the SAP Business Warehouse (BW) system see
exponential growth in database size, which increases compute cost. To achieve the right
balance of cost and performance, customers can use near-line storage (NLS) to migrate
historical data.

The NLS implementation based on SAP IQ is the standard method by SAP to move
historical data from a primary database (SAP HANA or AnyDB). The integration of SAP
IQ makes it possible to separate frequently accessed data from infrequently accessed
data, which reduces resource demand in the SAP BW system.

This guide provides guidelines for planning, deploying, and configuring SAP BW NLS
with SAP IQ on Azure. This guide covers common Azure services and features that are
relevant for SAP IQ NLS deployment and doesn't cover any NLS partner solutions.

This guide doesn't replace SAP's standard documentation on NLS deployment with SAP
IQ. Instead, it complements the official installation and administration documentation.

Solution overview
In an operative SAP BW system, the volume of data increases constantly because of
business and legal requirements. The large volume of data can affect the performance
of the system and increase the administration effort, which results in the need to
implement a data-aging strategy.

If you want to keep the amount of data in your SAP BW system without deleting, you
can use data archiving. The data is first moved to archive or near-line storage and then
deleted from the SAP BW system. You can either access the data directly or load it back
as required, depending on how the data has been archived.

SAP BW users can use SAP IQ as a near-line storage solution. The adapter for SAP IQ as
a near-line solution is delivered with the SAP BW system. With NLS implemented,
frequently used data is stored in an SAP BW online database (SAP HANA or AnyDB).
Infrequently accessed data is stored in SAP IQ, which reduces the cost to manage data
and improves the performance of the SAP BW system. To ensure consistency between
online data and near-line data, the archived partitions are locked and are read-only.
SAP IQ supports two types of architecture: simplex and multiplex. In a simplex
architecture, a single instance of an SAP IQ server runs on a single virtual machine. Files
might be located on a host machine or on a network storage device.

Important

For the SAP NLS solution, only simplex architecture is available and evaluated by
SAP.

In Azure, the SAP IQ server must be implemented on a separate virtual machine (VM).
We don't recommend installing SAP IQ software on an existing server that already has
other database instances running, because SAP IQ uses complete CPU and memory for
its own usage. One SAP IQ server can be used for multiple SAP NLS implementations.

Support matrix
The support matrix for an SAP IQ NLS solution includes:

Operating system: SAP IQ is certified at the operating system level only. You can
run an SAP IQ certified operating system in an Azure environment as long as it's
compatible to run on Azure infrastructure. For more information, see SAP note
2133194 .
SAP BW compatibility: Near-line storage for SAP IQ is released only for SAP BW
systems that already run under Unicode. SAP note 1796393 contains information
about SAP BW.

Storage: In Azure, SAP IQ supports premium managed disks (Windows and Linux),
Azure shared disks (Windows only), and Azure NetApp Files (Linux only).

For more up-to-date information based on your SAP IQ release, see the Product
Availability Matrix .

Sizing
Sizing of SAP IQ is confined to CPU, memory, and storage. You can find general sizing
guidelines for SAP IQ on Azure in SAP note 1951789 . The sizing recommendation that
you get by following the guidelines needs to be mapped to certified Azure virtual
machine types for SAP. SAP note 1928533 provides the list of supported SAP products
and Azure VM types.

The SAP IQ sizing guide and sizing worksheet mentioned in SAP note 1951789 were
developed for the native usage of an SAP IQ database. Because they don't reflect the
resources needed when planning an SAP IQ database as near-line storage, you might end up
with unused resources for SAP NLS.

Azure resources

Regions
If you're already running your SAP systems on Azure, you've probably identified your
region. SAP IQ deployment must be in the same region as your SAP BW system for
which you're implementing the NLS solution.

To determine the architecture of SAP IQ, you need to ensure that the services required
by SAP IQ, like Azure NetApp Files (NFS for Linux only), are available in that region. To
check the service availability in your region, see the Products available by region
webpage.

Deployment options
To achieve redundancy of SAP systems in an Azure infrastructure, your application needs
to be deployed in either a flexible scale set, availability zones, or availability sets. Although
you can achieve SAP IQ high availability by using the SAP IQ multiplex architecture, the
multiplex architecture doesn't meet the requirements of the NLS solution.

To achieve high availability for the SAP IQ simplex architecture, you need to configure a
two-node cluster with a custom solution. The two-node SAP IQ cluster can be deployed
in a flexible scale set with FD=1, in availability zones, or in availability sets. However, we
advise configuring zone-redundant storage when setting up a highly available solution
across availability zones.

Virtual machines
Based on SAP IQ sizing, you need to map your requirements to Azure virtual machines.
This approach is supported in Azure for SAP products. SAP note 1928533 is a good
starting point that lists supported Azure VM types for SAP products on Windows and
Linux.

Beyond the selection of only supported VM types, you also need to check whether those
VM types are available in specific regions. You can check the availability of VM types on
the Products available by region webpage. To choose the pricing model, see Azure
virtual machines for SAP workload.

Tip

For production systems, we recommend that you use E-series virtual machines
because of their core-to-memory ratio.

Storage
Azure Storage has various storage types available for customers. You can find details
about them in the article What disk types are available in Azure?.

Some of the storage types in Azure have limited use for SAP scenarios, but other types
are well suited or optimized for specific SAP workload scenarios. For more information,
see the Azure Storage types for SAP workload guide. It highlights the storage options
that are suited for SAP.

For SAP IQ on Azure, you can use the following Azure storage types. The choice
depends on your operating system (Windows or Linux) and deployment method
(standalone or highly available).

Azure managed disks


A managed disk is a block-level storage volume that Azure manages. You can use
managed disks for SAP IQ simplex deployment. Various types of managed disks
are available, but we recommend that you use premium SSDs for SAP IQ.

Azure shared disks

Shared disks are a new feature for Azure managed disks that allow you to attach a
managed disk to multiple VMs simultaneously. Shared managed disks don't
natively offer a fully managed file system that can be accessed through SMB or
NFS. You need to use a cluster manager like a Windows Server failover cluster
(WSFC), which handles cluster node communication and write locking.

To deploy a highly available solution for an SAP IQ simplex architecture on


Windows, you can use Azure shared disks between two nodes that WSFC manages.
An SAP IQ deployment architecture with Azure shared disks is discussed in the
article Deploy SAP IQ NLS HA solution using Azure shared disk on Windows
Server .

Azure NetApp Files

SAP IQ deployment on Linux can use Azure NetApp Files as a file system (NFS
protocol) to install a standalone or a highly available solution. This storage offering
isn't available in all regions. For up-to-date information, see the Products available
by region webpage. SAP IQ deployment architecture with Azure NetApp Files is
discussed in the article Deploy SAP IQ-NLS HA solution using Azure NetApp Files
on SUSE Linux Enterprise Server .

The following table lists the recommendations for each storage type based on the
operating system:

Storage type          Windows   Linux
-------------------   -------   -----
Azure managed disks   Yes       Yes
Azure shared disks    Yes       No
Azure NetApp Files    No        Yes

Networking
Azure provides a network infrastructure that supports all scenarios that you might
implement for an SAP BW system that uses SAP IQ as near-line storage. These scenarios
include connecting to on-premises systems, connecting to systems in different virtual
networks, and others. For more information, see Microsoft Azure networking for SAP
workloads .

Deploy SAP IQ on Windows

Windows server preparation and installation


To prepare servers for NLS implementation with SAP IQ on Windows, you can get
the most up-to-date information in SAP note 2780668 - SAP First Guidance - BW
NLS Implementation with SAP IQ . It has comprehensive information about
prerequisites for SAP BW systems, SAP IQ file-system layout, installation, post-
configuration tasks, and SAP BW NLS integration with SAP IQ.

High-availability deployment on Windows


SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution,
only simplex server architecture is available and evaluated. Simplex is a single
instance of an SAP IQ server running on a single virtual machine.

Technically, you can achieve SAP IQ high availability by using a multiplex server
architecture, but the multiplex architecture doesn't meet the requirements of the
NLS solution. For simplex server architecture, SAP doesn't provide any features or
procedures to run SAP IQ in a high-availability configuration.

To set up SAP IQ high availability on Windows for simplex server architecture, you
need to set up a custom solution that requires extra configuration, like a Windows
Server failover cluster and shared disks. One such custom solution for SAP IQ on
Windows is described in detail in Deploy SAP IQ NLS HA solution using Azure
shared disk on Windows Server .

Backup and restore for a system deployed on Windows


In Azure, you can schedule SAP IQ database backup as described in SAP IQ
Administration: Backup, Restore, and Data Recovery . SAP IQ provides the
following types of database backups. You can find details about each backup type
in Backup Scenarios .

Full backup: It makes a complete copy of the database.
Incremental backup: It copies all transactions since the last backup of any type.
Incremental since full backup: It backs up all changes to the database since the last full backup.
Virtual backup: It copies all of the database except the table data and metadata from the SAP IQ store.

Depending on your SAP IQ database size, you can schedule your database backup
from any of the backup scenarios. But if you're using SAP IQ with the NLS interface
delivered by SAP, you might want to automate the backup process for an SAP IQ
database. Automation ensures that the SAP IQ database can always be recovered to
a consistent state without loss of data that's moved between the primary database
and the SAP IQ database. For details on setting up automation for SAP IQ near-line
storage, see SAP note 2741824 - How to setup backup automation for SAP IQ Cold
Store/Near-line Storage .
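As a minimal illustration of such scheduling (not the SAP-delivered automation from the note above), a cron entry could trigger a regular full backup through the SAP IQ Interactive SQL client. The connection string, engine name, and target path below are assumptions that you must replace with your own values:

Bash

# Hypothetical cron entry: full backup every Sunday at 02:00 via dbisql.
# User, password, engine name, and backup path are placeholders for illustration only.
0 2 * * 0 dbisql -nogui -c "uid=DBA;pwd=<passwd>;eng=<iq-server>" "BACKUP DATABASE FULL TO '/backup/iq/weekly_full'"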

For a large SAP IQ database, you can use virtual backups. For more information, see
Virtual Backups and Introduction Virtual Backup in SAP Sybase IQ . Also see SAP
note 2461985 - How to Backup Large SAP IQ Database .

If you're using a network drive (SMB protocol) to back up and restore an SAP IQ
server on Windows, be sure to use the UNC path for backup. Three backslashes
( \\\ ) are required when you're using a UNC path for backup and restore:

SQL

BACKUP DATABASE FULL TO '\\\sapiq.internal.contoso.net\sapiq-backup\backup\data\<filename>'

Disaster recovery
This section explains the strategy for providing disaster recovery (DR) protection for the
SAP IQ NLS solution. It complements the Set up disaster recovery for SAP article, which
is the primary resource for an overall SAP DR approach. The process described
in that article is presented at an abstract level. You need to validate the exact steps and
thoroughly test your DR strategy.

For SAP IQ, see SAP note 2566083 , which describes methods to implement a DR
environment safely. In Azure, you can also use Azure Site Recovery for an SAP IQ DR
strategy. The strategy for SAP IQ DR depends on the way it's deployed in Azure, and it
should also be in line with your SAP BW system.

Standalone deployment of SAP IQ


If you've installed SAP IQ as a standalone system without application-level
redundancy or high availability, but the business requires a DR setup, all the disks
(Azure managed disks) attached to the virtual machine are local.

You can use Azure Site Recovery to replicate a standalone SAP IQ virtual machine in the
secondary region. It replicates the servers and all the attached managed disks to the
secondary region so that if a disaster or an outage occurs, you can easily fail over to
your replicated environment and continue working. To start replicating the SAP IQ VMs
to the Azure DR region, follow the guidance in Replicate a virtual machine to Azure.

Highly available deployment of SAP IQ


If you've installed SAP IQ as a highly available system where SAP IQ binaries and
database files are on an Azure shared disk (Windows only) or on a network drive like
Azure NetApp Files (Linux only), you need to identify:

Whether you need the same highly available SAP IQ system on the DR site.
Whether a standalone SAP IQ instance will suffice for your business requirements.

If you need a standalone SAP IQ instance on a DR site, you can use Azure Site Recovery
to replicate a primary SAP IQ virtual machine in the secondary region. It replicates the
servers and all the local attached managed disks to the secondary region, but it won't
replicate an Azure shared disk or a network drive like Azure NetApp Files.

To copy data from an Azure shared disk or a network drive, you can use any file-based
copy tool to replicate data between Azure regions. For more information on how to
copy an Azure NetApp Files volume to another region, see FAQs about Azure NetApp
Files.
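As a sketch, a scheduled one-way copy with rsync over SSH could look like the following. The mount path and target hostname are purely illustrative:

Bash

# Illustrative one-way copy of an Azure NetApp Files volume mount to a VM in the DR region.
# Run from a VM that has the source volume mounted; adjust the paths and hostname.
rsync -av --delete /mnt/anf-sapiq/ dr-iq-vm.contoso.net:/mnt/anf-sapiq/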

Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Set up Pacemaker on Red Hat Enterprise
Linux in Azure
Article • 10/12/2023

This article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise
Linux (RHEL). The instructions cover RHEL 7, RHEL 8, and RHEL 9.

Prerequisites
Read the following SAP Notes and papers first:

SAP Note 1928533 , which has:


A list of Azure virtual machine (VM) sizes that are supported for the deployment
of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software and operating system (OS) and database
combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 recommends OS settings for Red Hat Enterprise Linux.
SAP Note 3108316 recommends OS settings for Red Hat Enterprise Linux 9.x.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 3108302 has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in Pacemaker cluster
General RHEL documentation:
High Availability (HA) Add-On Overview
High-Availability Add-On Administration
High-Availability Add-On Reference
Support Policies for RHEL High-Availability Clusters - sbd and fence_sbd
Azure-specific RHEL documentation:
Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
Considerations in Adopting RHEL 8 - High Availability and Clusters
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL 7.6
RHEL for SAP Offerings on Azure

Cluster installation

7 Note

Red Hat doesn't support a software-emulated watchdog or SBD on cloud platforms.
For more information, see Support Policies for RHEL High-Availability Clusters -
sbd and fence_sbd .

The only supported fencing mechanism for Pacemaker RHEL clusters on Azure is an
Azure fence agent.

The following items are prefixed with:


[A]: Applicable to all nodes
[1]: Only applicable to node 1
[2]: Only applicable to node 2

Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9
are marked in the document.

1. [A] Register. This step is optional. If you're using RHEL SAP HA-enabled images,
this step isn't required.

For example, if you're deploying on RHEL 7, register your VM and attach it to a
pool that contains repositories for RHEL 7.

Bash

sudo subscription-manager register


# List the available pools
sudo subscription-manager list --available --matches '*SAP*'
sudo subscription-manager attach --pool=<pool id>

When you attach a pool to an Azure Marketplace pay-as-you-go RHEL image,
you're effectively double billed for your RHEL usage. You're billed once for the pay-
as-you-go image and once for the RHEL entitlement in the pool you attach. To
mitigate this situation, Azure now provides bring-your-own-subscription RHEL
images. For more information, see Red Hat Enterprise Linux bring-your-own-
subscription Azure images.

2. [A] Enable RHEL for SAP repos. This step is optional. If you're using RHEL SAP HA-
enabled images, this step isn't required.

To install the required packages on RHEL 7, enable the following repositories:

Bash

sudo subscription-manager repos --disable "*"


sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-sap-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-eus-rpms

3. [A] Install the RHEL HA add-on.

Bash
sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat

) Important

We recommend the following versions of the Azure fence agent (or later) for
customers to benefit from a faster failover time, if a resource stop fails or the
cluster nodes can't communicate with each other anymore:

RHEL 7.7 or higher: Use the latest available version of the fence-agents package.

RHEL 7.6: fence-agents-4.2.1-11.el7_6.8

RHEL 7.5: fence-agents-4.0.11-86.el7_5.8

RHEL 7.4: fence-agents-4.0.11-66.el7_4.12

For more information, see Azure VM running as a RHEL High-Availability
cluster member takes a very long time to be fenced, or fencing fails/times
out before the VM shuts down .

) Important

We recommend the following versions of the Azure fence agent (or later) for
customers who want to use managed identities for Azure resources instead of
service principal names for the fence agent:

RHEL 8.4: fence-agents-4.2.1-54.el8.

RHEL 8.2: fence-agents-4.2.1-41.el8_2.4

RHEL 8.1: fence-agents-4.2.1-30.el8_1.4

RHEL 7.9: fence-agents-4.2.1-41.el7_9.4.

) Important

On RHEL 9, we recommend the following package versions (or later) to avoid
issues with the Azure fence agent:

fence-agents-4.10.0-20.el9_0.7

fence-agents-common-4.10.0-20.el9_0.6
ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm

Check the version of the Azure fence agent. If necessary, update it to the minimum
required version or later.

Bash

# Check the version of the Azure Fence Agent


sudo yum info fence-agents-azure-arm

) Important

If you need to update the Azure fence agent, and if you're using a custom
role, make sure to update the custom role to include the action powerOff. For
more information, see Create a custom role for the fence agent.

4. [A] If you're deploying on RHEL 9, also install the resource agents for cloud
deployment.

Bash

sudo yum install -y resource-agents-cloud

5. [A] Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.

) Important

If you're using hostnames in the cluster configuration, it's vital to have reliable
hostname resolution. The cluster communication fails if the names aren't
available, which can lead to cluster failover delays.

The benefit of using /etc/hosts is that your cluster becomes independent of
DNS, which could be a single point of failure too.

Bash

sudo vi /etc/hosts
Insert the following lines into /etc/hosts . Change the IP address and hostname to
match your environment.

text

# IP address of the first cluster node


10.0.0.6 prod-cl1-0
# IP address of the second cluster node
10.0.0.7 prod-cl1-1

6. [A] Change the password of the hacluster user to the same password on all nodes.

Bash

sudo passwd hacluster

7. [A] Add firewall rules for Pacemaker.

Add the following firewall rules to allow cluster communication between the cluster
nodes.

Bash

sudo firewall-cmd --add-service=high-availability --permanent


sudo firewall-cmd --add-service=high-availability

8. [A] Enable basic cluster services.

Run the following commands to enable the Pacemaker service and start it.

Bash

sudo systemctl start pcsd.service


sudo systemctl enable pcsd.service

9. [1] Create a Pacemaker cluster.

Run the following commands to authenticate the nodes and create the cluster. Set
the token to 30000 to allow memory preserving maintenance. For more
information, see this article for Linux.

If you're building a cluster on RHEL 7.x, use the following commands:

Bash
sudo pcs cluster auth prod-cl1-0 prod-cl1-1 -u hacluster
sudo pcs cluster setup --name nw1-azr prod-cl1-0 prod-cl1-1 --token 30000
sudo pcs cluster start --all

If you're building a cluster on RHEL 8.x/RHEL 9.x, use the following commands:

Bash

sudo pcs host auth prod-cl1-0 prod-cl1-1 -u hacluster


sudo pcs cluster setup nw1-azr prod-cl1-0 prod-cl1-1 totem token=30000
sudo pcs cluster start --all

Verify the cluster status by running the following command:

Bash

# Run the following command until the status of both nodes is online
sudo pcs status

# Cluster name: nw1-azr


# WARNING: no stonith devices and stonith-enabled is not false
# Stack: corosync
# Current DC: prod-cl1-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) -
partition with quorum
# Last updated: Fri Aug 17 09:18:24 2018
# Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on prod-
cl1-1
#
# 2 nodes configured
# 0 resources configured
#
# Online: [ prod-cl1-0 prod-cl1-1 ]
#
# No resources
#
# Daemon Status:
# corosync: active/disabled
# pacemaker: active/disabled
# pcsd: active/enabled

10. [A] Set expected votes.

Bash

# Check the quorum votes


pcs quorum status
# If the quorum votes are not set to 2, execute the next command
sudo pcs quorum expected-votes 2

 Tip

If you're building a multinode cluster, that is, a cluster with more than two
nodes, don't set the votes to 2.

11. [1] Allow concurrent fence actions.

Bash

sudo pcs property set concurrent-fencing=true

Create a fencing device


The fencing device uses either a managed identity for Azure resources or a service
principal to authorize against Azure.

Managed identity

To create a managed identity (MSI), create a system-assigned managed identity for
each VM in the cluster. If a system-assigned managed identity already exists, it's
used. Don't use user-assigned managed identities with Pacemaker at this time. A
fence device, based on managed identity, is supported on RHEL 7.9 and RHEL
8.x/RHEL 9.x.

[1] Create a custom role for the fence agent


By default, neither the managed identity nor the service principal has permissions to
access your Azure resources. You need to give the managed identity or service
principal permissions to start and stop (power off) all VMs of the cluster. If you haven't
already created the custom role, you can create it by using PowerShell or the Azure CLI.

Use the following content for the input file. You need to adapt the content to your
subscriptions: replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and yyyyyyyy-
yyyy-yyyy-yyyy-yyyyyyyyyyyy with the IDs of your subscription. If you only have one
subscription, remove the second entry in AssignableScopes .

JSON
{
  "Name": "Linux Fence Agent Role",
  "description": "Allows to power-off and start virtual machines",
  "assignableScopes": [
    "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
  ],
  "actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "notActions": [],
  "dataActions": [],
  "notDataActions": []
}
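For example, if you save this JSON to a local file, you could create the role with the Azure CLI; the file name is illustrative, and PowerShell's New-AzRoleDefinition accepts a similar input file:

Bash

# Create the custom role from the JSON definition
az role definition create --role-definition @fence-agent-role.json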

[A] Assign the custom role


Use managed identity or service principal.

Managed identity

Assign the custom role Linux Fence Agent Role that was created in the last section
to each managed identity of the cluster VMs. Each VM system-assigned managed
identity needs the role assigned for every cluster VM's resource. For more
information, see Assign a managed identity access to a resource by using the Azure
portal. Verify that each VM's managed identity role assignment contains all the
cluster VMs.
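As a sketch with the Azure CLI, you could look up the principal ID of each VM's system-assigned identity and assign the role at the resource group scope. The resource group and subscription values are placeholders:

Bash

# Get the principal ID of the VM's system-assigned managed identity
principalId=$(az vm show --resource-group <resource-group> --name prod-cl1-0 --query identity.principalId --output tsv)

# Assign the custom role; repeat for each cluster VM's identity
az role assignment create --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "Linux Fence Agent Role" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"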

) Important

Be aware that assignment and removal of authorization with managed
identities can be delayed until effective.

[1] Create the fencing devices


After you edit the permissions for the VMs, you can configure the fencing devices in the
cluster.

Bash
sudo pcs property set stonith-timeout=900

7 Note

The option pcmk_host_map is only required in the command if the RHEL hostnames
and the Azure VM names are not identical. Specify the mapping in the format
hostname:vm-name. For more information, see What format should I use to specify
node mappings to fencing devices in pcmk_host_map? .

Managed identity

For RHEL 7.x, use the following command to configure the fence device:

Bash

sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600

For RHEL 8.x/9.x, use the following command to configure the fence device:

Bash

# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version greater than 2.0.4-6.el8) OR (HANA scale out)
sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
op monitor interval=3600

# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version less than 2.0.4-6.el8)
sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600

If you're using a fencing device based on service principal configuration, read Change
from SPN to MSI for Pacemaker clusters by using Azure fencing and learn how to
convert to managed identity configuration.

 Tip

To avoid fence races within a two-node pacemaker cluster, you can configure
the priority-fencing-delay cluster property. This property introduces
additional delay in fencing a node that has higher total resource priority when
a split-brain scenario occurs. For more information, see Can Pacemaker fence
the cluster node with the fewest running resources? .
The property priority-fencing-delay is applicable for Pacemaker version
2.0.4-6.el8 or higher and on a two-node cluster. If you configure the
priority-fencing-delay cluster property, you don't need to set the
pcmk_delay_max property. But if the Pacemaker version is less than 2.0.4-6.el8,
you need to set the pcmk_delay_max property.


For instructions on how to set the priority-fencing-delay cluster property,
see the respective SAP ASCS/ERS and SAP HANA scale-up HA documents.

The monitoring and fencing operations are deserialized. As a result, if there's a longer-
running monitoring operation and a simultaneous fencing event, there's no delay to the
cluster failover, because the monitoring operation is already running.

[1] Enable the use of a fencing device


Bash

sudo pcs property set stonith-enabled=true

 Tip

The Azure fence agent requires outbound connectivity to public endpoints. For
more information along with possible solutions, see Public endpoint connectivity
for VMs using standard ILB.
Configure Pacemaker for Azure scheduled
events
Azure offers scheduled events. Scheduled events are sent via the metadata service and
give the application time to prepare for events such as a VM reboot or redeployment.

The Pacemaker resource agent azure-events-az monitors for scheduled Azure events. If
events are detected and the resource agent determines that another cluster node is
available, it sets a cluster health attribute.

When the cluster health attribute is set for a node, the location constraint triggers and
all resources with names that don't start with health- are migrated away from the node
with the scheduled event. After the affected cluster node is free of running cluster
resources, the scheduled event is acknowledged and can execute its action, such as a
restart.

1. [A] Make sure that the package for the azure-events-az agent is already installed
and up to date.

Bash

sudo dnf info resource-agents

Minimum version requirements:

RHEL 8.4: resource-agents-4.1.1-90.13


RHEL 8.6: resource-agents-4.9.0-16.9
RHEL 8.8 and newer: resource-agents-4.9.0-40.1
RHEL 9.0 and newer: resource-agents-cloud-4.10.0-34.2
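A quick way to compare the installed package version against these minimums (the package name differs on RHEL 9, as noted above):

Bash

# Show the installed resource-agents version; on RHEL 9 query resource-agents-cloud instead
rpm -q resource-agents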

2. [1] Configure the resources in Pacemaker.

Bash

#Place the cluster in maintenance mode


sudo pcs property set maintenance-mode=true

3. [1] Set the Pacemaker cluster health-node strategy and constraint.

Bash

sudo pcs property set node-health-strategy=custom


sudo pcs constraint location 'regexp%!health-.*' \
rule score-attribute='#health-azure' \
defined '#uname'

) Important

Don't define any other resources in the cluster starting with health- besides
the resources described in the next steps.

4. [1] Set the initial value of the cluster attributes. Run for each cluster node and for
scale-out environments including majority maker VM.

Bash

sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --update 0
sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --update 0

5. [1] Configure the resources in Pacemaker. Make sure the resources start with
health-azure .

Bash

sudo pcs resource create health-azure-events \
ocf:heartbeat:azure-events-az op monitor interval=10s
sudo pcs resource clone health-azure-events allow-unhealthy-nodes=true

6. Take the Pacemaker cluster out of maintenance mode.

Bash

sudo pcs property set maintenance-mode=false

7. Clear any errors during enablement and verify that the health-azure-events
resources have started successfully on all cluster nodes.

Bash

sudo pcs resource cleanup

First-time query execution for scheduled events can take up to two minutes.
Pacemaker testing with scheduled events can use reboot or redeploy actions for
the cluster VMs. For more information, see Scheduled events.
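For example, you could trigger a redeploy of one cluster node with the Azure CLI and watch the resources migrate away from it before the event executes. The resource group name is a placeholder:

Bash

# Trigger a Redeploy scheduled event for one cluster node
az vm redeploy --resource-group <resource-group> --name prod-cl1-0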
Optional fencing configuration

 Tip

This section is only applicable if you want to configure the special fencing device
fence_kdump .

If you need to collect diagnostic information within the VM, it might be useful to
configure another fencing device based on the fence agent fence_kdump . The
fence_kdump agent can detect that a node entered kdump crash recovery and can allow

the crash recovery service to complete before other fencing methods are invoked. Note
that fence_kdump isn't a replacement for traditional fence mechanisms, like the Azure
fence agent, when you're using Azure VMs.

) Important

Be aware that when fence_kdump is configured as a first-level fencing device, it
introduces delays in the fencing operations and, respectively, delays in the
application resources failover.

If a crash dump is successfully detected, the fencing is delayed until the crash
recovery service completes. If the failed node is unreachable or doesn't respond,
the fencing is delayed by the time determined by the configured number of
iterations and the fence_kdump timeout. For more information, see How do I
configure fence_kdump in a Red Hat Pacemaker cluster? .

The proposed fence_kdump timeout might need to be adapted to the specific
environment.

We recommend that you configure fence_kdump fencing only when necessary to
collect diagnostics within the VM, and always in combination with traditional fence
methods, such as the Azure fence agent.

The following Red Hat KB articles contain important information about configuring
fence_kdump fencing:

See How do I configure fence_kdump in a Red Hat Pacemaker cluster? .


See How to configure/manage fencing levels in an RHEL cluster with Pacemaker .
See fence_kdump fails with "timeout after X seconds" in an RHEL 6 or 7 HA cluster
with kexec-tools older than 2.0.14 .
For information on how to change the default timeout, see How do I configure
kdump for use with the RHEL 6, 7, 8 HA Add-On? .
For information on how to reduce failover delay when you use fence_kdump , see
Can I reduce the expected delay of failover when adding fence_kdump
configuration? .

Run the following optional steps to add fence_kdump as a first-level fencing
configuration, in addition to the Azure fence agent configuration.

1. [A] Verify that kdump is active and configured.

Bash

systemctl is-active kdump


# Expected result
# active

2. [A] Install the fence_kdump fence agent.

Bash

yum install fence-agents-kdump

3. [1] Create a fence_kdump fencing device in the cluster.

Bash

pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off"


pcmk_host_list="prod-cl1-0 prod-cl1-1" timeout=30

4. [1] Configure fencing levels so that the fence_kdump fencing mechanism is
engaged first.

Bash

pcs stonith level add 1 prod-cl1-0 rsc_st_kdump
pcs stonith level add 1 prod-cl1-1 rsc_st_kdump
pcs stonith level add 2 prod-cl1-0 rsc_st_azure
pcs stonith level add 2 prod-cl1-1 rsc_st_azure

# Check the fencing level configuration


pcs stonith level
# Example output
# Target: prod-cl1-0
# Level 1 - rsc_st_kdump
# Level 2 - rsc_st_azure
# Target: prod-cl1-1
# Level 1 - rsc_st_kdump
# Level 2 - rsc_st_azure

5. [A] Allow the required ports for fence_kdump through the firewall.

Bash

firewall-cmd --add-port=7410/udp
firewall-cmd --add-port=7410/udp --permanent

6. [A] Ensure that the initramfs image file contains the fence_kdump and hosts files.
For more information, see How do I configure fence_kdump in a Red Hat
Pacemaker cluster? .

Bash

lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts"


# Example output
# -rw-r--r-- 1 root root 208 Jun 7 21:42 etc/hosts
# -rwxr-xr-x 1 root root 15560 Jun 17 14:59
usr/libexec/fence_kdump_send

7. [A] Perform the fence_kdump_nodes configuration in /etc/kdump.conf to avoid
fence_kdump failing with a timeout for some kexec-tools versions. For more
information, see fence_kdump times out when fence_kdump_nodes is not specified
with kexec-tools version 2.0.15 or later and fence_kdump fails with "timeout after
X seconds" in a RHEL 6 or 7 High Availability cluster with kexec-tools versions older
than 2.0.14 . The example configuration for a two-node cluster is presented here.
After you make a change in /etc/kdump.conf , the kdump image must be
regenerated. To regenerate, restart the kdump service.

Bash

vi /etc/kdump.conf
# On node prod-cl1-0 make sure the following line is added
fence_kdump_nodes prod-cl1-1
# On node prod-cl1-1 make sure the following line is added
fence_kdump_nodes prod-cl1-0

# Restart the service on each node


systemctl restart kdump
8. Test the configuration by crashing a node. For more information, see How do I
configure fence_kdump in a Red Hat Pacemaker cluster? .

) Important

If the cluster is already in productive use, plan the test accordingly because
crashing a node has an impact on the application.

Bash

echo c > /proc/sysrq-trigger

Next steps
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High Availability of SAP HANA on Azure Virtual Machines.
High availability of SAP HANA on Azure
VMs on Red Hat Enterprise Linux
Article • 04/08/2024

For on-premises development, you can use either HANA System Replication or shared
storage to establish high availability (HA) for SAP HANA. On Azure Virtual Machines,
HANA System Replication is currently the only supported HA function.

SAP HANA Replication consists of one primary node and at least one secondary node.
Changes to the data on the primary node are replicated to the secondary node
synchronously or asynchronously.

This article describes how to deploy and configure virtual machines (VMs), install the
cluster framework, and install and configure SAP HANA System Replication.

The example configurations and installation commands use instance number 03 and
HANA System ID HN1.

Prerequisites
Read the following SAP Notes and papers first:

SAP Note 1928533 , which has:


The list of Azure VM sizes that are supported for the deployment of SAP
software.
Important capacity information for Azure VM sizes.
The supported SAP software and operating system (OS) and database
combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 3108302 has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA System Replication in a Pacemaker cluster
General RHEL documentation:
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
HANA Scale-Up System Replication with RHEL HA Add-On
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure

Overview
To achieve HA, SAP HANA is installed on two VMs. The data is replicated by using HANA
System Replication.

The SAP HANA System Replication setup uses a dedicated virtual hostname and virtual
IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
presented configuration shows a load balancer with:

Front-end IP address: 10.0.0.13 for hn1-db
Probe port: 62503
Prepare the infrastructure
Azure Marketplace contains images qualified for SAP HANA with the High Availability
add-on, which you can use to deploy new VMs by using various versions of Red Hat.

Deploy Linux VMs manually via the Azure portal


This document assumes that you've already deployed a resource group, an Azure virtual
network, and a subnet.

Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.

) Important

Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.
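As a sketch, deploying one HANA node into an availability zone with the Azure CLI might look like the following. The image URN and VM size are assumptions; validate them against the SAP HANA certified list before use:

Bash

# Illustrative only: verify the image URN first, for example with
# az vm image list --publisher RedHat --offer RHEL-SAP-HA --all --output table
az vm create \
    --resource-group <resource-group> \
    --name hn1-db-0 \
    --image <RHEL-for-SAP-HA-image-URN> \
    --size Standard_E32ds_v5 \
    --zone 1 \
    --vnet-name <vnet-name> \
    --subnet <subnet-name> \
    --admin-username azureuser \
    --generate-ssh-keys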

Configure Azure load balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to set up a standard load balancer for the
high-availability setup of the HANA database.

Azure portal

Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create a front-end IP. Select the same virtual
network and subnet name as your database virtual machines.
2. Backend Pool: Create a back-end pool and add database VMs.
3. Inbound rules: Create a load-balancing rule. Follow the same steps for both
load-balancing rules.

Frontend IP address: Select a front-end IP.
Backend pool: Select a back-end pool.
High-availability ports: Select this option.
Protocol: Select TCP.
Health Probe: Create a health probe with the following details:
Protocol: Select TCP.
Port: For example, 625<instance-no.>.
Interval: Enter 5.
Probe Threshold: Enter 2.
Idle timeout (minutes): Enter 30.
Enable Floating IP: Select this option.

7 Note

The health probe configuration property numberOfProbes , otherwise known as
Unhealthy threshold in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to 2 .
It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or the PowerShell command.
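A hedged example with the Azure CLI follows; the --probe-threshold parameter requires a recent CLI version, and the resource names are placeholders:

Bash

az network lb probe update \
    --resource-group <resource-group> \
    --lb-name <load-balancer-name> \
    --name <health-probe-name> \
    --protocol tcp \
    --port 62503 \
    --interval 5 \
    --probe-threshold 2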

For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
another IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health

probes and SAP Note 2382421 .
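For example, you could persist the setting with a sysctl drop-in file and apply it immediately. The file name is a convention, not a requirement:

Bash

# Disable TCP timestamps persistently and reload sysctl settings
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-sap-lb.conf
sudo sysctl --system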

Install SAP HANA


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies to node 1 only.
[2]: The step applies to node 2 of the Pacemaker cluster only.

1. [A] Set up the disk layout: Logical Volume Manager (LVM).

We recommend that you use LVM for volumes that store data and log files. The
following example assumes that the VMs have four data disks attached that are
used to create two volumes.

List all the available disks:

Bash

ls /dev/disk/azure/scsi1/lun*

Example output:

Output

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2 /dev/disk/azure/scsi1/lun3

Create physical volumes for all the disks that you want to use:

Bash

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3
Create a volume group for the data files. Use one volume group for the log files
and one for the shared directory of SAP HANA:

Bash

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3

Create the logical volumes. A linear volume is created when you use lvcreate
without the -i switch. We suggest that you create a striped volume for better I/O
performance. Align the stripe sizes to the values documented in SAP HANA VM
storage configurations. The -i argument should be the number of the underlying
physical volumes, and the -I argument is the stripe size.

In this document, two physical volumes are used for the data volume, so the -i
switch argument is set to 2. The stripe size for the data volume is 256 KiB. One
physical volume is used for the log volume, so no -i or -I switches are explicitly
used for the log volume commands.

) Important

Use the -i switch and set it to the number of the underlying physical volume
when you use more than one physical volume for each data, log, or shared
volumes. Use the -I switch to specify the stripe size when you're creating a
striped volume. See SAP HANA VM storage configurations for recommended
storage configurations, including stripe sizes and number of disks. The
following layout examples don't necessarily meet the performance guidelines
for a particular system size. They're for illustration only.

Bash

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared

Don't mount the directories by issuing mount commands. Instead, enter the
configurations into the fstab and issue a final mount -a to validate the syntax.
Start by creating the mount directories for each volume:

Bash

sudo mkdir -p /hana/data


sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared

Next, create fstab entries for the three logical volumes by inserting the following
lines in the /etc/fstab file:

/dev/mapper/vg_hana_data_HN1-hana_data /hana/data xfs defaults,nofail 0 2
/dev/mapper/vg_hana_log_HN1-hana_log /hana/log xfs defaults,nofail 0 2
/dev/mapper/vg_hana_shared_HN1-hana_shared /hana/shared xfs defaults,nofail 0 2

Finally, mount the new volumes all at once:

Bash

sudo mount -a

2. [A] Set up hostname resolution for all hosts.

You can either use a DNS server or modify the /etc/hosts file on all nodes by
creating entries for all nodes like this in /etc/hosts :

10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1

3. [A] Perform RHEL for HANA configuration.

Configure RHEL as described in the following notes:

2447641 - Additional packages required for installing SAP HANA SPS 12 on


RHEL 7.X
2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x

4. [A] Install the SAP HANA.

To install SAP HANA System Replication, see Automating SAP HANA Scale-Up
System Replication using the RHEL HA Add-On .
Run the hdblcm program from the HANA DVD. Enter the following values at the
prompt:
a. Choose installation: Enter 1.
b. Select additional components for installation: Enter 1.
c. Enter Installation Path [/hana/shared]: Select Enter.
d. Enter Local Host Name [..]: Select Enter.
e. Do you want to add additional hosts to the system? (y/n) [n]: Select Enter.
f. Enter SAP HANA System ID: Enter the SID of HANA, for example: HN1.
g. Enter Instance Number [00]: Enter the HANA Instance number. Enter 03 if you
used the Azure template or followed the manual deployment section of this
article.
h. Select Database Mode / Enter Index [1]: Select Enter.
i. Select System Usage / Enter Index [4]: Select the system usage value.
j. Enter Location of Data Volumes [/hana/data]: Select Enter.
k. Enter Location of Log Volumes [/hana/log]: Select Enter.
l. Restrict maximum memory allocation? [n]: Select Enter.
m. Enter Certificate Host Name For Host '...' [...]: Select Enter.
n. Enter SAP Host Agent User (sapadm) Password: Enter the host agent user
password.
o. Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user
password again to confirm.
p. Enter System Administrator (hdbadm) Password: Enter the system
administrator password.
q. Confirm System Administrator (hdbadm) Password: Enter the system
administrator password again to confirm.
r. Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select
Enter.
s. Enter System Administrator Login Shell [/bin/sh]: Select Enter.
t. Enter System Administrator User ID [1001]: Select Enter.
u. Enter ID of User Group (sapsys) [79]: Select Enter.
v. Enter Database User (SYSTEM) Password: Enter the database user password.
w. Confirm Database User (SYSTEM) Password: Enter the database user password
again to confirm.
x. Restart system after machine reboot? [n]: Select Enter.
y. Do you want to continue? (y/n): Validate the summary. Enter y to continue.

5. [A] Upgrade the SAP Host Agent.

Download the latest SAP Host Agent archive from the SAP Software Center and
run the following command to upgrade the agent. Replace the path to the archive
to point to the file that you downloaded:
Bash

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent>

6. [A] Configure the firewall.

Create the firewall rule for the Azure Load Balancer probe port.

Bash

sudo firewall-cmd --zone=public --add-port=62503/tcp


sudo firewall-cmd --zone=public --add-port=62503/tcp --permanent

Configure SAP HANA 2.0 System Replication


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies to node 1 only.
[2]: The step applies to node 2 of the Pacemaker cluster only.

1. [A] Configure the firewall.

Create firewall rules to allow HANA System Replication and client traffic. The
required ports are listed on TCP/IP Ports of All SAP Products . The following
commands are just an example to allow HANA 2.0 System Replication and client
traffic to database SYSTEMDB, HN1, and NW1.

Bash

sudo firewall-cmd --zone=public --add-port={40302,40301,40307,40303,40340,30340,30341,30342}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={40302,40301,40307,40303,40340,30340,30341,30342}/tcp

2. [1] Create the tenant database.

If you're using SAP HANA 2.0 or MDC, create a tenant database for your SAP
NetWeaver system. Replace NW1 with the SID of your SAP system.

Run the following command as <hanasid>adm:

Bash
hdbsql -u SYSTEM -p "<passwd>" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1 SYSTEM USER PASSWORD "<passwd>"'

3. [1] Configure system replication on the first node.

Back up the databases as <hanasid>adm:

Bash

hdbsql -d SYSTEMDB -u SYSTEM -p "<passwd>" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"
hdbsql -d HN1 -u SYSTEM -p "<passwd>" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"
hdbsql -d NW1 -u SYSTEM -p "<passwd>" -i 03 "BACKUP DATA USING FILE ('initialbackupNW1')"

Copy the system PKI files to the secondary site:

Bash

scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hn1-db-1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hn1-db-1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:

Bash

hdbnsutil -sr_enable --name=SITE1
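To confirm that the node is now the system replication primary, you can check the replication state as <hanasid>adm:

Bash

hdbnsutil -sr_state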

4. [2] Configure system replication on the second node.

Register the second node to start the system replication. Run the following
command as <hanasid>adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10


hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

5. [1] Check replication status.


Check the replication status and wait until all databases are in sync. If the status
remains UNKNOWN, check your firewall settings.

Bash

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
# | Database | Host | Port | Service Name | Volume ID | Site ID |
Site Name | Secondary | Secondary | Secondary | Secondary | Secondary
| Replication | Replication | Replication |
# | | | | | | |
| Host | Port | Site ID | Site Name | Active Status | Mode
| Status | Status Details |
# | -------- | -------- | ----- | ------------ | --------- | ------- |
--------- | --------- | --------- | --------- | --------- | -----------
-- | ----------- | ----------- | -------------- |
# | SYSTEMDB | hn1-db-0 | 30301 | nameserver | 1 | 1 |
SITE1 | hn1-db-1 | 30301 | 2 | SITE2 | YES
| SYNC | ACTIVE | |
# | HN1 | hn1-db-0 | 30307 | xsengine | 2 | 1 |
SITE1 | hn1-db-1 | 30307 | 2 | SITE2 | YES
| SYNC | ACTIVE | |
# | NW1 | hn1-db-0 | 30340 | indexserver | 2 | 1 |
SITE1 | hn1-db-1 | 30340 | 2 | SITE2 | YES
| SYNC | ACTIVE | |
# | HN1 | hn1-db-0 | 30303 | indexserver | 3 | 1 |
SITE1 | hn1-db-1 | 30303 | 2 | SITE2 | YES
| SYNC | ACTIVE | |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# mode: PRIMARY
# site id: 1
# site name: SITE1

Configure SAP HANA 1.0 System Replication


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies to node 1 only.
[2]: The step applies to node 2 of the Pacemaker cluster only.

1. [A] Configure the firewall.


Create firewall rules to allow HANA System Replication and client traffic. The
required ports are listed on TCP/IP Ports of All SAP Products . The following
commands are just an example to allow HANA 2.0 System Replication. Adapt it to
your SAP HANA 1.0 installation.

Bash

sudo firewall-cmd --zone=public --add-port=40302/tcp --permanent


sudo firewall-cmd --zone=public --add-port=40302/tcp

2. [1] Create the required users.

Run the following command as root. Make sure to replace the values for HANA
System ID (for example, HN1), instance number (03), and any usernames, with the
values of your SAP HANA installation:

Bash

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'

3. [A] Create the keystore entry.

Run the following command as root to create a new keystore entry:

Bash

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd

4. [1] Back up the database.

Back up the databases as root:

Bash

PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE
('initialbackup')"

If you use a multitenant installation, also back up the tenant database:

Bash
hdbsql -d HN1 -u system -i 03 "BACKUP DATA USING FILE
('initialbackup')"

5. [1] Configure system replication on the first node.

Create the primary site as <hanasid>adm:

Bash

su - hdbadm
hdbnsutil -sr_enable --name=SITE1

6. [2] Configure system replication on the secondary node.

Register the secondary site as <hanasid>adm:

Bash

HDB stop
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
HDB start

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create
a basic Pacemaker cluster for this HANA server.

) Important

With the systemd based SAP Startup Framework, SAP HANA instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As outlined in SAP Note 3189534 , for any new installations
of SAP HANA SPS07 revision 70 or above, or updates of HANA systems to HANA
2.0 SPS07 revision 70 or above, the SAP Startup Framework is automatically
registered with systemd.

When using HA solutions to manage SAP HANA system replication in combination
with systemd-enabled SAP HANA instances (refer to SAP Note 3189534 ),
additional steps are necessary to ensure that the HA cluster can manage the SAP
instance without systemd interference. So, for SAP HANA systems integrated with
systemd, the additional steps outlined in Red Hat KBA 7029705 must be followed on
all cluster nodes.

Implement the Python system replication hook


SAPHanaSR
This important step optimizes the integration with the cluster and improves the
detection when a cluster failover is needed. We highly recommend that you configure
the SAPHanaSR Python hook.

1. [A] Install the SAP HANA resource agents on all nodes. Make sure to enable a
repository that contains the package. You don't need to enable more repositories
if you're using an RHEL 8.x HA-enabled image.

Bash

# Enable repository that contains SAP HANA resource agents
sudo subscription-manager repos --enable="rhel-sap-hana-for-rhel-7-server-rpms"

sudo yum install -y resource-agents-sap-hana

2. [A] Install the HANA system replication hook . The hook needs to be installed on
both HANA DB nodes.

 Tip

The Python hook can only be implemented for HANA 2.0.

a. Prepare the hook as root .

Bash

mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks

b. Stop HANA on both nodes. Run as <sid>adm.

Bash
sapcontrol -nr 03 -function StopSystem

c. Adjust global.ini on each cluster node.

Output

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1

[trace]
ha_dr_saphanasr = info

3. [A] The cluster requires sudoers configuration on each cluster node for <sid>adm.
In this example, that's achieved by creating a new file. Use the visudo command to
edit the 20-saphana drop-in file as root .

Bash

sudo visudo -f /etc/sudoers.d/20-saphana

Insert the following lines and then save:

Output

Cmnd_Alias SITE1_SOK = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE1 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SITE1_SFAIL = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE1 -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias SITE2_SOK = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE2 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SITE2_SFAIL = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE2 -v SFAIL -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SITE1_SOK, SITE1_SFAIL, SITE2_SOK, SITE2_SFAIL
Defaults!SITE1_SOK, SITE1_SFAIL, SITE2_SOK, SITE2_SFAIL !requiretty

4. [A] Start SAP HANA on both nodes. Run as <sid>adm.

Bash

sapcontrol -nr 03 -function StartSystem


5. [1] Verify the hook installation. Run as <sid>adm on the active HANA system
replication site.

Bash

cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*

Output

# 2021-04-12 21:36:16.911343 ha_dr_SAPHanaSR SFAIL


# 2021-04-12 21:36:29.147808 ha_dr_SAPHanaSR SFAIL
# 2021-04-12 21:37:04.898680 ha_dr_SAPHanaSR SOK

For more information on the implementation of the SAP HANA System Replication
hook, see Enable the SAP HA/DR provider hook .

Create SAP HANA cluster resources


Create the HANA topology. Run the following commands on one of the Pacemaker
cluster nodes. Throughout these instructions, be sure to substitute your instance
number, HANA system ID, IP addresses, and system names, where appropriate.

Bash

sudo pcs property set maintenance-mode=true

sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \
op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true

Next, create the HANA resources.

7 Note

This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.

If you're building a cluster on RHEL 7.x, use the following commands:

Bash
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
master notify=true clone-max=2 clone-node-max=1 interleave=true

sudo pcs resource create vip_HN1_03 IPaddr2 ip="10.0.0.13"
sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

sudo pcs constraint order SAPHanaTopology_HN1_03-clone then SAPHana_HN1_03-master symmetrical=false
sudo pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_03-master 4000

sudo pcs resource defaults resource-stickiness=1000
sudo pcs resource defaults migration-threshold=5000

sudo pcs property set maintenance-mode=false

If you're building a cluster on RHEL 8.x/9.x, use the following commands:

Bash

sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
promotable notify=true clone-max=2 clone-node-max=1 interleave=true

sudo pcs resource create vip_HN1_03 IPaddr2 ip="10.0.0.13"
sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

sudo pcs constraint order SAPHanaTopology_HN1_03-clone then SAPHana_HN1_03-clone symmetrical=false
sudo pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_03-clone 4000

sudo pcs resource defaults update resource-stickiness=1000
sudo pcs resource defaults update migration-threshold=5000

sudo pcs property set maintenance-mode=false


To configure priority-fencing-delay for SAP HANA (applicable only as of pacemaker-
2.0.4-6.el8 or higher), run the following commands.

7 Note

If you have a two-node cluster, you can configure the priority-fencing-delay
cluster property. This property introduces a delay in fencing a node that has higher
total resource priority when a split-brain scenario occurs. For more information, see
Can Pacemaker fence the cluster node with the fewest running resources? .

The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8
version or higher. If you're setting up priority-fencing-delay on an existing
cluster, make sure to unset the pcmk_delay_max option in the fencing device.

Bash

sudo pcs property set maintenance-mode=true

sudo pcs resource defaults update priority=1


sudo pcs resource update SAPHana_HN1_03-clone meta priority=10

sudo pcs property set priority-fencing-delay=15s

sudo pcs property set maintenance-mode=false

) Important

It's a good idea to set AUTOMATED_REGISTER to false while you're performing
failover tests, to prevent a failed primary instance from automatically registering as
secondary. After testing, as a best practice, set AUTOMATED_REGISTER to true so that
after takeover, system replication can resume automatically.
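For example, after you complete failover testing, you could update the attribute in place; the resource name is the one created earlier in this article:

Bash

sudo pcs resource update SAPHana_HN1_03 AUTOMATED_REGISTER=true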

Make sure that the cluster status is okay and that all of the resources are started. Which
node the resources are running on isn't important.

7 Note

The timeouts in the preceding configuration are only examples and might need to
be adapted to the specific HANA setup. For instance, you might need to increase
the start timeout, if it takes longer to start the SAP HANA database.
Use the command sudo pcs status to check the state of the cluster resources created:

Output

# Online: [ hn1-db-0 hn1-db-1 ]


#
# Full list of resources:
#
# azure_fence (stonith:fence_azure_arm): Started hn1-db-0
# Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_03
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

Configure HANA active/read-enabled system replication in Pacemaker cluster
Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP
HANA System Replication, where the secondary systems of SAP HANA System
Replication can be used actively for read-intense workloads.

To support such a setup in a cluster, a second virtual IP address is required, which allows
clients to access the secondary read-enabled SAP HANA database. To ensure that the
secondary replication site can still be accessed after a takeover has occurred, the cluster
needs to move the virtual IP address around with the secondary SAPHana resource.

This section describes the other steps that are required to manage HANA active/read-
enabled system replication in a Red Hat HA cluster with a second virtual IP.

Before you proceed further, make sure that you've fully configured the Red Hat HA
cluster managing an SAP HANA database, as described in preceding segments of the
documentation.
Additional setup in Azure Load Balancer for active/read-
enabled setup
To proceed with more steps on provisioning a second virtual IP, make sure that you've
configured Azure Load Balancer as described in the Deploy Linux VMs manually via
Azure portal section.

1. For a standard load balancer, follow these steps on the same load balancer that
you created in an earlier section.

a. Create a second front-end IP pool:

Open the load balancer, select frontend IP pool, and select Add.
Enter the name of the second front-end IP pool (for example, hana-
secondaryIP).
Set Assignment to Static and enter the IP address (for example, 10.0.0.14).
Select OK.
After the new front-end IP pool is created, note the pool IP address.

b. Create a health probe:

Open the load balancer, select health probes, and select Add.
Enter the name of the new health probe (for example, hana-secondaryhp).
Select TCP as the protocol and port 62603. Keep the Interval value set to 5
and the Unhealthy threshold value set to 2.
Select OK.

c. Create the load-balancing rules:

Open the load balancer, select load balancing rules, and select Add.
Enter the name of the new load balancer rule (for example, hana-
secondarylb).
Select the front-end IP address, the back-end pool, and the health probe that
you created earlier (for example, hana-secondaryIP, hana-backend, and
hana-secondaryhp).
Select HA Ports.
Make sure to enable Floating IP.
Select OK.
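
If you prefer to script these steps, the following Azure CLI sketch creates the same three objects. The resource group, load balancer, virtual network, subnet, and back-end pool names are placeholders, not values from this article; adjust them to your environment:

Bash

# Placeholder names: hana-rg, hana-lb, hana-vnet, hana-subnet, hana-backend
az network lb frontend-ip create --resource-group hana-rg --lb-name hana-lb \
  --name hana-secondaryIP --vnet-name hana-vnet --subnet hana-subnet \
  --private-ip-address 10.0.0.14

az network lb probe create --resource-group hana-rg --lb-name hana-lb \
  --name hana-secondaryhp --protocol tcp --port 62603 --interval 5

# Protocol "All" with ports 0 corresponds to the HA Ports option in the portal
az network lb rule create --resource-group hana-rg --lb-name hana-lb \
  --name hana-secondarylb --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-secondaryIP --backend-pool-name hana-backend \
  --probe-name hana-secondaryhp --floating-ip true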

Configure HANA active/read-enabled system replication


The steps to configure HANA System Replication are described in the Configure SAP HANA 2.0 System Replication section. If you're deploying a read-enabled secondary scenario while you're configuring system replication on the second node, run the following command as hn1adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2 --operationMode=logreplay_readaccess

Add a secondary virtual IP address resource for an active/read-enabled setup
The second virtual IP and the appropriate colocation constraint can be configured with
the following commands:

Bash

pcs property set maintenance-mode=true

pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.40.0.16"

pcs resource create secnc_HN1_03 ocf:heartbeat:azure-lb port=62603

pcs resource group add g_secip_HN1_03 secnc_HN1_03 secvip_HN1_03

pcs constraint location g_secip_HN1_03 rule score=INFINITY hana_hn1_sync_state eq SOK and hana_hn1_roles eq 4:S:master1:master:worker:master

pcs constraint location g_secip_HN1_03 rule score=4000 hana_hn1_sync_state eq PRIM and hana_hn1_roles eq 4:P:master1:master:worker:master

pcs property set maintenance-mode=false

Make sure that the cluster status is okay and that all the resources are started. The
second virtual IP runs on the secondary site along with the SAPHana secondary
resource.

Output

sudo pcs status

# Online: [ hn1-db-0 hn1-db-1 ]


#
# Full List of Resources:
# rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hn1-db-0
# Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]:
# Started: [ hn1-db-0 hn1-db-1 ]
# Clone Set: SAPHana_HN1_03-clone [SAPHana_HN1_03] (promotable):
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_03:
# nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
# Resource Group: g_secip_HN1_03:
# secnc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
# secvip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

In the next section, you can find the typical set of failover tests to run.

Be aware of the second virtual IP behavior while you're testing a HANA cluster
configured with read-enabled secondary:

1. When you migrate the SAPHana_HN1_03 cluster resource to the secondary site
hn1-db-1, the second virtual IP continues to run on the same site hn1-db-1. If
you've set AUTOMATED_REGISTER="true" for the resource and HANA system
replication is registered automatically on hn1-db-0, your second virtual IP also
moves to hn1-db-0.

2. When you test a server crash, the second virtual IP resources (secvip_HN1_03) and the Azure Load Balancer port resource (secnc_HN1_03) run on the primary server, alongside the primary virtual IP resources. While the secondary server is down, applications that connect to the read-enabled HANA database are directed to the primary HANA database. This behavior is expected, because it keeps those applications reachable rather than inaccessible while the secondary server is unavailable.
3. During failover and fallback of the second virtual IP address, the existing
connections on applications that use the second virtual IP to connect to the HANA
database might get interrupted.

The setup maximizes the time that the second virtual IP resource is assigned to a node
where a healthy SAP HANA instance is running.

Test the cluster setup


This section describes how you can test your setup. Before you start a test, make sure
that Pacemaker doesn't have any failed action (via pcs status), there are no unexpected
location constraints (for example, leftovers of a migration test), and that HANA is in sync
state, for example, with systemReplicationStatus .

Bash

sudo su - hn1adm -c "python


/usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
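
For the first two checks, the following generic pcs queries can help; they're convenience commands, not steps from this guide:

Bash

# Look for failed resource actions and any leftover location constraints
sudo pcs status --full
sudo pcs constraint location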

Test the migration


Resource state before starting the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

You can migrate the SAP HANA master node by running the following command as
root:

Bash

# On RHEL 7.x
pcs resource move SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource move SAPHana_HN1_03-clone --master

The cluster migrates the SAP HANA master node and the group containing the virtual IP address to hn1-db-1 .

After the migration is done, the sudo pcs status output looks like:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Stopped: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

With AUTOMATED_REGISTER="false" , the cluster doesn't restart the failed HANA database on hn1-db-0 or register it against the new primary. In this case, configure the HANA instance as secondary by running these commands, as hn1adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

The migration creates location constraints that need to be deleted again. Run the
following command as root, or via sudo :

Bash

pcs resource clear SAPHana_HN1_03-master

Monitor the state of the HANA resource by using pcs status . After HANA is started on
hn1-db-0 , the output should look like:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

Block network communication

Resource state before starting the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

Run the following firewall rule to block communication on one of the nodes.

Bash

# Execute the iptables rule on hn1-db-1 (10.0.0.6) to block the incoming and
# outgoing traffic to hn1-db-0 (10.0.0.5)
iptables -A INPUT -s 10.0.0.5 -j DROP; iptables -A OUTPUT -d 10.0.0.5 -j DROP

When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, cluster nodes try to simultaneously fence each other,
resulting in a fence race. To avoid such a situation, we recommend that you set the
priority-fencing-delay property in cluster configuration (applicable only for pacemaker-
2.0.4-6.el8 or higher).

By enabling the priority-fencing-delay property, the cluster introduces a delay in the fencing action specifically on the node hosting the HANA master resource, allowing the node to win the fence race.

Run the following command to delete the firewall rule:

Bash

# If the server was rebooted, the iptables rules are cleared automatically.
# If the rules haven't been reset, remove them by using the following command.
iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP

Test the Azure fencing agent


7 Note

This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.

Resource state before starting the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

You can test the setup of the Azure fencing agent by disabling the network interface on
the node where SAP HANA is running as Master. For a description on how to simulate a
network failure, see Red Hat Knowledge Base article 79523 .

In this example, we use the net_breaker script as root to block all access to the network:

Bash

sh ./net_breaker.sh BreakCommCmd 10.0.0.6

The VM should now restart or stop depending on your cluster configuration. If you set
the stonith-action setting to off , the VM is stopped and the resources are migrated to
the running VM.
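
As a hedged aside, you can inspect or change this behavior with the pcs property commands; stonith-action=off is shown only as an example value, not a recommendation from this guide:

Bash

# Inspect the current fencing action; "off" stops the VM instead of rebooting it
sudo pcs property show stonith-action
sudo pcs property set stonith-action=off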

After you start the VM again, the SAP HANA resource fails to start as secondary if you
set AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary
by running this command as the hn1adm user:

Bash

sapcontrol -nr 03 -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2

Switch back to root and clean up the failed state:


Bash

# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned>

Resource state after the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

Test a manual failover


Resource state before starting the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0

You can test a manual failover by stopping the cluster on the hn1-db-0 node, as root:

Bash

pcs cluster stop

After the failover, you can start the cluster again, as root:

Bash

pcs cluster start

If you set AUTOMATED_REGISTER="false" , the SAP HANA resource on the hn1-db-0 node fails to start as secondary. In this case, configure the HANA instance as secondary by running the following commands as hn1adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1

Then as root:

Bash

# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned>

Resource state after the test:

Output

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_03
nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP HANA VM storage configurations
High availability of SAP HANA scale-up
with Azure NetApp Files on RHEL
Article • 01/17/2024

This article describes how to configure SAP HANA System Replication in a scale-up deployment when the HANA file systems are mounted via NFS by using Azure NetApp Files. The example configurations and installation commands use instance number 03 and HANA System ID HN1. SAP HANA System Replication consists of one primary node and at least one secondary node.

When steps in this document are marked with the following prefixes, the meaning is as
follows:

[A]: The step applies to all nodes


[1]: The step applies to node1 only
[2]: The step applies to node2 only

Prerequisites
Read the following SAP Notes and papers first:

SAP Note 1928533 , which has:


The list of Azure virtual machine (VM) sizes that are supported for the
deployment of SAP software.
Important capacity information for Azure VM sizes.
The supported SAP software and operating system (OS) and database
combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 405827 lists recommended file systems for HANA environments.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 3108302 has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA system replication in Pacemaker cluster
General Red Hat Enterprise Linux (RHEL) documentation:
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster
when the HANA file systems are on NFS shares
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
Configure SAP HANA scale-up system replication in a Pacemaker cluster when
the HANA file systems are on NFS shares
NFS v4.1 volumes on Azure NetApp Files for SAP HANA

Overview
Traditionally in a scale-up environment, all file systems for SAP HANA are mounted from
local storage. Setting up high availability (HA) of SAP HANA System Replication on Red
Hat Enterprise Linux is published in Set up SAP HANA System Replication on RHEL.

To achieve SAP HANA HA of a scale-up system on Azure NetApp Files NFS shares, we
need some more resource configuration in the cluster, in order for HANA resources to
recover, when one node loses access to the NFS shares on Azure NetApp Files. The
cluster manages the NFS mounts, allowing it to monitor the health of the resources. The
dependencies between the file system mounts and the SAP HANA resources are
enforced.

SAP HANA file systems are mounted on NFS shares by using Azure NetApp Files on
each node. File systems /hana/data , /hana/log , and /hana/shared are unique to each
node.

Mounted on node1 (hanadb1):

10.32.2.4:/hanadb1-data-mnt00001 on /hana/data
10.32.2.4:/hanadb1-log-mnt00001 on /hana/log
10.32.2.4:/hanadb1-shared-mnt00001 on /hana/shared

Mounted on node2 (hanadb2):

10.32.2.4:/hanadb2-data-mnt00001 on /hana/data
10.32.2.4:/hanadb2-log-mnt00001 on /hana/log
10.32.2.4:/hanadb2-shared-mnt00001 on /hana/shared

7 Note

File systems /hana/shared , /hana/data , and /hana/log aren't shared between the
two nodes. Each cluster node has its own separate file systems.
The SAP HANA System Replication configuration uses a dedicated virtual hostname and
virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
configuration shown here has a load balancer with:

Front-end IP address: 10.32.0.10 for hn1-db


Probe port: 62503

Set up the Azure NetApp Files infrastructure


Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize
yourself with the Azure NetApp Files documentation.

Azure NetApp Files is available in several Azure regions . To check whether your selected Azure region offers Azure NetApp Files, see Azure NetApp Files availability by Azure region .

Important considerations
As you're creating your Azure NetApp Files volumes for SAP HANA scale-up systems, be
aware of the important considerations documented in NFS v4.1 volumes on Azure
NetApp Files for SAP HANA.

Sizing of HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and
service level, as documented in Service level for Azure NetApp Files.

While you're designing the infrastructure for SAP HANA on Azure with Azure NetApp
Files, be aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for
SAP HANA.

The configuration in this article is presented with simple Azure NetApp Files volumes.

) Important

For production systems, where performance is key, we recommend that you evaluate and consider using Azure NetApp Files application volume group for SAP HANA.
Deploy Azure NetApp Files resources
The following instructions assume that you already deployed your Azure virtual network.
The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will
be mounted, must be deployed in the same Azure virtual network or in peered Azure
virtual networks.

1. Create a NetApp account in your selected Azure region by following the


instructions in Create a NetApp account.

2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.

The HANA architecture shown in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service level.

3. Delegate a subnet to Azure NetApp Files, as described in the instructions in


Delegate a subnet to Azure NetApp Files.

4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.

As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.

Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
hanadb1-data-mnt00001 and hanadb1-log-mnt00001 are the volume names and

nfs://10.32.2.4/hanadb1-data-mnt00001 and nfs://10.32.2.4/hanadb1-log-

mnt00001 are the file paths for the Azure NetApp Files volumes.

On hanadb1:

Volume hanadb1-data-mnt00001 (nfs://10.32.2.4:/hanadb1-data-mnt00001)


Volume hanadb1-log-mnt00001 (nfs://10.32.2.4:/hanadb1-log-mnt00001)
Volume hanadb1-shared-mnt00001 (nfs://10.32.2.4:/hanadb1-shared-
mnt00001)

On hanadb2:

Volume hanadb2-data-mnt00001 (nfs://10.32.2.4:/hanadb2-data-mnt00001)


Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001)
Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-
mnt00001)

7 Note

All commands to mount /hana/shared in this article are presented for NFSv4.1
/hana/shared volumes. If you deployed the /hana/shared volumes as NFSv3

volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.
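
As a hedged illustration, an NFSv3 mount of /hana/shared would use vers=3 instead of nfsvers=4.1; the volume path below mirrors the NFSv4.1 examples in this article and is an assumption:

Bash

# Hypothetical NFSv3 variant for /hana/shared (only if the volume was deployed as NFSv3)
sudo mount -o rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared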

Prepare the infrastructure


Azure Marketplace contains images qualified for SAP HANA with the High Availability
add-on, which you can use to deploy new VMs by using various versions of Red Hat.

Deploy Linux VMs manually via the Azure portal


This document assumes that you've already deployed a resource group, an Azure virtual
network, and a subnet.

Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.

) Important

Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.

Configure Azure load balancer


During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the HANA database.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-availability SAP system by using the Azure portal. During the setup of the load balancer, consider the following points.

1. Frontend IP Configuration: Create a front-end IP. Select the same virtual network and subnet as your DB virtual machines.
2. Backend Pool: Create a back-end pool and add the DB VMs.
3. Inbound rules: Create a load-balancing rule. Follow the same steps for both load-balancing rules.

Frontend IP address: Select the front-end IP
Backend pool: Select the back-end pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the following details:
Protocol: TCP
Port: [for example: 625<instance-no.>]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. To control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It's currently not possible to set this property by using the Azure portal, so use either the Azure CLI or a PowerShell command.
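
For example, a hedged Azure CLI sketch; the resource group and load balancer names are placeholders, and the --probe-threshold parameter requires a recent Azure CLI version:

Bash

# Create the health probe with probeThreshold=2 (names are placeholders)
az network lb probe create --resource-group hana-rg --lb-name hana-lb \
  --name hana-hp --protocol tcp --port 62503 --interval 5 --probe-threshold 2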

For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see Azure Load Balancer limitations. If you need another IP address for the VM, deploy a second NIC.
7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity, unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Standard
Azure Load Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes and SAP Note 2382421 .
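
One hedged way to apply this setting persistently; the drop-in file name is an assumption:

Bash

# Disable TCP timestamps and reload all sysctl configuration
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-tcp-timestamps.conf
sudo sysctl --system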

Mount the Azure NetApp Files volume


1. [A] Create mount points for the HANA database volumes.

Bash

sudo mkdir -p /hana/data


sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared

2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and the
mapping is set to nobody.

Bash

sudo cat /etc/idmapd.conf

Example output:

Output

[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match


the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com. If there's a mismatch between the domain
configuration on the NFS client (that is, the VM) and the NFS server (that is,
the Azure NetApp Files configuration), then the permissions for files on Azure
NetApp Files volumes that are mounted on the VMs display as nobody .

3. [1] Mount the node-specific volumes on node1 (hanadb1).

Bash

sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data

4. [2] Mount the node-specific volumes on node2 (hanadb2).

Bash

sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data

5. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.

Bash

sudo nfsstat -m
Verify that the flag vers is set to 4.1. Example from hanadb1:

Output

/hana/log from 10.32.2.4:/hanadb1-log-mnt00001


Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.32.0.4,local_lock=none,addr=
10.32.2.4
/hana/data from 10.32.2.4:/hanadb1-data-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.32.0.4,local_lock=none,addr=
10.32.2.4
/hana/shared from 10.32.2.4:/hanadb1-shared-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.32.0.4,local_lock=none,addr=
10.32.2.4

6. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory


structure where nfs4_disable_idmapping is located, run the mount command. You
can't manually create the directory under /sys/module because access is reserved
for the kernel and drivers.

Check nfs4_disable_idmapping .

Bash

sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping

If you need to set nfs4_disable_idmapping to Y:

Bash

# sudo doesn't apply to shell redirection, so use tee to write as root
echo "Y" | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping

Make the configuration permanent.

Bash

sudo echo "options nfs nfs4_disable_idmapping=Y" >>


/etc/modprobe.d/nfs.conf

For more information on how to change the nfs4_disable_idmapping parameter, see the Red Hat Knowledge Base .
SAP HANA installation
1. [A] Set up hostname resolution for all hosts.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows you how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines in the /etc/hosts file. Change the IP address and
hostname to match your environment.

Output

10.32.0.4 hanadb1
10.32.0.5 hanadb2

2. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP Note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp
configuration settings.

Bash

sudo vi /etc/sysctl.d/91-NetApp-HANA.conf

Add the following entries in the configuration file.

Output

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
3. [A] Create the configuration file /etc/sysctl.d/ms-az.conf with more optimization
settings.

Bash

sudo vi /etc/sysctl.d/ms-az.conf

Add the following entries in the configuration file.

Output

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

 Tip

Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files, to allow the SAP Host Agent to manage the port ranges. For more information, see SAP Note 2382421 .
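
As a convenience step that isn't part of the original instructions, you can load the new sysctl settings without a reboot:

Bash

# Apply all sysctl configuration files immediately
sudo sysctl --system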

4. [A] Adjust the sunrpc settings, as recommended in SAP Note 3024346 - Linux
Kernel Settings for NetApp NFS .

Bash

sudo vi /etc/modprobe.d/sunrpc.conf

Insert the following line:

Output

options sunrpc tcp_max_slot_table_entries=128

5. [A] Perform RHEL OS configuration for HANA.

Configure the OS as described in the following SAP Notes based on your RHEL
version:

2292690 - SAP HANA DB: Recommended OS settings for RHEL 7


2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x

6. [A] Install the SAP HANA.

Starting with HANA 2.0 SPS 01, MDC is the default option. When you install the
HANA system, SYSTEMDB and a tenant with the same SID are created together. In
some cases, you don't want the default tenant. If you don't want to create an initial
tenant along with the installation, you can follow SAP Note 2629711 .

Run the hdblcm program from the HANA DVD. Enter the following values at the
prompt:
a. Choose installation: Enter 1 (for install).
b. Select more components for installation: Enter 1.
c. Enter Installation Path [/hana/shared]: Select Enter to accept the default.
d. Enter Local Host Name [..]: Select Enter to accept the default. Do you want to
add additional hosts to the system? (y/n) [n]: n.
e. Enter SAP HANA System ID: Enter HN1.
f. Enter Instance Number [00]: Enter 03.
g. Select Database Mode / Enter Index [1]: Select Enter to accept the default.
h. Select System Usage / Enter Index [4]: Enter 4 (for custom).
i. Enter Location of Data Volumes [/hana/data]: Select Enter to accept the default.
j. Enter Location of Log Volumes [/hana/log]: Select Enter to accept the default.
k. Restrict maximum memory allocation? [n]: Select Enter to accept the default.
l. Enter Certificate Host Name For Host '...' [...]: Select Enter to accept the default.
m. Enter SAP Host Agent User (sapadm) Password: Enter the host agent user
password.
n. Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user
password again to confirm.
o. Enter System Administrator (hn1adm) Password: Enter the system administrator
password.
p. Confirm System Administrator (hn1adm) Password: Enter the system
administrator password again to confirm.
q. Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select Enter
to accept the default.
r. Enter System Administrator Login Shell [/bin/sh]: Select Enter to accept the
default.
s. Enter System Administrator User ID [1001]: Select Enter to accept the default.
t. Enter ID of User Group (sapsys) [79]: Select Enter to accept the default.
u. Enter Database User (SYSTEM) Password: Enter the database user password.
v. Confirm Database User (SYSTEM) Password: Enter the database user password
again to confirm.
w. Restart system after machine reboot? [n]: Select Enter to accept the default.
x. Do you want to continue? (y/n): Validate the summary. Enter y to continue.
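
If you prefer an unattended installation, hdblcm also supports a batch mode. The following sketch mirrors the prompt values above; the installation-media path is a placeholder, and passwords still need to be supplied interactively or via a password file:

Bash

# Hypothetical batch-mode installation; adapt the media path and parameters
sudo <installation-media>/DATA_UNITS/HDB_LCM_LINUX_X86_64/hdblcm --batch --action=install \
  --components=server --sid=HN1 --number=03 --system_usage=custom \
  --sapmnt=/hana/shared --datapath=/hana/data --logpath=/hana/log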

7. [A] Upgrade the SAP Host Agent.

Download the latest SAP Host Agent archive from the SAP Software Center and
run the following command to upgrade the agent. Replace the path to the archive
to point to the file that you downloaded:

Bash

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

8. [A] Configure a firewall.

Create the firewall rule for the Azure Load Balancer probe port.

Bash

sudo firewall-cmd --zone=public --add-port=62503/tcp
sudo firewall-cmd --zone=public --add-port=62503/tcp --permanent

Configure SAP HANA System Replication


Follow the steps in Set up SAP HANA System Replication to configure SAP HANA
System Replication.

Cluster configuration
This section describes the steps required for a cluster to operate seamlessly when SAP
HANA is installed on NFS shares by using Azure NetApp Files.

Create a Pacemaker cluster


Follow the steps in Set up Pacemaker on Red Hat Enterprise Linux in Azure to create a
basic Pacemaker cluster for this HANA server.

) Important
With the systemd-based SAP Startup Framework, SAP HANA instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As outlined in SAP Note 3189534 , for any new installation of SAP HANA SPS07 revision 70 or above, or update of a HANA system to HANA 2.0 SPS07 revision 70 or above, the SAP Startup framework is automatically registered with systemd.

When you use HA solutions to manage SAP HANA system replication in combination with systemd-enabled SAP HANA instances (refer to SAP Note 3189534 ), additional steps are necessary to ensure that the HA cluster can manage the SAP instance without systemd interference. So, for an SAP HANA system integrated with systemd, the additional steps outlined in Red Hat KBA 7029705 must be followed on all cluster nodes.

Implement the Python system replication hook SAPHanaSR
This step is an important one to optimize the integration with the cluster and improve
the detection when a cluster failover is needed. We highly recommend that you
configure the SAPHanaSR Python hook. Follow the steps in Implement the Python
system replication hook SAPHanaSR.

Configure file system resources


In this example, each cluster node has its own HANA NFS file systems /hana/shared ,
/hana/data , and /hana/log .

1. [1] Put the cluster in maintenance mode.

Bash

sudo pcs property set maintenance-mode=true

2. [1] Create the file system resources for the hanadb1 mounts.

Bash

sudo pcs resource create hana_data1 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb1-data-mnt00001 directory=/hana/data fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs

sudo pcs resource create hana_log1 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb1-log-mnt00001 directory=/hana/log fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs

sudo pcs resource create hana_shared1 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb1_nfs

3. [2] Create the file system resources for the hanadb2 mounts.

Bash

sudo pcs resource create hana_data2 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb2-data-mnt00001 directory=/hana/data fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs

sudo pcs resource create hana_log2 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb2-log-mnt00001 directory=/hana/log fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs

sudo pcs resource create hana_shared2 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb2-shared-mnt00001 directory=/hana/shared fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 --group hanadb2_nfs

The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation so that each monitor performs a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted despite being inaccessible.

The on-fail=fence attribute is also added to the monitor operation. With this
option, if the monitor operation fails on a node, that node is immediately fenced.
Without this option, the default behavior is to stop all resources that depend on
the failed resource, restart the failed resource, and then start all the resources that
depend on the failed resource.

Not only can this behavior take a long time when an SAPHana resource depends
on the failed resource, but it also can fail altogether. The SAPHana resource can't
stop successfully if the NFS server holding the HANA executables is inaccessible.
The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see NFS in NetApp Best practice . The timeouts in the preceding configuration might need to be adapted to the specific SAP setup.

For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.
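
As a hedged illustration, a mount with nconnect might look like the following; nconnect=4 is an example value, and the option requires a sufficiently recent kernel:

Bash

# Hypothetical mount with nconnect; verify support on your Linux release first
sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys,nconnect=4 10.32.2.4:/hanadb1-data-mnt00001 /hana/data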

4. [1] Configure location constraints.

Configure location constraints to ensure that the resources that manage hanadb1
unique mounts can never run on hanadb2, and vice versa.

Bash

sudo pcs constraint location hanadb1_nfs rule score=-INFINITY resource-discovery=never \#uname eq hanadb2
sudo pcs constraint location hanadb2_nfs rule score=-INFINITY resource-discovery=never \#uname eq hanadb1

The resource-discovery=never option is set because the unique mounts for each
node share the same mount point. For example, hana_data1 uses mount point
/hana/data , and hana_data2 also uses mount point /hana/data . Sharing the same

mount point can cause a false positive for a probe operation, when resource state
is checked at cluster startup, and it can in turn cause unnecessary recovery
behavior. To avoid this scenario, set resource-discovery=never .

5. [1] Configure attribute resources.

Configure attribute resources. These attributes are set to true if all of a node's NFS mounts ( /hana/data , /hana/log , and /hana/shared ) are mounted. Otherwise, they're set to false.

Bash

sudo pcs resource create hana_nfs1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs1_active
sudo pcs resource create hana_nfs2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs2_active

6. [1] Configure location constraints.

Configure location constraints to ensure that hanadb1's attribute resource never runs on hanadb2, and vice versa.

Bash

sudo pcs constraint location hana_nfs1_active avoids hanadb2
sudo pcs constraint location hana_nfs2_active avoids hanadb1

7. [1] Create ordering constraints.

Configure ordering constraints so that a node's attribute resources start only after
all of the node's NFS mounts are mounted.

Bash

sudo pcs constraint order hanadb1_nfs then hana_nfs1_active
sudo pcs constraint order hanadb2_nfs then hana_nfs2_active

 Tip

If your configuration includes file systems outside of group hanadb1_nfs or hanadb2_nfs , include the sequential=false option so that there are no ordering dependencies among the file systems. All file systems must start before hana_nfs1_active , but they don't need to start in any order relative to each other. For more information, see How do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA file systems are on NFS shares . An illustrative sketch follows this tip.
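
As a hedged illustration of that option, an ordering constraint with resource sets might look like the following; extra_fs1 is a hypothetical additional file system resource, and the exact pcs set syntax should be verified on your pcs version:

Bash

# Hypothetical: file systems in the first set start in any order (sequential=false),
# but all of them before hana_nfs1_active
sudo pcs constraint order set hana_data1 hana_log1 hana_shared1 extra_fs1 sequential=false set hana_nfs1_active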

Configure SAP HANA cluster resources


1. Follow the steps in Create SAP HANA cluster resources to create the SAP HANA
resources in the cluster. After SAP HANA resources are created, you need to create
a location rule constraint between SAP HANA resources and file systems (NFS
mounts).

2. [1] Configure constraints between the SAP HANA resources and the NFS mounts.

Location rule constraints are set so that the SAP HANA resources can run on a
node only if all of the node's NFS mounts are mounted.

Bash

sudo pcs constraint location SAPHanaTopology_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true

On RHEL 7.x:

Bash

sudo pcs constraint location SAPHana_HN1_03-master rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true

On RHEL 8.x/9.x:

Bash

sudo pcs constraint location SAPHana_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true

Take the cluster out of maintenance mode.

Bash

sudo pcs property set maintenance-mode=false

Check the status of the cluster and all the resources.

7 Note

This article contains references to a term that Microsoft no longer uses. When
the term is removed from the software, we'll remove it from this article.

Bash

sudo pcs status

Example output:

Output

Online: [ hanadb1 hanadb2 ]

Full list of resources:

rsc_hdb_azr_agt(stonith:fence_azure_arm): Started hanadb1

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem):Started hanadb1
hana_log1 (ocf::heartbeat:Filesystem):Started hanadb1
hana_shared1 (ocf::heartbeat:Filesystem):Started hanadb1
Resource Group: hanadb2_nfs
hana_data2 (ocf::heartbeat:Filesystem):Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem):Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem):Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Started hanadb1


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb1 hanadb2 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb1 ]
Slaves: [ hanadb2 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb1

Configure HANA active/read-enabled system replication in Pacemaker cluster
Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP
HANA System Replication, where the secondary systems of SAP HANA System
Replication can be used actively for read-intense workloads. To support such a setup in a
cluster, a second virtual IP address is required, which allows clients to access the
secondary read-enabled SAP HANA database.

To ensure that the secondary replication site can still be accessed after a takeover has
occurred, the cluster needs to move the virtual IP address around with the secondary of
the SAPHana resource.

The extra configuration, which is required to manage HANA active/read-enabled System Replication in a Red Hat HA cluster with a second virtual IP, is described in Configure HANA Active/Read-Enabled System Replication in Pacemaker cluster.

Before you proceed further, make sure you've fully configured Red Hat High Availability
Cluster managing SAP HANA database as described in the preceding sections of the
documentation.

Test the cluster setup


This section describes how you can test your setup.
1. Before you start a test, make sure that Pacemaker doesn't have any failed action
(via pcs status), there are no unexpected location constraints (for example,
leftovers of a migration test), and that HANA system replication is in sync state, for
example, with systemReplicationStatus :

Bash

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

2. Verify the cluster configuration for a failure scenario when a node loses access to
the NFS share ( /hana/shared ).

The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during failover. File system /hana/shared is mounted over NFS in the presented scenario.

It's difficult to simulate a failure where one of the servers loses access to the NFS
share. As a test, you can remount the file system as read-only. This approach
validates that the cluster can fail over, if access to /hana/shared is lost on the
active node.

Expected result: When you make /hana/shared a read-only file system, the monitor operation of the hana_shared1 resource, which performs read/write tests on the file system ( OCF_CHECK_LEVEL=20 ), fails because it can't write anything to the file system, and it triggers an SAP HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares.

Resource state before starting the test:

Bash

sudo pcs status

Example output:

Output

Full list of resources:


rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb1

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem): Started hanadb1
hana_log1 (ocf::heartbeat:Filesystem): Started hanadb1
hana_shared1 (ocf::heartbeat:Filesystem): Started hanadb1
Resource Group: hanadb2_nfs
hana_data2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem): Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Started hanadb1


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb1 hanadb2 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb1 ]
Slaves: [ hanadb2 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb1
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb1

You can place /hana/shared in read-only mode on the active cluster node by using
this command:

Bash

sudo mount -o ro 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared

hanadb1 will either reboot or power off based on the action set on stonith ( pcs property show stonith-action ). Once the server ( hanadb1 ) is down, the HANA resource moves to hanadb2 . You can check the status of the cluster from hanadb2 .

Bash

sudo pcs status

Example output:

Output

Full list of resources:

rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb2

Resource Group: hanadb1_nfs


hana_data1 (ocf::heartbeat:Filesystem): Stopped
hana_log1 (ocf::heartbeat:Filesystem): Stopped
hana_shared1 (ocf::heartbeat:Filesystem): Stopped

Resource Group: hanadb2_nfs


hana_data2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_log2 (ocf::heartbeat:Filesystem): Started hanadb2
hana_shared2 (ocf::heartbeat:Filesystem): Started hanadb2

hana_nfs1_active (ocf::pacemaker:attribute): Stopped


hana_nfs2_active (ocf::pacemaker:attribute): Started hanadb2

Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]


Started: [ hanadb2 ]
Stopped: [ hanadb1 ]

Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]


Masters: [ hanadb2 ]
Stopped: [ hanadb1 ]

Resource Group: g_ip_HN1_03


nc_HN1_03 (ocf::heartbeat:azure-lb): Started hanadb2
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb2

We recommend that you thoroughly test the SAP HANA cluster configuration by
also performing the tests described in Set up SAP HANA System Replication on
RHEL.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Deploy a SAP HANA scale-out system
with standby node on Azure VMs by
using Azure NetApp Files on Red Hat
Enterprise Linux
Article • 07/11/2023

This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with standby on Azure Red Hat Enterprise Linux virtual machines (VMs), by
using Azure NetApp Files for the shared storage volumes.

In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1. The examples are based on HANA 2.0 SP4 and Red
Hat Enterprise Linux for SAP 7.6.

7 Note

This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we’ll remove them from this article.

Before you begin, refer to the following SAP notes and papers:

Azure NetApp Files documentation


SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 : Lists prerequisites for SAP-supported SAP software
deployments in Azure
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux
SAP Note 3108302 has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x
SAP Note 2178632 : Contains detailed information about all monitoring metrics
reported for SAP in Azure
SAP Note 2191498 : Contains the required SAP Host Agent version for Linux in
Azure
SAP Note 2243692 : Contains information about SAP licensing on Linux in Azure
SAP Note 1999351 : Contains additional troubleshooting information for the
Azure Enhanced Monitoring Extension for SAP
SAP Note 1900823 : Contains information about SAP HANA storage
requirements
SAP Community Wiki : Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
General RHEL documentation
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Enterprise Linux Networking Guide
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure
NFS v4.1 volumes on Azure NetApp Files for SAP HANA

Overview
One method for achieving HANA high availability is by configuring host auto failover. To
configure host auto failover, you add one or more virtual machines to the HANA system
and configure them as standby nodes. When an active node fails, a standby node
automatically takes over. In the presented configuration with Azure virtual machines,
you achieve auto failover by using NFS on Azure NetApp Files.

7 Note

The standby node needs access to all database volumes. The HANA volumes must
be mounted as NFSv4 volumes. The improved file lease-based locking mechanism
in the NFSv4 protocol is used for I/O fencing.

) Important

To build the supported configuration, you must deploy the HANA data and log
volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The
HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three
subnets are represented within one Azure virtual network:

For client communication


For communication with the storage system
For internal HANA inter-node communication

The Azure NetApp volumes are in separate subnet, delegated to Azure NetApp Files.

For this example configuration, the subnets are:

client 10.9.1.0/26

storage 10.9.3.0/26
hana 10.9.2.0/26

anf 10.9.0.0/26 (delegated subnet to Azure NetApp Files)


Set up the Azure NetApp Files infrastructure
Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize
yourself with the Azure NetApp Files documentation.

Azure NetApp Files is available in several Azure regions . To check whether your selected Azure region offers Azure NetApp Files, see Azure NetApp Files Availability by Azure Region .

Important considerations
As you're creating your Azure NetApp Files volumes for the SAP HANA scale-out with standby nodes scenario, be aware of the important considerations documented in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.

Sizing for HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and
service level, as documented in Service level for Azure NetApp Files.

While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.
The configuration in this article is presented with simple Azure NetApp Files volumes.

) Important

For production systems, where performance is key, we recommend that you evaluate and consider using Azure NetApp Files application volume group for SAP HANA.

Deploy Azure NetApp Files resources


The following instructions assume that you've already deployed your Azure virtual
network. The Azure NetApp Files resources and VMs, where the Azure NetApp Files
resources will be mounted, must be deployed in the same Azure virtual network or in
peered Azure virtual networks.

1. Create a NetApp account in your selected Azure region by following the


instructions in Create a NetApp account.
2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.

The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra Service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service Level.

3. Delegate a subnet to Azure NetApp Files, as described in the instructions in


Delegate a subnet to Azure NetApp Files.

4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.

As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.

Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
HN1-data-mnt00001, HN1-log-mnt00001, and so on, are the volume names and
nfs://10.9.0.4/HN1-data-mnt00001, nfs://10.9.0.4/HN1-log-mnt00001, and so on,
are the file paths for the Azure NetApp Files volumes.

volume HN1-data-mnt00001 (nfs://10.9.0.4/HN1-data-mnt00001)
volume HN1-data-mnt00002 (nfs://10.9.0.4/HN1-data-mnt00002)
volume HN1-log-mnt00001 (nfs://10.9.0.4/HN1-log-mnt00001)
volume HN1-log-mnt00002 (nfs://10.9.0.4/HN1-log-mnt00002)
volume HN1-shared (nfs://10.9.0.4/HN1-shared)

In this example, we used a separate Azure NetApp Files volume for each HANA
data and log volume. For a more cost-optimized configuration on smaller or non-
productive systems, it's possible to place all data mounts on a single volume and
all log mounts on a different single volume.

Deploy Linux virtual machines via the Azure portal
First you need to create the Azure NetApp Files volumes. Then do the following steps:

1. Create the Azure virtual network subnets in your Azure virtual network.

2. Deploy the VMs.

3. Create the additional network interfaces, and attach the network interfaces to the
corresponding VMs.

Each virtual machine has three network interfaces, which correspond to the three
Azure virtual network subnets ( client , storage , and hana ).

For more information, see Create a Linux virtual machine in Azure with multiple
network interface cards.

) Important

For SAP HANA workloads, low latency is critical. To achieve low latency, work with
your Microsoft representative to ensure that the virtual machines and the Azure
NetApp Files volumes are deployed in close proximity. When you're onboarding a
new SAP HANA system that uses SAP HANA on Azure NetApp Files, submit the
necessary information.

The next instructions assume that you've already created the resource group, the Azure
virtual network, and the three Azure virtual network subnets: client , storage and hana .
When you deploy the VMs, select the client subnet, so that the client network interface
is the primary interface on the VMs. You will also need to configure an explicit route to
the Azure NetApp Files delegated subnet via the storage subnet gateway.

) Important

Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM
types you're using. For a list of SAP HANA certified VM types and OS releases for
those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS
releases for that type.

1. Create an availability set for SAP HANA. Make sure to set the max update domain.

2. Create three virtual machines (hanadb1, hanadb2, hanadb3) by doing the
following steps:

a. Use a Red Hat Enterprise Linux image in the Azure gallery that's supported for
SAP HANA. We used a RHEL-SAP-HA 7.6 image in this example.

b. Select the availability set that you created earlier for SAP HANA.

c. Select the client Azure virtual network subnet. Select Accelerated networking.

When you deploy the virtual machines, the network interface names are automatically
generated. In these instructions, for simplicity, we'll refer to the automatically
generated network interfaces, which are attached to the client Azure virtual
network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.

3. Create three network interfaces, one for each virtual machine, for the storage
virtual network subnet (in this example, hanadb1-storage, hanadb2-storage, and
hanadb3-storage).

4. Create three network interfaces, one for each virtual machine, for the hana virtual
network subnet (in this example, hanadb1-hana, hanadb2-hana, and hanadb3-
hana).

5. Attach the newly created virtual network interfaces to the corresponding virtual
machines by doing the following steps:

a. Go to the virtual machine in the Azure portal .

b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select the virtual machine.

c. In the Overview pane, select Stop to deallocate the virtual machine.

d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces for
the storage and hana subnets.

e. Select Save.

f. Repeat steps b through e for the remaining virtual machines (in our example,
hanadb2 and hanadb3).

g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.

6. Enable accelerated networking for the additional network interfaces for the
storage and hana subnets by doing the following steps:

a. Open Azure Cloud Shell in the Azure portal .

b. Execute the following commands to enable accelerated networking for the
additional network interfaces, which are attached to the storage and hana
subnets.
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true
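If you prefer not to repeat the command for each interface, a minimal loop sketch like
the following achieves the same result. The resource group name MyResourceGroup is a
placeholder for your own resource group:

# Sketch: loop over the additional NICs (MyResourceGroup is a placeholder)
for nic in hanadb1-storage hanadb2-storage hanadb3-storage hanadb1-hana hanadb2-hana hanadb3-hana; do
  az network nic update --resource-group MyResourceGroup --name $nic --accelerated-networking true
done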

7. Start the virtual machines by doing the following steps:

a. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select it.

b. In the Overview pane, select Start.

Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following:

[A]: Applicable to all nodes
[1]: Applicable only to node 1
[2]: Applicable only to node 2
[3]: Applicable only to node 3

Configure and prepare your OS by doing the following steps:


1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries were added to /etc/hosts for this example.

# Storage
10.9.3.4 hanadb1-storage
10.9.3.5 hanadb2-storage
10.9.3.6 hanadb3-storage
# Client
10.9.1.5 hanadb1
10.9.1.6 hanadb2
10.9.1.7 hanadb3
# Hana
10.9.2.4 hanadb1-hana
10.9.2.5 hanadb2-hana
10.9.2.6 hanadb3-hana

2. [A] Add a network route, so that the communication to Azure NetApp Files
goes via the storage network interface.

In this example, we'll use NetworkManager to configure the additional network route.
The following instructions assume that the storage network interface is eth1 .
First, determine the connection name for device eth1 . In this example, the
connection name for device eth1 is Wired connection 1 .

# Execute as root
nmcli connection
# Result
#NAME UUID TYPE
DEVICE
#System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet
eth0
#Wired connection 1 4b0789d1-6146-32eb-83a1-94d61f8d60a7 ethernet
eth1

Then configure an additional route to the Azure NetApp Files delegated network via
eth1 .
# Add the following route
# ANFDelegatedSubnet/cidr via StorageSubnetGW dev
StorageNetworkInterfaceDevice
nmcli connection modify "Wired connection 1" +ipv4.routes "10.9.0.0/26
10.9.3.1"

Reboot the VM to activate the changes.
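To confirm that the route is in place after the reboot, you can inspect the routing
table. The following is a sketch; the exact output will vary with your environment:

# Verify that the route to the Azure NetApp Files delegated subnet is active
ip route show
# Expected to include an entry similar to:
# 10.9.0.0/26 via 10.9.3.1 dev eth1 proto static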

3. [A] Install the NFS client package.

yum install nfs-utils

4. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.

vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

5. [A] Create configuration file /etc/sysctl.d/ms-az.conf with additional optimization


settings.

vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
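To apply the settings from both configuration files immediately, without waiting for a
reboot, you can reload all sysctl configuration files. This is a minimal sketch,
assuming a systemd-based RHEL release:

# Reload all sysctl settings from /etc/sysctl.d and related directories
sudo sysctl --system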

 Tip

Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports
explicitly in the sysctl configuration files, to allow the SAP Host Agent to manage the
port ranges. For more details, see SAP note 2382421 .

6. [A] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux
Kernel Settings for NetApp NFS .

vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128

7. [A] Red Hat for HANA configuration.

Configure RHEL as described in SAP notes 2292690 , 2455582 , 2593824 , and
Red Hat note 2447641 .

7 Note

If you install HANA 2.0 SP04, you must install the package compat-sap-c++-7, as
described in SAP note 2593824 , before you can install SAP HANA.

Mount the Azure NetApp Files volumes


1. [A] Create mount points for the HANA database volumes.

mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1

2. [1] Create node-specific directories for /usr/sap on HN1-shared.

# Create a temporary directory to mount HN1-shared
mkdir /mnt/tmp
# if using NFSv3 for this volume, mount with the following command
mount 10.9.0.4:/HN1-shared /mnt/tmp
# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,nfsvers=4.1 10.9.0.4:/HN1-shared /mnt/tmp
cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3
# unmount the temporary mount /mnt/tmp
cd
umount /mnt/tmp

3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com , and that the
mapping is set to nobody.

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match
the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com . If there's a mismatch between the domain
configuration on the NFS client (that is, the VM) and the NFS server (that is,
the Azure NetApp Files configuration), then the permissions for files on Azure
NetApp Files volumes that are mounted on the VMs will be displayed as nobody .

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

4. [A] Verify nfs4_disable_idmapping . It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, execute the mount command.
You won't be able to manually create the directory under /sys/module, because
access is reserved for the kernel or drivers.

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >>
/etc/modprobe.d/nfs.conf

For more details on how to change the nfs4_disable_idmapping parameter, see
https://access.redhat.com/solutions/1749883 .

5. [A] Mount the shared Azure NetApp Files volumes.

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount all volumes
sudo mount -a

For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check whether nconnect is supported by Azure NetApp Files on your Linux release.
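As an illustration only, an fstab entry that uses nconnect might look like the
following sketch, assuming nconnect=4 and a Linux release where the option is
supported for Azure NetApp Files:

# Hypothetical example: data volume mounted with nconnect=4
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys,nconnect=4 0 0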

6. [1] Mount the node-specific volumes on hanadb1.

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a

7. [2] Mount the node-specific volumes on hanadb2.

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a

8. [3] Mount the node-specific volumes on hanadb3.

sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
# Mount the volume
sudo mount -a
9. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.

sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.9.0.4:/HN1-data-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00002 from 10.9.0.4:/HN1-log-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/data/HN1/mnt00002 from 10.9.0.4:/HN1-data-mnt00002
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/log/HN1/mnt00001 from 10.9.0.4:/HN1-log-mnt00001
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/usr/sap/HN1 from 10.9.0.4:/HN1-shared/usr-sap-hanadb1
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4
/hana/shared from 10.9.0.4:/HN1-shared/shared
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=10.9.0.4

Installation
In this example for deploying SAP HANA in a scale-out configuration with standby node
on Azure, we've used HANA 2.0 SP4.

Prepare for HANA installation


1. [A] Before the HANA installation, set the root password. You can disable the root
password after the installation has been completed. To set the password, execute the
passwd command as root.

2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3, without being
prompted for a password.

ssh root@hanadb2
ssh root@hanadb3
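If passwordless root SSH isn't set up yet, a minimal sketch for distributing keys
follows. It assumes that root password authentication is temporarily allowed on
hanadb2 and hanadb3:

# Execute as root on hanadb1
ssh-keygen -t rsa -b 4096
ssh-copy-id root@hanadb2
ssh-copy-id root@hanadb3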

3. [A] Install the additional packages that are required for HANA 2.0 SP4. For more
information, see SAP Note 2593824 .

yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1

4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.

# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1

5. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA
installation. You can re-enable it after the HANA installation is done.

# Execute as root
systemctl stop firewalld
systemctl disable firewalld

HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation
and Update guide . In this example, we install SAP HANA scale-out with master,
one worker, and one standby node.

a. Start the hdblcm program from the HANA installation software directory. Use
the internal_network parameter and pass the address space of the subnet that's
used for the internal HANA inter-node communication.

./hdblcm --internal_network=10.9.2.0/26

b. At the prompt, enter the following values:

For Choose an action: enter 1 (for install)
For Additional components for installation: enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name: press Enter to accept the default
Under Do you want to add hosts to the system?: enter y
For comma-separated host names to add: enter hanadb2, hanadb3
For Root User Name [root]: press Enter to accept the default
For roles for host hanadb2: enter 1 (for worker)
For Host Failover Group for host hanadb2 [default]: press Enter to accept the
default
For Storage Partition Number for host hanadb2 [<<assign automatically>>]:
press Enter to accept the default
For Worker Group for host hanadb2 [default]: press Enter to accept the
default
For Select roles for host hanadb3: enter 2 (for standby)
For Host Failover Group for host hanadb3 [default]: press Enter to accept the
default
For Worker Group for host hanadb3 [default]: press Enter to accept the
default
For SAP HANA System ID: enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4]: enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the
default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the
default
For Restrict maximum memory allocation? [n]: enter n
For Certificate Host Name For Host hanadb1 [hanadb1]: press Enter to
accept the default
For Certificate Host Name For Host hanadb2 [hanadb2]: press Enter to
accept the default
For Certificate Host Name For Host hanadb3 [hanadb3]: press Enter to
accept the default
For System Administrator (hn1adm) Password: enter the password
For System Database User (system) Password: enter the system's password
For Confirm System Database User (system) Password: enter system's
password
For Restart system after machine reboot? [n]: enter n
For Do you want to continue (y/n): validate the summary and if everything
looks good, enter y

2. [1] Verify global.ini

Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the hana subnet, and listeninterface should be set to
.internal . Verify the internal_hostname_resolution section. It should have the IP

addresses for the HANA virtual machines that belong to the hana subnet.

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini


# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve
[communication]
internal_network = 10.9.2.0/26
listeninterface = .internal
[internal_hostname_resolution]
10.9.2.4 = hanadb1
10.9.2.5 = hanadb2
10.9.2.6 = hanadb3

3. [1] Add host mapping to ensure that the client IP addresses are used for client
communication. Add the section public_hostname_resolution , and add the corresponding
IP addresses from the client subnet.

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.9.1.5
map_hanadb2 = 10.9.1.6
map_hanadb3 = 10.9.1.7

4. [1] Restart SAP HANA to activate the changes.

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

5. [1] Verify that the client interface is using the IP addresses from the client
subnet for communication.

# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d
SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.9.1.7"
"hanadb2","net_publicname","10.9.1.6"
"hanadb1","net_publicname","10.9.1.5"

For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .

6. [A] Re-enable the firewall.

Stop HANA

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB

Re-enable the firewall


# Execute as root
systemctl start firewalld
systemctl enable firewalld

Open the necessary firewall ports

) Important

Create firewall rules to allow HANA inter-node communication and client
traffic. The required ports are listed on TCP/IP Ports of All SAP
Products . The following commands are just an example. In this
scenario, the system number 03 is used.

# Execute as root
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp

Start HANA

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

7. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the
following SAP HANA parameters:

max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all

For more information, see I/O stack configuration for SAP HANA .

Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini .
For more information, see SAP Note 1999930 .

For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set
during the installation, as described in SAP Note 2267798 .
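As a sketch, the corresponding entries in global.ini would look like the following,
assuming the parameters belong in the fileio section as described in the referenced
SAP notes:

# Example global.ini fragment (verify section and parameter names in the SAP notes)
[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all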

8. The storage that's used by Azure NetApp Files has a file size limitation of 16
terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it
won't automatically create a new data file when the file size limit of 16 TB is
reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will
result in errors and, eventually, in an index server crash.

) Important

To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of
the storage subsystem, set the following parameters in global.ini .

datavolume_striping = true
datavolume_striping_size_gb = 15000

For more information, see SAP Note 2400005 . Be aware of SAP Note 2631285 .
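A sketch of the corresponding global.ini fragment, assuming these parameters belong
in the persistence section as described in SAP Note 2400005:

# Example global.ini fragment (verify against SAP Note 2400005)
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000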

Test SAP HANA failover


1. Simulate a node crash on an SAP HANA worker node. Do the following:

a. Before you simulate the node crash, run the following commands as hn1adm to
capture the status of the environment:

# Check the landscape status


python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN

b. To simulate a node crash, run the following command as root on the worker
node, which is hanadb2 in this case:

echo b > /proc/sysrq-trigger

c. Monitor the system for failover completion. When the failover has been
completed, capture the status, which should look like the following:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | no | info | | | 2 |
0 | default | default | master 2 | slave | worker |
standby | worker | standby | default | - |
| hanadb3 | yes | info | | | 0 |
2 | default | default | master 3 | slave | standby | slave
| standby | worker | default | default |

) Important

When a node experiences kernel panic, avoid delays with SAP HANA failover
by setting kernel.panic to 20 seconds on all HANA virtual machines. The
configuration is done in /etc/sysctl.conf (or in a file under /etc/sysctl.d ).
Reboot the virtual machines to activate the change. If this change isn't
performed, failover can take 10 or more minutes when a node experiences
kernel panic.
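A minimal sketch for making this setting persistent follows; the file name
98-kernel-panic.conf is illustrative, and any file under /etc/sysctl.d works:

# Execute as root on all HANA virtual machines (file name is illustrative)
echo "kernel.panic = 20" > /etc/sysctl.d/98-kernel-panic.conf
# Reboot the VM to activate the change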

2. Kill the name server by doing the following:

a. Prior to the test, check the status of the environment by running the following
commands as hn1adm:

#Landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN

b. Run the following commands as hn1adm on the active master node, which is
hanadb1 in this case:

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill

The standby node hanadb3 will take over as master node. Here is the resource
state after the failover test is completed:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | -------
-- | -------- | -------- | ---------- | ---------- | ----------- | ----
------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 |
0 | default | default | master 1 | slave | worker |
standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | yes | info | | | 0 |
1 | default | default | master 3 | master | standby |
master | standby | worker | default | default |

c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine
where the name server was killed). The hanadb1 node will rejoin the environment
and will keep its standby role.

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb1, expect the following status:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 |
0 | default | default | master 1 | slave | worker |
standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | yes | info | | | 0 |
1 | default | default | master 3 | master | standby |
master | standby | worker | default | default |

d. Again, kill the name server on the currently active master node (that is, on node
hanadb3).

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill

Node hanadb1 will resume the role of master node. After the failover test has been
completed, the status will look like this:

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

e. Start SAP HANA on hanadb3, which will be ready to serve as a standby node.

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb3, the status looks like the following:

# Check the instance status

sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability of SAP HANA scale-out
system on Red Hat Enterprise Linux
Article • 01/17/2024

This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration. Specifically, the configuration uses HANA system replication (HSR) and
Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file
systems in the presented architecture are NFS mounted and are provided by Azure
NetApp Files or NFS share on Azure Files.

In the example configurations and installation commands, the HANA instance is 03 and
the HANA system ID is HN1 .

Prerequisites
Some readers will benefit from consulting a variety of SAP notes and resources before
proceeding further with the topics in this article:

SAP note 1928533 includes:


A list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software, and operating system and database combinations.
The required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP note 2015553 : Lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP note 2002167 : Has recommended operating system settings for RHEL.
SAP note 2009879 : Has SAP HANA guidelines for RHEL.
SAP note 3108302 : Has SAP HANA guidelines for Red Hat Enterprise Linux 9.x.
SAP note 2178632 : Contains detailed information about all monitoring metrics
reported for SAP in Azure.
SAP note 2191498 : Contains the required SAP host agent version for Linux in
Azure.
SAP note 2243692 : Contains information about SAP licensing on Linux in Azure.
SAP note 1999351 : Contains additional troubleshooting information for the
Azure enhanced monitoring extension for SAP.
SAP note 1900823 : Contains information about SAP HANA storage
requirements.
SAP community wiki : Contains all required SAP notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux.
Azure Virtual Machines deployment for SAP on Linux.
Azure Virtual Machines DBMS deployment for SAP on Linux.
SAP HANA network requirements .
General RHEL documentation:
High availability add-on overview .
High availability add-on administration .
High availability add-on reference .
Red Hat Enterprise Linux networking guide .
How do I configure SAP HANA scale-out system replication in a Pacemaker
cluster with HANA file systems on NFS shares .
Active/Active (read-enabled): RHEL HA solution for SAP HANA scale out and
system replication .
Azure-specific RHEL documentation:
Install SAP HANA on Red Hat Enterprise Linux for use in Microsoft Azure .
Red Hat Enterprise Linux Solution for SAP HANA scale-out and system
replication .
Azure NetApp Files documentation.
NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Azure Files documentation

Overview
To achieve HANA high availability for HANA scale-out installations, you can configure
HANA system replication, and protect the solution with a Pacemaker cluster to allow
automatic failover. When an active node fails, the cluster fails over the HANA resources
to the other site.

In the following diagram, there are three HANA nodes on each site, and a majority
maker node to prevent a "split-brain" scenario. The instructions can be adapted to
include more VMs as HANA DB nodes.

The HANA shared file system /hana/shared in the presented architecture can be
provided by Azure NetApp Files or NFS share on Azure Files. The HANA shared file
system is NFS mounted on each HANA node in the same HANA system replication site.
File systems /hana/data and /hana/log are local file systems and aren't shared between
the HANA DB nodes. SAP HANA will be installed in non-shared mode.

For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage
configurations.

) Important
If you deploy all HANA file systems on Azure NetApp Files, then for production
systems, where performance is key, we recommend that you evaluate and consider
using Azure NetApp Files application volume group for SAP HANA.

The preceding diagram shows three subnets represented within one Azure virtual
network, following the SAP HANA network recommendations:

For client communication: client 10.23.0.0/24
For internal HANA internode communication: inter 10.23.1.128/26
For HANA system replication: hsr 10.23.1.192/26

Because /hana/data and /hana/log are deployed on local disks, it isn't necessary to
deploy separate subnet and separate virtual network cards for communication to the
storage.

If you're using Azure NetApp Files, the NFS volumes for /hana/shared are deployed in a
separate subnet, delegated to Azure NetApp Files: anf 10.23.1.0/26.

Set up the infrastructure


In the instructions that follow, we assume that you've already created the resource
group, the Azure virtual network with three Azure network subnets: client , inter and
hsr .

Deploy Linux virtual machines via the Azure portal


1. Deploy the Azure VMs. For this configuration, deploy seven virtual machines:

Three virtual machines to serve as HANA DB nodes for HANA replication site
1: hana-s1-db1, hana-s1-db2 and hana-s1-db3.
Three virtual machines to serve as HANA DB nodes for HANA replication site
2: hana-s2-db1, hana-s2-db2 and hana-s2-db3.
A small virtual machine to serve as majority maker: hana-s-mm.

The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA,
as published in the SAP HANA hardware directory . When you're deploying the
HANA DB nodes, make sure to select accelerated networking.

For the majority maker node, you can deploy a small VM, because this VM doesn't
run any of the SAP HANA resources. The majority maker VM is used in the cluster
configuration to achieve an odd number of cluster nodes and avoid a split-brain scenario.
The majority maker VM only needs one virtual network interface in the client
subnet in this example.

Deploy local managed disks for /hana/data and /hana/log . The minimum
recommended storage configuration for /hana/data and /hana/log is described in
SAP HANA Azure VMs storage configurations.

Deploy the primary network interface for each VM in the client virtual network
subnet. When the VM is deployed via Azure portal, the network interface name is
automatically generated. In this article, we'll refer to the automatically generated,
primary network interfaces as hana-s1-db1-client, hana-s1-db2-client, hana-s1-
db3-client, and so on. These network interfaces are attached to the client Azure
virtual network subnet.

) Important

Make sure that the operating system you select is SAP-certified for SAP HANA
on the specific VM types that you're using. For a list of SAP HANA certified
VM types and operating system releases for those types, see SAP HANA
certified IaaS platforms . Drill into the details of the listed VM type to get
the complete list of SAP HANA-supported operating system releases for that
type.

2. Create six network interfaces, one for each HANA DB virtual machine, in the inter
virtual network subnet (in this example, hana-s1-db1-inter, hana-s1-db2-inter,
hana-s1-db3-inter, hana-s2-db1-inter, hana-s2-db2-inter, and hana-s2-db3-
inter).

3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr
virtual network subnet (in this example, hana-s1-db1-hsr, hana-s1-db2-hsr, hana-
s1-db3-hsr, hana-s2-db1-hsr, hana-s2-db2-hsr, and hana-s2-db3-hsr).
4. Attach the newly created virtual network interfaces to the corresponding virtual
machines:
a. Go to the virtual machine in the Azure portal .
b. On the left pane, select Virtual Machines. Filter on the virtual machine name
(for example, hana-s1-db1), and then select the virtual machine.
c. On the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface dropdown list, select the already created network interfaces
for the inter and hsr subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hana-s1-db2, hana-s1-db3, hana-s2-db1, hana-s2-db2 and hana-s2-db3)
g. Leave the virtual machines in the stopped state for now.

5. Enable accelerated networking for the additional network interfaces for the inter
and hsr subnets by doing the following:

a. Open Azure Cloud Shell in the Azure portal .

b. Run the following commands to enable accelerated networking for the


additional network interfaces, which are attached to the inter and hsr subnets.

Azure CLI

az network nic update --id /subscriptions/your


subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-
inter --accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr
--accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr
--accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr
--accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr
--accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr
--accelerated-networking true
az network nic update --id /subscriptions/your
subscription/resourceGroups/your resource
group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr
--accelerated-networking true

6. Start the HANA DB virtual machines.

Configure Azure load balancer


During VM configuration, you have the option to create or select an existing load
balancer in the networking section. Follow the steps below to set up a standard load
balancer for the high-availability setup of the HANA database.

7 Note

For HANA scale out, select the NIC for the client subnet when adding the
virtual machines in the backend pool.
The full set of commands in Azure CLI and PowerShell adds the VMs with the
primary NIC in the backend pool.

Azure Portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
1. Frontend IP Configuration: Create a frontend IP. Select the same virtual
network and subnet as your DB virtual machines.
2. Backend Pool: Create a backend pool and add the DB VMs.
3. Inbound rules: Create a load balancing rule. Follow the same steps for both
load balancing rules.

Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the following details
Protocol: TCP
Port: [for example: 625<instance-no.>]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or a PowerShell command.
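As an illustration, an Azure CLI sketch for creating such a probe might look like the
following. It assumes a recent CLI version that exposes --probe-threshold;
MyResourceGroup, MyLoadBalancer, and hana-hp are placeholder names:

Azure CLI

# Sketch: create a TCP health probe with probe threshold 2 (names are placeholders)
az network lb probe create --resource-group MyResourceGroup --lb-name MyLoadBalancer --name hana-hp --protocol tcp --port 62503 --interval 5 --probe-threshold 2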

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an
additional IP address for the VM, deploy a second NIC.

7 Note

When you're using the standard load balancer, you should be aware of the
following limitation. When you place VMs without public IP addresses in the back-
end pool of an internal load balancer, there's no outbound internet connectivity. To
allow routing to public end points, you need to perform additional configuration.
For more information, see Public endpoint connectivity for Virtual Machines using
Azure Standard Load Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For details, see Load Balancer health probes and

SAP note 2382421 .

Deploy NFS
There are two options for deploying Azure native NFS for /hana/shared . You can deploy
an NFS volume on Azure NetApp Files or an NFS share on Azure Files. Azure Files
supports the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and
NFSv3.

The next sections describe the steps to deploy NFS - you'll need to select only one of
the options.

 Tip

You chose to deploy /hana/shared on an NFS share on Azure Files or on an NFS
volume on Azure NetApp Files.

Deploy the Azure NetApp Files infrastructure


Deploy the Azure NetApp Files volumes for the /hana/shared file system. You need a
separate /hana/shared volume for each HANA system replication site. For more
information, see Set up the Azure NetApp Files infrastructure.

In this example, you use the following Azure NetApp Files volumes:

volume HN1-shared-s1 (nfs://10.23.1.7/HN1-shared-s1)


volume HN1-shared-s2 (nfs://10.23.1.7/HN1-shared-s2)

Deploy the NFS on Azure Files infrastructure


Deploy Azure Files NFS shares for the /hana/shared file system. You'll need a separate
/hana/shared Azure Files NFS share for each HANA system replication site. For more
information, see How to create an NFS share.

In this example, the following Azure Files NFS shares were used:

share hn1-shared-s1 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1)


share hn1-shared-s2 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2)

Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following
abbreviations:

[A]: Applicable to all nodes
[AH]: Applicable to all HANA DB nodes
[M]: Applicable to the majority maker node
[AH1]: Applicable to all HANA DB nodes on SITE 1
[AH2]: Applicable to all HANA DB nodes on SITE 2
[1]: Applicable only to HANA DB node 1, SITE 1
[2]: Applicable only to HANA DB node 1, SITE 2

Configure and prepare your operating system by doing the following:

1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries are added to /etc/hosts for this example.

Bash

# Client subnet
10.23.0.11 hana-s1-db1
10.23.0.12 hana-s1-db2
10.23.0.13 hana-s1-db3
10.23.0.14 hana-s2-db1
10.23.0.15 hana-s2-db2
10.23.0.16 hana-s2-db3
10.23.0.17 hana-s-mm
# Internode subnet
10.23.1.138 hana-s1-db1-inter
10.23.1.139 hana-s1-db2-inter
10.23.1.140 hana-s1-db3-inter
10.23.1.141 hana-s2-db1-inter
10.23.1.142 hana-s2-db2-inter
10.23.1.143 hana-s2-db3-inter
# HSR subnet
10.23.1.202 hana-s1-db1-hsr
10.23.1.203 hana-s1-db2-hsr
10.23.1.204 hana-s1-db3-hsr
10.23.1.205 hana-s2-db1-hsr
10.23.1.206 hana-s2-db2-hsr
10.23.1.207 hana-s2-db3-hsr

2. [A] Create configuration file /etc/sysctl.d/ms-az.conf with Microsoft for Azure


configuration settings.

Bash

vi /etc/sysctl.d/ms-az.conf

# Add the following entries in the configuration file

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

 Tip

Avoid setting net.ipv4.ip_local_port_range and
net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files,
to allow the SAP host agent to manage the port ranges. For more details, see
SAP note 2382421 .

3. [A] Install the NFS client package.

Bash

yum install nfs-utils

4. [AH] Red Hat for HANA configuration.

Configure RHEL, as described in the Red Hat customer portal and in the
following SAP notes:

2292690 - SAP HANA DB: Recommended OS settings for RHEL 7


2777782 - SAP HANA DB: Recommended OS settings for RHEL 8
2455582 - Linux: Running SAP applications compiled with GCC 6.x
2593824 - Linux: Running SAP applications compiled with GCC 7.x
2886607 - Linux: Running SAP applications compiled with GCC 9.x
Prepare the file systems
The following sections provide steps for the preparation of your file systems. You chose
to deploy /hana/shared on an NFS share on Azure Files or on an NFS volume on Azure
NetApp Files.

Mount the shared file systems (Azure NetApp Files NFS)


In this example, the shared HANA file systems are deployed on Azure NetApp Files and
mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on
Azure NetApp Files.

1. [AH] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.

Bash

vi /etc/sysctl.d/91-NetApp-HANA.conf

# Add the following entries in the configuration file


net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

2. [AH] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux
Kernel Settings for NetApp NFS .

Bash

vi /etc/modprobe.d/sunrpc.conf

# Insert the following line


options sunrpc tcp_max_slot_table_entries=128

3. [AH] Create mount points for the HANA database volumes.

Bash
mkdir -p /hana/shared

4. [AH] Verify the NFS domain setting. Make sure that the domain is configured as
the default Azure NetApp Files domain: defaultv4iddomain.com . Make sure the
mapping is set to nobody .
(This step is only needed if you're using Azure NetApp Files NFS v4.1.)

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match


the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com . If there's a mismatch between the domain

configuration on the NFS client and the NFS server, the permissions for files
on Azure NetApp volumes that are mounted on the VMs will be displayed as
nobody .

Bash

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

5. [AH] Verify nfs4_disable_idmapping . It should be set to Y . To create the directory
structure where nfs4_disable_idmapping is located, run the mount command. You
won't be able to manually create the directory under /sys/module, because access
is reserved for the kernel or drivers.
This step is only needed if you're using Azure NetApp Files NFSv4.1.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.23.1.7:/HN1-shared-s1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more information on how to change the nfs4_disable_idmapping parameter,
see the Red Hat customer portal .

6. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.

Bash

sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared

7. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.

Bash

sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s2 /hana/shared

8. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs, with NFS protocol version NFSv4.1.

Bash

sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7

Mount the shared file systems (Azure Files NFS)

In this example, the shared HANA file systems are deployed on NFS on Azure Files. Follow the steps in this section only if you're using NFS on Azure Files.

1. [AH] Create mount points for the HANA database volumes.


Bash

mkdir -p /hana/shared

2. [AH1] Mount the shared Azure Files volumes on the SITE1 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entry
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
# Mount all volumes
sudo mount -a

3. [AH2] Mount the shared Azure Files volumes on the SITE2 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entry
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
# Mount the volume
sudo mount -a

4. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.

Bash

sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35

Prepare the data and log local file systems


In the presented configuration, you deploy the /hana/data and /hana/log file systems on managed disks, and you attach these file systems locally to each HANA DB VM. Run the following steps to create the local data and log volumes on each HANA DB virtual machine.

Set up the disk layout with Logical Volume Manager (LVM). The following example
assumes that each HANA virtual machine has three data disks attached, and that these
disks are used to create two volumes.

1. [AH] List all of the available disks:

Bash

ls /dev/disk/azure/scsi1/lun*

Example output:

Bash

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2

2. [AH] Create physical volumes for all of the disks that you want to use:

Bash

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2

3. [AH] Create the volume groups: one volume group for the data files, and one for the log files. (In this configuration, /hana/shared is mounted over NFS, so no volume group is needed for it.)

Bash

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2

4. [AH] Create the logical volumes. A linear volume is created when you use lvcreate without the -i switch. We suggest that you create a striped volume for better I/O performance. Align the stripe sizes to the values documented in SAP HANA VM storage configurations. The -i argument should be the number of underlying physical volumes, and the -I argument is the stripe size. In this article, two physical volumes are used for the data volume, so the -i switch argument is set to 2. The stripe size for the data volume is 256 KiB. One physical volume is used for the log volume, so you don't need to use explicit -i or -I switches for the log volume commands.

Important

Use the -i switch, and set it to the number of the underlying physical volumes, when you use more than one physical volume for each data or log volume. Use the -I switch to specify the stripe size when you're creating a striped volume. See SAP HANA VM storage configurations for recommended storage configurations, including stripe sizes and number of disks.

Bash

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log

5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:

Bash

sudo mkdir -p /hana/data/HN1
sudo mkdir -p /hana/log/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data and /dev/vg_hana_log_HN1/hana_log
sudo blkid

6. [AH] Create fstab entries for the logical volumes and mount:

Bash

sudo vi /etc/fstab

Insert the following line in the /etc/fstab file:

Bash

/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_data_HN1-hana_data> /hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_log_HN1-hana_log> /hana/log/HN1 xfs defaults,nofail 0 2

Mount the new volumes:

Bash

sudo mount -a

Installation
In this example for deploying SAP HANA in a scale-out configuration with HSR on Azure
VMs, you're using HANA 2.0 SP4.

Prepare for HANA installation


1. [AH] Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Run the passwd command as root to set the password.

2. [1,2] Change the permissions on /hana/shared .

Bash

chmod 775 /hana/shared

3. [1] Verify that you can sign in to hana-s1-db2 and hana-s1-db3 via secure shell (SSH), without being prompted for a password. If that isn't the case, exchange SSH keys, as documented in Using key-based authentication.

Bash

ssh root@hana-s1-db2
ssh root@hana-s1-db3

4. [2] Verify that you can sign in to hana-s2-db2 and hana-s2-db3 via SSH, without being prompted for a password. If that isn't the case, exchange SSH keys, as documented in Using key-based authentication.

Bash
ssh root@hana-s2-db2
ssh root@hana-s2-db3

5. [AH] Install additional packages that are required for HANA 2.0 SP4. For more information, see SAP Note 2593824 for RHEL 7.

Bash

# If using RHEL 7
yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1
# If using RHEL 8
yum install libatomic libtool-ltdl.x86_64

6. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA
installation. You can re-enable it after the HANA installation is done.

Bash

# Execute as root
systemctl stop firewalld
systemctl disable firewalld

HANA installation on the first node on each site


1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 installation
and update guide . The following instructions show the SAP HANA installation on
the first node on SITE 1.

a. Start the hdblcm program as root from the HANA installation software directory. Use the internal_network parameter and pass the address space of the subnet that's used for the internal HANA internode communication.

Bash

./hdblcm --internal_network=10.23.1.128/26

b. At the prompt, enter the following values:

For Choose an action, enter 1 (for install).


For Additional components for installation, enter 2, 3.
For the installation path, press Enter (defaults to /hana/shared).
For Local Host Name, press Enter to accept the default.
For Do you want to add hosts to the system?, enter n.
For SAP HANA System ID, enter HN1.
For Instance number [00], enter 03.
For Local Host Worker Group [default], press Enter to accept the default.
For Select System Usage / Enter index [4], enter 4 (for custom).
For Location of Data Volumes [/hana/data/HN1], press Enter to accept the
default.
For Location of Log Volumes [/hana/log/HN1], press Enter to accept the
default.
For Restrict maximum memory allocation? [n], enter n.
For Certificate Host Name For Host hana-s1-db1 [hana-s1-db1], press
Enter to accept the default.
For SAP Host Agent User (sapadm) Password, enter the password.
For Confirm SAP Host Agent User (sapadm) Password, enter the
password.
For System Administrator (hn1adm) Password, enter the password.
For System Administrator Home Directory [/usr/sap/HN1/home], press
Enter to accept the default.
For System Administrator Login Shell [/bin/sh], press Enter to accept the
default.
For System Administrator User ID [1001], press Enter to accept the
default.
For Enter ID of User Group (sapsys) [79], press Enter to accept the default.
For System Database User (system) Password, enter the system's
password.
For Confirm System Database User (system) Password, enter system's
password.
For Restart system after machine reboot? [n], enter n.
For Do you want to continue (y/n), validate the summary and if
everything looks good, enter y.

2. [2] Repeat the preceding step to install SAP HANA on the first node on SITE 2.

3. [1,2] Verify global.ini.

Display global.ini, and ensure that the configuration for the internal SAP HANA
internode communication is in place. Verify the communication section. It should
have the address space for the inter subnet, and listeninterface should be set
to .internal . Verify the internal_hostname_resolution section. It should have the
IP addresses for the HANA virtual machines that belong to the inter subnet.

Bash
sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Example from SITE1
[communication]
internal_network = 10.23.1.128/26
listeninterface = .internal
[internal_hostname_resolution]
10.23.1.138 = hana-s1-db1
10.23.1.139 = hana-s1-db2
10.23.1.140 = hana-s1-db3

4. [1,2] Prepare global.ini for installation in a non-shared environment, as described in SAP note 2080991.

Bash

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no

5. [1,2] Restart SAP HANA to activate the changes.

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem

6. [1,2] Verify that the client interface uses the IP addresses from the client subnet
for communication.

Bash

# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.14"

For information about how to verify the configuration, see SAP note 2183363 -
Configuration of SAP HANA internal network .

7. [AH] Change permissions on the data and log directories to avoid a HANA
installation error.

Bash
sudo chmod o+w -R /hana/data /hana/log

8. [1] Install the secondary HANA nodes. The example instructions in this step are for
SITE 1.

a. Start the resident hdblcm program as root .

Bash

cd /hana/shared/HN1/hdblcm
./hdblcm

b. At the prompt, enter the following values:

For Choose an action, enter 2 (for add hosts).


For Enter comma separated host names to add, enter hana-s1-db2, hana-s1-db3.
For Additional components for installation, enter 2, 3.
For Enter Root User Name [root], press Enter to accept the default.
For Select roles for host 'hana-s1-db2' [1], select 1 (for worker).
For Enter Host Failover Group for host 'hana-s1-db2' [default], press
Enter to accept the default.
For Enter Storage Partition Number for host 'hana-s1-db2' [<<assign
automatically>>], press Enter to accept the default.
For Enter Worker Group for host 'hana-s1-db2' [default], press Enter to
accept the default.
For Select roles for host 'hana-s1-db3' [1], select 1 (for worker).
For Enter Host Failover Group for host 'hana-s1-db3' [default], press
Enter to accept the default.
For Enter Storage Partition Number for host 'hana-s1-db3' [<<assign
automatically>>], press Enter to accept the default.
For Enter Worker Group for host 'hana-s1-db3' [default], press Enter to
accept the default.
For System Administrator (hn1adm) Password, enter the password.
For Enter SAP Host Agent User (sapadm) Password, enter the password.
For Confirm SAP Host Agent User (sapadm) Password, enter the
password.
For Certificate Host Name For Host hana-s1-db2 [hana-s1-db2], press
Enter to accept the default.
For Certificate Host Name For Host hana-s1-db3 [hana-s1-db3], press
Enter to accept the default.
For Do you want to continue (y/n), validate the summary and if
everything looks good, enter y.

9. [2] Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.

Configure SAP HANA 2.0 system replication


The following steps get you set up for system replication:

1. [1] Configure system replication on SITE 1:

Back up the databases as hn1adm:

Bash

hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"
hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"

Copy the system PKI files to the secondary site:

Bash

scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:

Bash

hdbnsutil -sr_enable --name=HANA_S1

2. [2] Configure system replication on SITE 2:

Register the second site to start the system replication. Run the following
command as <hanasid>adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2
sapcontrol -nr 03 -function StartSystem

3. [1] Check the replication status and wait until all databases are in sync.

Bash

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

# | Database | Host        | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
# | -------- | ----------- | ----- | ------------ | --------- | ------- | --------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
# | HN1      | hana-s1-db3 | 30303 | indexserver  | 5         | 1       | HANA_S1   | hana-s2-db3    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | SYSTEMDB | hana-s1-db1 | 30301 | nameserver   | 1         | 1       | HANA_S1   | hana-s2-db1    | 30301          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30307 | xsengine     | 2         | 1       | HANA_S1   | hana-s2-db1    | 30307          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30303 | indexserver  | 3         | 1       | HANA_S1   | hana-s2-db1    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db2 | 30303 | indexserver  | 4         | 1       | HANA_S1   | hana-s2-db2    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
#
# mode: PRIMARY
# site id: 1
# site name: HANA_S1

4. [1,2] Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces.

a. Stop HANA on both sites.

Bash
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB

b. Edit global.ini to add the host mapping for HANA system replication. Use the IP
addresses from the hsr subnet.

Bash

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.202 = hana-s1-db1
10.23.1.203 = hana-s1-db2
10.23.1.204 = hana-s1-db3
10.23.1.205 = hana-s2-db1
10.23.1.206 = hana-s2-db2
10.23.1.207 = hana-s2-db3

c. Start HANA on both sites.

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

For more information, see Host name resolution for system replication .

5. [AH] Re-enable the firewall and open the necessary ports.

a. Re-enable the firewall.

Bash

# Execute as root
systemctl start firewalld
systemctl enable firewalld

b. Open the necessary firewall ports. You will need to adjust the ports for your
HANA instance number.

Important

Create firewall rules to allow HANA internode communication and client traffic. The required ports are listed on TCP/IP ports of all SAP products. The following commands are just an example. In this scenario, you use system number 03.

Bash

# Execute as root
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp

Create a Pacemaker cluster


To create a basic Pacemaker cluster, follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure. Include all virtual machines, including the majority maker, in the cluster.

Important

Don't set quorum expected-votes to 2. This isn't a two-node cluster. Make sure that the cluster property concurrent-fencing is enabled, so that node fencing is deserialized.
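For reference, here's a minimal sketch of how you might verify these settings. corosync-quorumtool and the pcs property commands are standard cluster tooling, but verify the exact syntax against the versions installed in your cluster:

Bash

# Show quorum information, including the expected votes
sudo corosync-quorumtool -s
# Check and enable concurrent fencing, so that node fencing is deserialized
sudo pcs property list --all | grep concurrent-fencing
sudo pcs property set concurrent-fencing=true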

Create file system resources


For the next part of this process, you need to create file system resources. Here's how:

1. [1,2] Stop SAP HANA on both replication sites. Run as <sid>adm.

Bash

sapcontrol -nr 03 -function StopSystem

2. [AH] Unmount file system /hana/shared, which was temporarily mounted for the installation, on all HANA DB VMs. Before you can unmount it, you need to stop any processes and sessions that are using the file system.

Bash
umount /hana/shared

3. [1] Create the file system cluster resources for /hana/shared in the disabled state. You use --disabled because you have to define the location constraints before the mounts are enabled. You can deploy /hana/shared on an NFS share on Azure Files or on an NFS volume on Azure NetApp Files.

In this example, the /hana/shared file system is deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section only if you're using NFS on Azure NetApp Files.

Bash

# /hana/shared file system for site 1
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# Clone the /hana/shared file system resources for both site1 and site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true

The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals on Azure NetApp Files. For more information, see NFS in NetApp Best practice.

In this example, the /hana/shared file system is deployed on NFS on Azure Files. Follow the steps in this section only if you're using NFS on Azure Files.

Bash

# /hana/shared file system for site 1
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 directory=/hana/shared \
  fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

# Clone the /hana/shared file system resources for both site1 and site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true

The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted, despite being inaccessible.

The on-fail=fence attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully if the NFS share holding the HANA binaries is inaccessible.

The timeouts in the preceding configurations might need to be adapted to the specific SAP setup.

4. [1] Configure and verify the node attributes. All SAP HANA DB nodes on replication
site 1 are assigned attribute S1 , and all SAP HANA DB nodes on replication site 2
are assigned attribute S2 .

Bash

# HANA replication site 1
pcs node attribute hana-s1-db1 NFS_SID_SITE=S1
pcs node attribute hana-s1-db2 NFS_SID_SITE=S1
pcs node attribute hana-s1-db3 NFS_SID_SITE=S1
# HANA replication site 2
pcs node attribute hana-s2-db1 NFS_SID_SITE=S2
pcs node attribute hana-s2-db2 NFS_SID_SITE=S2
pcs node attribute hana-s2-db3 NFS_SID_SITE=S2
# To verify the attribute assignment to nodes execute
pcs node attribute

5. [1] Configure the constraints that determine where the NFS file systems will be
mounted, and enable the file system resources.

Bash

# Configure the constraints
pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S1
pcs constraint location fs_hana_shared_s2-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S2
# Enable the file system resources
pcs resource enable fs_hana_shared_s1
pcs resource enable fs_hana_shared_s2

When you enable the file system resources, the cluster will mount the
/hana/shared file systems.

6. [AH] Verify that the shared file systems are mounted under /hana/shared, on all HANA DB VMs on both sites.

Example, if using Azure NetApp Files:

Bash

sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7

Example, if using Azure Files NFS:

Bash

sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35

7. [1] Configure and clone the attribute resources, and configure the constraints, as
follows:

Bash

# Configure the attribute resources
pcs resource create hana_nfs_s1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s1_active
pcs resource create hana_nfs_s2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s2_active
# Clone the attribute resources
pcs resource clone hana_nfs_s1_active meta clone-node-max=1 interleave=true
pcs resource clone hana_nfs_s2_active meta clone-node-max=1 interleave=true
# Configure the constraints, which will set the attribute values
pcs constraint order fs_hana_shared_s1-clone then hana_nfs_s1_active-clone
pcs constraint order fs_hana_shared_s2-clone then hana_nfs_s2_active-clone
Tip

If your configuration includes file systems other than /hana/shared, and these file systems are NFS mounted, then include the sequential=false option, as illustrated in the sketch after this tip. This option ensures that there are no ordering dependencies among the file systems. All NFS-mounted file systems must start before the corresponding attribute resource, but they don't need to start in any order relative to each other. For more information, see How do I configure SAP HANA scale-out HSR in a Pacemaker cluster when the HANA file systems are NFS shares.
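A minimal sketch of such an ordering constraint, assuming a hypothetical second NFS file system resource clone named fs_hana_other_s1-clone (not part of this example configuration):

Bash

# Both NFS file systems must start before the attribute resource,
# but sequential=false removes the ordering between the file systems themselves
pcs constraint order set fs_hana_shared_s1-clone fs_hana_other_s1-clone sequential=false set hana_nfs_s1_active-clone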

8. [1] Place Pacemaker in maintenance mode, in preparation for the creation of the
HANA cluster resources.

Bash

pcs property set maintenance-mode=true

Create SAP HANA cluster resources


Now you're ready to create the cluster resources:

1. [A] Install the HANA scale-out resource agent on all cluster nodes, including the
majority maker.

Bash

yum install -y resource-agents-sap-hana-scaleout

Note

For the minimum supported version of package resource-agents-sap-hana-scaleout for your operating system release, see Support policies for RHEL HA clusters - Management of SAP HANA in a cluster.

2. [1,2] Install the HANA system replication hook on one HANA DB node on each
system replication site. SAP HANA should still be down.

a. Prepare the hook as root .

Bash
mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks

b. Adjust global.ini .

Bash

# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1

[trace]
ha_dr_saphanasr = info

3. [AH] The cluster requires sudoers configuration on each cluster node for <sid>adm. In this example, you achieve this by creating a new file. Run the commands as root.

Bash

sudo visudo -f /etc/sudoers.d/20-saphana

# Insert the following lines and then save
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
Defaults!SOK, SFAIL !requiretty

4. [1,2] Start SAP HANA on both replication sites. Run as <sid>adm.

Bash

sapcontrol -nr 03 -function StartSystem

5. [1] Verify the hook installation. Run as <sid>adm on the active HANA system
replication site.

Bash

cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example entries
# 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK

6. [1] Create the HANA cluster resources. Run the following commands as root .

a. Make sure the cluster is already in maintenance mode.

b. Next, create the HANA topology resource.


If you're building a RHEL 7.x cluster, use the following commands:

Bash

pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopologyScaleOut \
  SID=HN1 InstanceNumber=03 \
  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600

pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true

If you're building a RHEL >= 8.x cluster, use the following commands:

Bash

pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopology \
  SID=HN1 InstanceNumber=03 meta clone-node-max=1 interleave=true \
  op methods interval=0s timeout=5 \
  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600

pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true

c. Create the HANA instance resource.

Note

This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If you're building a RHEL 7.x cluster, use the following commands:

Bash

pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700

pcs resource master msl_SAPHana_HN1_HDB03 SAPHana_HN1_HDB03 \
  meta master-max="1" clone-node-max=1 interleave=true

If you're building a RHEL >= 8.x cluster, use the following commands:

Bash

pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
  op demote interval=0s timeout=320 op methods interval=0s timeout=5 \
  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700

pcs resource promotable SAPHana_HN1_HDB03 \
  meta master-max="1" clone-node-max=1 interleave=true

Important

It's a good idea to set AUTOMATED_REGISTER to false while you're performing failover tests, to prevent a failed primary instance from automatically registering as secondary. After testing, as a best practice, set AUTOMATED_REGISTER to true, so that after takeover, system replication can resume automatically.

d. Create the virtual IP and associated resources.

Bash

pcs resource create vip_HN1_03 ocf:heartbeat:IPaddr2 ip=10.23.0.18 op monitor interval="10s" timeout="20s"
sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03

e. Create the cluster constraints.

If you're building a RHEL 7.x cluster, use the following commands:

Bash

# Start HANA topology before the HANA instance
pcs constraint order SAPHanaTopology_HN1_HDB03-clone then msl_SAPHana_HN1_HDB03

pcs constraint colocation add g_ip_HN1_03 with master msl_SAPHana_HN1_HDB03 4000
# HANA resources are only allowed to run on a node if the node's NFS file systems are mounted.
# The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true

If you're building a RHEL >= 8.x cluster, use the following commands:

Bash

# Start HANA topology before the HANA instance
pcs constraint order SAPHanaTopology_HN1_HDB03-clone then SAPHana_HN1_HDB03-clone

pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_HDB03-clone 4000
# HANA resources are only allowed to run on a node if the node's NFS file systems are mounted.
# The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true

7. [1] Take the cluster out of maintenance mode. Make sure that the cluster status is ok, and that all of the resources are started.

Bash

sudo pcs property set maintenance-mode=false

# If there are failed cluster resources, you may need to run the next command
pcs resource cleanup
Note

The timeouts in the preceding configuration are just examples, and might need to be adapted to the specific HANA setup. For instance, you might need to increase the start timeout, if it takes longer to start the SAP HANA database.

Configure HANA active/read-enabled system replication
Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP
HANA system replication. With this capability, you can use the secondary systems of
SAP HANA system replication actively for read-intensive workloads. To support such a
setup in a cluster, you need a second virtual IP address, which allows clients to access
the secondary read-enabled SAP HANA database. To ensure that the secondary
replication site can still be accessed after a takeover has occurred, the cluster needs to
move the virtual IP address around with the secondary of the SAP HANA resource.

This section describes the additional steps you must take to manage this type of system
replication in a Red Hat high availability cluster, with a second virtual IP address.

Before proceeding further, make sure you have fully configured a Red Hat high
availability cluster, managing an SAP HANA database, as described earlier in this article.

Additional setup in Azure Load Balancer for active/read-enabled setup

To proceed with provisioning your second virtual IP, make sure you have configured Azure Load Balancer as described in Configure Azure Load Balancer. For the standard load balancer, follow these additional steps on the same load balancer that you created in the earlier section. An Azure CLI sketch of the same steps follows the list.

1. Create a second front-end IP pool:


a. Open the load balancer, select frontend IP pool, and select Add.
b. Enter the name of the second front-end IP pool (for example, hana-
secondaryIP).
c. Set the Assignment to Static, and enter the IP address (for example, 10.23.0.19).
d. Select OK.
e. After the new front-end IP pool is created, note the pool IP address.

2. Next, create a health probe:


a. Open the load balancer, select health probes, and select Add.
b. Enter the name of the new health probe (for example, hana-secondaryhp).
c. Select TCP as the protocol and port 62603. Keep the Interval value set to 5, and
the Unhealthy threshold value set to 2.
d. Select OK.

3. Next, create the load-balancing rules:


a. Open the load balancer, select load balancing rules, and select Add.
b. Enter the name of the new load balancer rule (for example, hana-secondarylb).
c. Select the front-end IP address, the back-end pool, and the health probe that
you created earlier (for example, hana-secondaryIP, hana-backend, and hana-
secondaryhp).
d. Select HA Ports.
e. Make sure to enable Floating IP.
f. Select OK.
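For reference, here's a hedged Azure CLI sketch of the same steps. The resource group, load balancer, backend pool, and subnet names (MyResourceGroup, hana-lb, hana-backend, <subnet-id>) are assumptions for illustration; substitute the names from your deployment:

Bash

# Second frontend IP (example address 10.23.0.19)
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondaryIP --subnet <subnet-id> --private-ip-address 10.23.0.19
# Health probe on port 62603
az network lb probe create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondaryhp --protocol tcp --port 62603 --interval 5 --threshold 2
# HA-ports load-balancing rule with floating IP enabled
az network lb rule create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondarylb --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-secondaryIP --backend-pool-name hana-backend \
  --probe-name hana-secondaryhp --floating-ip true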

Configure HANA active/read-enabled system replication


The steps to configure HANA system replication are described in the Configure SAP HANA 2.0 system replication section. If you're deploying a read-enabled secondary scenario, while you're configuring system replication on the second node, run the following command as <hanasid>adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2 --operationMode=logreplay_readaccess

Add a secondary virtual IP address resource for an active/read-enabled setup
You can configure the second virtual IP and the additional constraints with the following
commands. If the secondary instance is down, the secondary virtual IP will be switched
to the primary.

Bash

pcs property set maintenance-mode=true

pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.23.0.19"
pcs resource create secnc_HN1_03 ocf:heartbeat:azure-lb port=62603
pcs resource group add g_secip_HN1_03 secnc_HN1_03 secvip_HN1_03

# RHEL 8.x:
pcs constraint location g_ip_HN1_03 rule score=500 role=master hana_hn1_roles eq "master1:master:worker:master" and hana_hn1_clone_state eq PROMOTED
pcs constraint location g_secip_HN1_03 rule score=50 hana_hn1_roles eq 'master1:master:worker:master'
pcs constraint order promote SAPHana_HN1_HDB03-clone then start g_ip_HN1_03
pcs constraint order start g_ip_HN1_03 then start g_secip_HN1_03
pcs constraint colocation add g_secip_HN1_03 with Slave SAPHana_HN1_HDB03-clone 5

# RHEL 7.x:
pcs constraint location g_ip_HN1_03 rule score=500 role=master hana_hn1_roles eq "master1:master:worker:master" and hana_hn1_clone_state eq PROMOTED
pcs constraint location g_secip_HN1_03 rule score=50 hana_hn1_roles eq 'master1:master:worker:master'
pcs constraint order promote msl_SAPHana_HN1_HDB03 then start g_ip_HN1_03
pcs constraint order start g_ip_HN1_03 then start g_secip_HN1_03
pcs constraint colocation add g_secip_HN1_03 with Slave msl_SAPHana_HN1_HDB03 5

pcs property set maintenance-mode=false

Make sure that the cluster status is ok, and that all of the resources are started. The second virtual IP will run on the secondary site, along with the SAP HANA secondary resource.

Bash

# Example output from crm_mon
# Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
# Active resources:
#
# rsc_st_azure    (stonith:fence_azure_arm):     Started hana-s-mm
# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
#     Masters: [ hana-s1-db1 ]
#     Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
#     nc_HN1_03    (ocf::heartbeat:azure-lb):    Started hana-s1-db1
#     vip_HN1_03   (ocf::heartbeat:IPaddr2):     Started hana-s1-db1
# Resource Group: g_secip_HN1_03
#     secnc_HN1_03  (ocf::heartbeat:azure-lb):   Started hana-s2-db1
#     secvip_HN1_03 (ocf::heartbeat:IPaddr2):    Started hana-s2-db1

In the next section, you can find the typical set of failover tests to run.

When you're testing a HANA cluster configured with a read-enabled secondary, be aware of the following behavior of the second virtual IP:

When cluster resource SAPHana_HN1_HDB03 moves to the secondary site (S2), the second virtual IP will move to the other site, hana-s1-db1. If you have configured AUTOMATED_REGISTER="false", and HANA system replication isn't registered automatically, then the second virtual IP will run on hana-s2-db1.

When you're testing server crash, the second virtual IP resources (secvip_HN1_03)
and the Azure Load Balancer port resource (secnc_HN1_03) run on the primary
server, alongside the primary virtual IP resources. While the secondary server is
down, the applications that are connected to the read-enabled HANA database will
connect to the primary HANA database. This behavior is expected. It allows
applications that are connected to the read-enabled HANA database to operate
while a secondary server is unavailable.

During failover and fallback, the existing connections for applications that are
using the second virtual IP to connect to the HANA database might be interrupted.

Test SAP HANA failover


1. Before you start a test, check the cluster and SAP HANA system replication status.

a. Verify that there are no failed cluster actions.

Bash

# Verify that there are no failed cluster actions
pcs status
# Example
# Stack: corosync
# Current DC: hana-s-mm (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
# Last updated: Thu Sep 24 06:00:20 2020
# Last change: Thu Sep 24 05:59:17 2020 by root via crm_attribute on hana-s1-db1
#
# 7 nodes configured
# 45 resources configured
#
# Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
# Active resources:
#
# rsc_st_azure    (stonith:fence_azure_arm):     Started hana-s-mm
# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
#     Masters: [ hana-s1-db1 ]
#     Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
#     nc_HN1_03    (ocf::heartbeat:azure-lb):    Started hana-s1-db1
#     vip_HN1_03   (ocf::heartbeat:IPaddr2):     Started hana-s1-db1

b. Verify that SAP HANA system replication is in sync.

Bash

# Verify HANA HSR is in sync
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

# | Database | Host        | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
# | -------- | ----------- | ----- | ------------ | --------- | ------- | --------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
# | HN1      | hana-s1-db3 | 30303 | indexserver  | 5         | 1       | HANA_S1   | hana-s2-db3    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db2 | 30303 | indexserver  | 4         | 1       | HANA_S1   | hana-s2-db2    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | SYSTEMDB | hana-s1-db1 | 30301 | nameserver   | 1         | 1       | HANA_S1   | hana-s2-db1    | 30301          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30307 | xsengine     | 2         | 1       | HANA_S1   | hana-s2-db1    | 30307          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30303 | indexserver  | 3         | 1       | HANA_S1   | hana-s2-db1    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# mode: PRIMARY
# site id: 1
# site name: HANA_S1

2. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (/hana/shared).

The SAP HANA resource agents depend on binaries stored on /hana/shared to perform operations during failover. File system /hana/shared is mounted over NFS in the presented configuration. A test that you can perform is to create a temporary firewall rule to block access to the /hana/shared NFS-mounted file system on one of the primary site VMs. This approach validates that the cluster will fail over, if access to /hana/shared is lost on the active system replication site.

Expected result: When you block access to the /hana/shared NFS-mounted file system on one of the primary site VMs, the monitoring operation that performs a read/write operation on the file system will fail, because it's not able to access the file system, and will trigger a HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.

You can check the state of the cluster resources by running crm_mon or pcs status .
Resource state before starting the test:

Bash

# Output of crm_mon
# 7 nodes configured
# 45 resources configured
#
# Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
# Active resources:
#
# rsc_st_azure    (stonith:fence_azure_arm):     Started hana-s-mm
# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
#     Masters: [ hana-s1-db1 ]
#     Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
#     nc_HN1_03    (ocf::heartbeat:azure-lb):    Started hana-s1-db1
#     vip_HN1_03   (ocf::heartbeat:IPaddr2):     Started hana-s1-db1

To simulate failure for /hana/shared :

If using NFS on ANF, first confirm the IP address for the /hana/shared ANF
volume on the primary site. You can do that by running df -kh|grep
/hana/shared .

If using NFS on Azure Files, first determine the IP address of the private end
point for your storage account.

Then, set up a temporary firewall rule to block access to the IP address of the /hana/shared NFS file system, by executing the following command on one of the primary HANA system replication site VMs.

In this example, the command was executed on hana-s1-db1 for ANF volume /hana/shared.

Bash

iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP

The HANA VM that lost access to /hana/shared should restart or stop, depending
on the cluster configuration. The cluster resources are migrated to the other HANA
system replication site.

If the cluster hasn't started on the VM that was restarted, start the cluster by
running the following:

Bash

# Start the cluster
pcs cluster start

When the cluster starts, file system /hana/shared is automatically mounted. If you
set AUTOMATED_REGISTER="false" , you will need to configure SAP HANA system
replication on the secondary site. In this case, you can run these commands to
reconfigure SAP HANA as secondary.

Bash

# Execute on the secondary
su - hn1adm
# Make sure HANA is not running on the secondary site. If it is
started, stop HANA
sapcontrol -nr 03 -function StopWait 600 10
# Register the HANA secondary site
hdbnsutil -sr_register --name=HANA_S1 --remoteHost=hana-s2-db1 --
remoteInstance=03 --replicationMode=sync
# Switch back to root and clean up failed resources
pcs resource cleanup SAPHana_HN1_HDB03

The state of the resources, after the test:

Bash

# Output of crm_mon
# 7 nodes configured
# 45 resources configured
#
# Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
# Active resources:
#
# rsc_st_azure    (stonith:fence_azure_arm):     Started hana-s-mm
# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: fs_hana_shared_s2-clone [fs_hana_shared_s2]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: hana_nfs_s1_active-clone [hana_nfs_s1_active]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
# Clone Set: hana_nfs_s2_active-clone [hana_nfs_s2_active]
#     Started: [ hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Clone Set: SAPHanaTopology_HN1_HDB03-clone [SAPHanaTopology_HN1_HDB03]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [SAPHana_HN1_HDB03]
#     Masters: [ hana-s2-db1 ]
#     Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-s2-db3 ]
# Resource Group: g_ip_HN1_03
#     nc_HN1_03    (ocf::heartbeat:azure-lb):    Started hana-s2-db1
#     vip_HN1_03   (ocf::heartbeat:IPaddr2):     Started hana-s2-db1

It's a good idea to test the SAP HANA cluster configuration thoroughly, by also
performing the tests documented in HA for SAP HANA on Azure VMs on RHEL.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure VMs.
Set up Pacemaker on SUSE Linux
Enterprise Server in Azure
Article • 04/08/2024

This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server (SLES) in
Azure.

Overview
In Azure, you have two options for setting up fencing in the Pacemaker cluster for SLES. You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you can use an SBD device.
Use an SBD device


You can configure the SBD device by using either of two options:

SBD with an iSCSI target server:

The SBD device requires at least one additional virtual machine (VM) that acts as
an Internet Small Computer System Interface (iSCSI) target server and provides an
SBD device. These iSCSI target servers can, however, be shared with other
Pacemaker clusters. The advantage of using an SBD device is that if you're already
using SBD devices on-premises, they don't require any changes to how you
operate the Pacemaker cluster.

You can use up to three SBD devices for a Pacemaker cluster to allow an SBD device to become unavailable (for example, during OS patching of the iSCSI target server). If you want to use more than one SBD device per Pacemaker cluster, be sure to deploy multiple iSCSI target servers and connect one SBD device from each iSCSI target server. We recommend using either one SBD device or three. Pacemaker can't automatically fence a cluster node if only two SBD devices are configured and one of them is unavailable. If you want to be able to fence when one iSCSI target server is down, you have to use three SBD devices and, therefore, three iSCSI target servers. That's the most resilient configuration when you're using SBDs.
Important

When you're planning and deploying Linux Pacemaker clustered nodes and SBD devices, do not allow the routing between your virtual machines and the VMs that are hosting the SBD devices to pass through any other devices, such as a network virtual appliance (NVA).

Maintenance events and other issues with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. For more information, see User-defined routing rules.

SBD with an Azure shared disk:

To configure an SBD device, you need to attach at least one Azure shared disk to all virtual machines that are part of the Pacemaker cluster. The advantage of an SBD device that uses an Azure shared disk is that you don't need to deploy additional virtual machines. (A hedged CLI sketch for creating such a disk follows this list.)

Here are some important considerations about SBD devices when you're using an Azure shared disk:

An Azure shared disk with Premium SSD is supported as an SBD device.
SBD devices that use an Azure shared disk are supported on SLES High Availability 15 SP01 and later.
SBD devices that use an Azure premium shared disk are supported on locally redundant storage (LRS) and zone-redundant storage (ZRS).
Depending on the type of your deployment, choose the appropriate redundant storage for an Azure shared disk as your SBD device.
An SBD device using LRS for an Azure premium shared disk (skuName - Premium_LRS) is only supported with deployment in an availability set.
An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones.
ZRS for a managed disk isn't currently available in all regions with availability zones. For more information, review the ZRS "Limitations" section in Redundancy options for managed disks.
The Azure shared disk that you use for SBD devices doesn't need to be large. The maxShares value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on a two-node cluster, such as SAP ASCS/ERS or SAP HANA scale-up.
For HANA scale-out with HANA system replication (HSR) and Pacemaker, you can use an Azure shared disk for SBD devices in clusters with up to four nodes per replication site, because of the current limit of maxShares.
We don't recommend attaching an Azure shared disk SBD device across Pacemaker clusters.
If you use multiple Azure shared disk SBD devices, check the limit for the maximum number of data disks that can be attached to a VM.
For more information about limitations for Azure shared disks, carefully review the "Limitations" section of Azure shared disk documentation.
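As a rough illustration, here's how you might create such a shared disk with the Azure CLI. The resource group and disk names (MyResourceGroup, sbd-disk) are assumptions for illustration; maxShares is set to 2 for a two-node cluster:

Bash

# Create a small premium ZRS shared disk (P1 size) that two cluster nodes can attach
az disk create --resource-group MyResourceGroup --name sbd-disk \
  --size-gb 4 --sku Premium_ZRS --max-shares 2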

Use an Azure fence agent


You can set up fencing by using an Azure fence agent. Azure fence agent requires
managed identities for the cluster VMs or a service principal that manages restarting
failed nodes via Azure APIs. Azure fence agent doesn't require the deployment of
additional virtual machines.
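For illustration, here's a minimal sketch of enabling a system-assigned managed identity on each cluster VM with the Azure CLI. The resource group and VM names (MyResourceGroup, prod-cl1-0, prod-cl1-1) are assumptions; the Azure role assignment that the fence agent additionally requires isn't shown here:

Bash

# Enable a system-assigned managed identity on each cluster VM
az vm identity assign --resource-group MyResourceGroup --name prod-cl1-0
az vm identity assign --resource-group MyResourceGroup --name prod-cl1-1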

SBD with an iSCSI target server


To use an SBD device that uses an iSCSI target server for fencing, follow the instructions
in the next sections.

Set up the iSCSI target server


You first need to create the iSCSI target virtual machines. You can share iSCSI target
servers with multiple Pacemaker clusters.

1. Deploy new SLES 12 SP3 or higher virtual machines and connect to them via SSH.
The machines don't need to be large. Virtual machine sizes Standard_E2s_v3 or
Standard_D2s_v3 are sufficient. Be sure to use Premium storage for the OS disk.

2. On iSCSI target virtual machines, run the following commands:

a. Update SLES.

Bash

sudo zypper update

Note

You might need to reboot the OS after you upgrade or update the OS.

b. Remove packages.
To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following
packages. You can ignore errors about packages that can't be found.

Bash

sudo zypper remove lio-utils python-rtslib python-configshell targetcli

c. Install iSCSI target packages.

Bash

sudo zypper install targetcli-fb dbus-1-python

d. Enable the iSCSI target service.

Bash

sudo systemctl enable targetcli
sudo systemctl start targetcli

Create an iSCSI device on the iSCSI target server


To create the iSCSI disks for the clusters to be used by your SAP systems, run the
following commands on all iSCSI target virtual machines. In the example, SBD devices
for multiple clusters are created. It shows how you would use one iSCSI target server for
multiple clusters. The SBD devices are placed on the OS disk. Make sure that you have
enough space.

nfs: Identifies the NFS cluster.
ascsnw1: Identifies the ASCS cluster of NW1.
dbnw1: Identifies the database cluster of NW1.
nfs-0 and nfs-1: The hostnames of the NFS cluster nodes.
nw1-xscs-0 and nw1-xscs-1: The hostnames of the NW1 ASCS cluster nodes.
nw1-db-0 and nw1-db-1: The hostnames of the database cluster nodes.

In the following instructions, adjust the hostnames of your cluster nodes and the SID of your SAP system.

1. Create the root folder for all SBD devices.

Bash

sudo mkdir /sbd


2. Create the SBD device for the NFS server.

Bash

sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-0.local:nfs-0
sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-1.local:nfs-1

3. Create the SBD device for the ASCS server of SAP System NW1.

Bash

sudo targetcli backstores/fileio create sbdascsnw1 /sbd/sbdascsnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.ascsnw1.local:ascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/luns/ create /backstores/fileio/sbdascsnw1
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0
sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1

4. Create the SBD device for the database cluster of SAP System NW1.

Bash

sudo targetcli backstores/fileio create sbddbnw1 /sbd/sbddbnw1 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.dbnw1.local:dbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/luns/ create /backstores/fileio/sbddbnw1
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-0.local:nw1-db-0
sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-1.local:nw1-db-1

5. Save the targetcli changes.

Bash

sudo targetcli saveconfig


6. Check to ensure that everything was set up correctly.

Bash

sudo targetcli ls

o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 0]
  | o- fileio ............................................. [Storage Objects: 3]
  | | o- sbdascsnw1 ........... [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
  | | | o- alua ............................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
  | | o- sbddbnw1 ............... [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
  | | | o- alua ............................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
  | | o- sbdnfs ................... [/sbd/sbdnfs (50.0MiB) write-thru activated]
  | |   o- alua ............................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 3]
  | o- iqn.2006-04.ascsnw1.local:ascsnw1 ............................. [TPGs: 1]
  | | o- tpg1 ........................................... [no-gen-acls, no-auth]
  | |   o- acls ...................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 .......... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ........................ [lun0 fileio/sbdascsnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 .......... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ........................ [lun0 fileio/sbdascsnw1 (rw)]
  | |   o- luns ...................................................... [LUNs: 1]
  | |   | o- lun0 ......... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
  | |   o- portals ................................................ [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................. [OK]
  | o- iqn.2006-04.dbnw1.local:dbnw1 ................................. [TPGs: 1]
  | | o- tpg1 ........................................... [no-gen-acls, no-auth]
  | |   o- acls ...................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-db-0.local:nw1-db-0 .............. [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 .......................... [lun0 fileio/sbddbnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-db-1.local:nw1-db-1 .............. [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 .......................... [lun0 fileio/sbddbnw1 (rw)]
  | |   o- luns ...................................................... [LUNs: 1]
  | |   | o- lun0 ............. [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
  | |   o- portals ................................................ [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................. [OK]
  | o- iqn.2006-04.nfs.local:nfs ..................................... [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ...................................................... [ACLs: 2]
  |     | o- iqn.2006-04.nfs-0.local:nfs-0 .................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ............................ [lun0 fileio/sbdnfs (rw)]
  |     | o- iqn.2006-04.nfs-1.local:nfs-1 .................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ............................ [lun0 fileio/sbdnfs (rw)]
  |     o- luns ...................................................... [LUNs: 1]
  |     | o- lun0 ................. [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
  |     o- portals ................................................ [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................. [OK]
  o- loopback ..................................................... [Targets: 0]
  o- vhost ........................................................ [Targets: 0]
  o- xen-pvscsi ................................................... [Targets: 0]

Set up the iSCSI target server SBD device


Connect to the iSCSI device that you created in the last step from the cluster. Run the
following commands on the nodes of the new cluster that you want to create.

7 Note
[A]: Applies to all nodes.
[1]: Applies only to node 1.
[2]: Applies only to node 2.

1. [A] Install the iSCSI package.

Bash

sudo zypper install open-iscsi

2. [A] Connect to the iSCSI devices. First, enable the iSCSI and SBD services.

Bash

sudo systemctl enable iscsid
sudo systemctl enable iscsi
sudo systemctl enable sbd

3. [1] Change the initiator name on the first node.

Bash

sudo vi /etc/iscsi/initiatorname.iscsi

4. [1] Change the contents of the file to match the access control lists (ACLs) you used
when you created the iSCSI device on the iSCSI target server (for example, for the
NFS server).

Bash

InitiatorName=iqn.2006-04.nfs-0.local:nfs-0

5. [2] Change the initiator name on the second node.

Bash

sudo vi /etc/iscsi/initiatorname.iscsi

6. [2] Change the contents of the file to match the ACLs you used when you created
the iSCSI device on the iSCSI target server.

Bash
InitiatorName=iqn.2006-04.nfs-1.local:nfs-1

7. [A] Restart the iSCSI service to apply the change.

Bash

sudo systemctl restart iscsid
sudo systemctl restart iscsi

8. [A] Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address
of the iSCSI target server, and 3260 is the default port. iqn.2006-04.nfs.local:nfs is
one of the target names that's listed when you run the first command, iscsiadm -m
discovery .

Bash

sudo iscsiadm -m discovery --type=st --portal=10.0.0.17:3260

sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.17:3260
sudo iscsiadm -m node -p 10.0.0.17:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic

9. [A] If you want to use multiple SBD devices, also connect to the second iSCSI
target server.

Bash

sudo iscsiadm -m discovery --type=st --portal=10.0.0.18:3260

sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.18:3260
sudo iscsiadm -m node -p 10.0.0.18:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic

10. [A] If you want to use multiple SBD devices, also connect to the third iSCSI target
server.

Bash

sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260

sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.19:3260
sudo iscsiadm -m node -p 10.0.0.19:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic

11. [A] Make sure that the iSCSI devices are available and note the device names
(/dev/sdd, /dev/sde, and /dev/sdf in the following example).

Bash

lsscsi

# [2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
# [3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb
# [5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
# [6:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdd
# [7:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sde
# [8:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdf

12. [A] Retrieve the IDs of the iSCSI devices.

Bash

ls -l /dev/disk/by-id/scsi-* | grep sdd

# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd
# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -> ../../sdd
# lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd

ls -l /dev/disk/by-id/scsi-* | grep sde

# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-1LIO-ORG_cl1:3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde
# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -> ../../sde
# lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-SLIO-ORG_cl1_3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde

ls -l /dev/disk/by-id/scsi-* | grep sdf

# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf
# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -> ../../sdf
# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf

The command lists three device IDs for every SBD device. We recommend using
the ID that starts with scsi-3. In the preceding example, the IDs are:

/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03
/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df
/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf

13. [1] Create the SBD device.

a. Use the device ID of the iSCSI devices to create the new SBD devices on the first
cluster node.

Bash

sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -1 60 -4 120 create

b. Also create the second and third SBD devices if you want to use more than one.

Bash

sudo sbd -d /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -1 60 -4 120 create
sudo sbd -d /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -1 60 -4 120 create

14. [A] Adapt the SBD configuration.

a. Open the SBD config file.

Bash

sudo vi /etc/sysconfig/sbd

b. Change the property of the SBD device, enable the Pacemaker integration, and
change the start mode of SBD.

Bash

[...]
SBD_DEVICE="/dev/disk/by-id/scsi-
36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-
360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-
36001405f88f30e7c9684678bc87fe7bf"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]
7 Note

If the SBD_DELAY_START property value is set to "no", change the value to
"yes". You must also check the SBD service file to ensure that the value of
TimeoutStartSec is greater than the value of SBD_DELAY_START. For more
information, see SBD file configuration.
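
To compare the two values and, if necessary, raise TimeoutStartSec, you can use a systemd drop-in file. The following is a minimal sketch; the drop-in file name and the 144-second value are examples only, not fixed requirements:

Bash

# Check the current systemd start timeout of the SBD service
sudo systemctl show sbd --property=TimeoutStartSec

# Check the configured SBD_DELAY_START value
sudo grep SBD_DELAY_START /etc/sysconfig/sbd

# If needed, raise the start timeout with a drop-in file (example value)
sudo mkdir -p /etc/systemd/system/sbd.service.d
echo -e "[Service]\nTimeoutStartSec=144" | sudo tee /etc/systemd/system/sbd.service.d/timeout.conf
sudo systemctl daemon-reload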

15. [A] Create the softdog configuration file.

Bash

echo softdog | sudo tee /etc/modules-load.d/softdog.conf

16. [A] Load the module.

Bash

sudo modprobe -v softdog
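
Optionally, before you continue, you can verify that the SBD devices were initialized as expected. The following sketch reuses the first device ID from the earlier example; sbd dump prints the on-disk header and timeouts, and sbd list shows the node slots:

Bash

# Print the SBD header and the configured watchdog/msgwait timeouts
sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 dump

# List the node slots and any pending messages
sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 list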

SBD with an Azure shared disk


This section applies only if you want to use an SBD device with an Azure shared disk.

Create and attach an Azure shared disk with PowerShell


1. Adjust the values for your resource group, Azure region, virtual machines, logical
unit numbers (LUNs), and so on.

PowerShell

$ResourceGroup = "MyResourceGroup"
$Location = "MyAzureRegion"

2. Define the size of the disk based on the available disk sizes for Premium SSDs.
This example uses the P1 disk size of 4 GiB.

PowerShell

$DiskSizeInGB = 4
$DiskName = "SBD-disk1"

3. With the -MaxSharesCount parameter, define the maximum number of cluster nodes
that can attach the shared disk for the SBD device.

PowerShell

$ShareNodes = 2

4. For an SBD device that uses LRS for an Azure premium shared disk, use the
following storage SkuName:

PowerShell

$SkuName = "Premium_LRS"

5. For an SBD device that uses ZRS for an Azure premium shared disk, use the
following storage SkuName:

PowerShell

$SkuName = "Premium_ZRS"

6. Set up an Azure shared disk.

PowerShell

$diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig

7. Attach the disk to the cluster VMs.

PowerShell

$VM1 = "prod-cl1-0"
$VM2 = "prod-cl1-1"

a. Add the Azure shared disk to cluster node 1.

PowerShell

$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM1
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVM -VM $vm -ResourceGroupName $ResourceGroup -Verbose

b. Add the Azure shared disk to cluster node 2.

PowerShell

$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM2
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVM -VM $vm -ResourceGroupName $ResourceGroup -Verbose

If you want to deploy resources by using the Azure CLI or the Azure portal, you can also
refer to Deploy a ZRS disk.

Set up an Azure shared disk SBD device


1. [A] Enable the SBD services.

Bash

sudo systemctl enable sbd

2. [A] Make sure that the attached disk is available.

Bash

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
├─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 256G 0 disk
├─sdb1 8:17 0 256G 0 part /mnt
sdc 8:32 0 4G 0 disk
sr0 11:0 1 1024M 0 rom

# lsscsi
[1:0:0:0] cd/dvd Msft Virtual CD/ROM 1.0 /dev/sr0
[2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
[3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb
[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc

3. [A] Retrieve the IDs of the attached disks.


Bash

# ls -l /dev/disk/by-id/scsi-* | grep sdc

lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-14d534654202020204208a67da80744439b513b2a9728af19 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -> ../../sdc

The command lists the device IDs for the SBD device. We recommend using the ID that
starts with scsi-3. In the preceding example, the ID is /dev/disk/by-id/scsi-
3600224804208a67da8073b2a9728af19.

4. [1] Create the SBD device.

Use the device ID from step 3 to create the new SBD devices on the first cluster
node.

Bash

sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -1 60 -4 120 create

5. [A] Adapt the SBD configuration.

a. Open the SBD config file.

Bash

sudo vi /etc/sysconfig/sbd

b. Change the property of the SBD device, enable the Pacemaker integration, and
change the start mode of the SBD device.

Bash

[...]
SBD_DEVICE="/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]

7 Note

If the SBD_DELAY_START property value is set to "no", change the value to
"yes". You must also check the SBD service file to ensure that the value of
TimeoutStartSec is greater than the value of SBD_DELAY_START. For more
information, see SBD file configuration.

6. [A] Create the softdog configuration file.

Bash

echo softdog | sudo tee /etc/modules-load.d/softdog.conf

7. [A] Load the module.

Bash

sudo modprobe -v softdog

Use an Azure fence agent


This section applies only if you want to use a fencing device with an Azure fence agent.

Create an Azure fence agent device


This section applies only if you're using a fencing device that's based on an Azure fence
agent. The fencing device uses either a managed identity or a service principal to
authorize against Microsoft Azure.

Managed identity

To create a managed identity (MSI), create a system-assigned managed identity for
each VM in the cluster. If a system-assigned managed identity already exists, it's
used. Don't use user-assigned managed identities with Pacemaker at this time. The
Azure fence agent based on a managed identity is supported for SLES 12 SP5 and
SLES 15 SP1 and later.

[1] Create a custom role for the fence agent


By default, neither the managed identity nor the service principal has permissions to
access your Azure resources. You need to give the managed identity or service
principal permissions to start and stop (deallocate) all virtual machines in the
cluster. If you didn't already create the custom role, you can do so by using
PowerShell or the Azure CLI.

Use the following content for the input file. You need to adapt the content to your
subscriptions. That is, replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and yyyyyyyy-yyyy-
yyyy-yyyy-yyyyyyyyyyyy with your own subscription IDs. If you have only one
subscription, remove the second entry under AssignableScopes.

JSON

{
"Name": "Linux fence agent Role",
"description": "Allows to power-off and start virtual machines",
"assignableScopes": [
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
],
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"notActions": [],
"dataActions": [],
"notDataActions": []
}
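
For example, if you save the preceding JSON to a file, you can create the custom role with the Azure CLI. This is a minimal sketch; the file name fence-agent-role.json is only a placeholder:

Bash

# Create the custom role from the JSON definition (file name is an example)
az role definition create --role-definition @fence-agent-role.json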

[A] Assign the custom role


Use managed identity or service principal.

Managed identity

Assign the custom role "Linux Fence Agent Role" that was created in the last
chapter to each managed identity of the cluster VMs. Each VM system-assigned
managed identity needs the role assigned for every cluster VM's resource. For
detailed steps, see Assign a managed identity access to a resource by using the
Azure portal. Verify each VM's managed identity role assignment contains all cluster
VMs.
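
If you script the assignment instead of using the portal, the Azure CLI can be used as in the following sketch. The resource group and VM names are the example values from this article; repeat the assignment so that each VM's identity has the role on every cluster VM's resource:

Bash

# Example values - adjust to your environment
RESOURCE_GROUP="MyResourceGroup"

# Principal ID of the system-assigned identity of the first cluster VM
PRINCIPAL_ID=$(az vm show -g "$RESOURCE_GROUP" -n prod-cl1-0 --query identity.principalId -o tsv)

# Assign the custom role on the resource of the second cluster VM
VM_ID=$(az vm show -g "$RESOURCE_GROUP" -n prod-cl1-1 --query id -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role "Linux fence agent Role" --scope "$VM_ID"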

) Important

Be aware that assignment and removal of authorization with managed identities
can be delayed until effective.
Install the cluster

7 Note

[A]: Applies to all nodes.


[1]: Applies only to node 1.
[2]: Applies only to node 2.

1. [A] Update SLES.

Bash

sudo zypper update

7 Note

On SLES 15 SP4, check the versions of the crmsh and pacemaker packages, and
make sure that the minimum version requirements are met:

crmsh-4.4.0+20221028.3e41444-150400.3.9.1 or later
pacemaker-2.1.2+20211124.ada5c3b36-150400.4.6.1 or later

2. [A] Install the socat component, which you need for the cluster resources.

Bash

sudo zypper in socat

3. [A] Install the azure-lb component, which you need for the cluster resources.

Bash

sudo zypper in resource-agents

7 Note

Check the version of the resource-agents package, and make sure that the
minimum version requirements are met:
SLES 12 SP4/SP5: The version must be resource-agents-
4.3.018.a7fb5035-3.30.1 or later.
SLES 15/15 SP1: The version must be resource-agents-
4.3.0184.6ee15eb2-4.13.1 or later.

4. [A] Configure the operating system.

a. Pacemaker occasionally creates many processes, which can exhaust the allowed
number. When this happens, a heartbeat between the cluster nodes might fail and
lead to a failover of your resources. We recommend increasing the maximum
number of allowed processes by setting the following parameter:

Bash

# Edit the configuration file


sudo vi /etc/systemd/system.conf

# Change the DefaultTasksMax


#DefaultTasksMax=512
DefaultTasksMax=4096

# Activate this setting


sudo systemctl daemon-reload

# Test to ensure that the change was successful


sudo systemctl --no-pager show | grep DefaultTasksMax

b. Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .

Bash

sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

c. Make sure vm.swappiness is set to 10 to reduce swap usage and favor memory.

Bash

sudo vi /etc/sysctl.conf
# Change/set the following setting
vm.swappiness = 10
5. [A] Check the cloud-netconfig-azure package version.

Check the installed version of the cloud-netconfig-azure package by running
zypper info cloud-netconfig-azure. If the version is earlier than 1.3, we
recommend that you update the cloud-netconfig-azure package to the latest
available version.
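
For example, a quick way to check the version and update the package if necessary:

Bash

# Check the installed version
zypper info cloud-netconfig-azure

# Update the package if the version is earlier than 1.3
sudo zypper update cloud-netconfig-azure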

 Tip

If the version in your environment is 1.3 or later, it's no longer necessary to


suppress the management of network interfaces by the cloud network plug-
in.

Only if the version of cloud-netconfig-azure is lower than 1.3, change the


configuration file for the network interface as shown in the following code to
prevent the cloud network plug-in from removing the virtual IP address
(Pacemaker must control the assignment). For more information, see SUSE KB
7023633 .

Bash

# Edit the configuration file


sudo vi /etc/sysconfig/network/ifcfg-eth0

# Change CLOUD_NETCONFIG_MANAGE
# CLOUD_NETCONFIG_MANAGE="yes"
CLOUD_NETCONFIG_MANAGE="no"

6. [1] Enable SSH access.

Bash

sudo ssh-keygen

# Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter
# Enter passphrase (empty for no passphrase), and then select Enter
# Enter same passphrase again, and then select Enter

# Copy the public key
sudo cat /root/.ssh/id_rsa.pub

7. [2] Enable SSH access.

Bash
sudo ssh-keygen

# Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter
# Enter passphrase (empty for no passphrase), and then select Enter
# Enter same passphrase again, and then select Enter

# Insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys

# Copy the public key
sudo cat /root/.ssh/id_rsa.pub

8. [1] Enable SSH access.

Bash

# Insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys

9. [A] Install the fence-agents package if you're using a fencing device, based on the
Azure fence agent.

Bash

sudo zypper install fence-agents

) Important

The installed version of the fence-agents package must be 4.4.0 or later to


benefit from the faster failover times with the Azure fence agent, when a
cluster node is fenced. If you're running an earlier version, we recommend
that you update the package.

) Important

If you're using managed identity, the installed version of the fence-agents
package must be:

SLES 12 SP5: fence-agents 4.9.0+git.1624456340.8d746be9-3.35.2 or later
SLES 15 SP1 and higher: fence-agents 4.5.2+git.1592573838.1eee0863 or later

Earlier versions won't work correctly with a managed identity configuration.
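
To verify which version is installed, you can query the package, for example:

Bash

zypper info fence-agents | grep -i version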

10. [A] Install the Azure Python SDK and Azure Identity Python module.

Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:

Bash

# You might need to activate the public cloud extension first
SUSEConnect -p sle-module-public-cloud/12/x86_64
sudo zypper install python-azure-mgmt-compute
sudo zypper install python-azure-identity

Install the Azure Python SDK on SLES 15 or later:

Bash

# You might need to activate the public cloud extension first. In this example, the SUSEConnect command is for SLES 15 SP1
SUSEConnect -p sle-module-public-cloud/15.1/x86_64
sudo zypper install python3-azure-mgmt-compute
sudo zypper install python3-azure-identity

) Important

Depending on your version and image type, you might need to activate the
public cloud extension for your OS release before you can install the Azure
Python SDK. You can check the extension by running SUSEConnect --list-extensions .
To achieve the faster failover times with the Azure fence agent:

On SLES 12 SP4 or SLES 12 SP5, install version 4.6.2 or later of the


python-azure-mgmt-compute package.
If your python-azure-mgmt-compute or python3-azure-mgmt-compute
package version is 17.0.0-6.7.1, follow the instructions in SUSE KBA to
update the fence-agents version and install the Azure Identity client
library for Python module if it is missing.

11. [A] Set up the hostname resolution.


You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file.

Replace the IP address and the hostname in the following commands.

) Important

If you're using hostnames in the cluster configuration, it's essential to have a


reliable hostname resolution. The cluster communication will fail if the names
are unavailable, and that can lead to cluster failover delays.

The benefit of using /etc/hosts is that your cluster becomes independent of


the DNS, which could be a single point of failure too.

Bash

sudo vi /etc/hosts

Insert the following lines in the /etc/hosts. Change the IP address and hostname to
match your environment.

text

# IP address of the first cluster node
10.0.0.6 prod-cl1-0
# IP address of the second cluster node
10.0.0.7 prod-cl1-1

12. [1] Install the cluster.

If you're using SBD devices for fencing (for either the iSCSI target server or
Azure shared disk):

Bash

sudo crm cluster init


# ! NTP is not configured to start at system boot.
# Do you want to continue anyway (y/n)? y
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n
# Address for ring0 [10.0.0.6] Select Enter
# Port for ring0 [5405] Select Enter
# SBD is already configured to use /dev/disk/by-id/scsi-
36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-
36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-
36001405f88f30e7c9684678bc87fe7bf - overwrite (y/n)? n
# Do you wish to configure an administration IP (y/n)? n

If you're not using SBD devices for fencing:

Bash

sudo crm cluster init


# ! NTP is not configured to start at system boot.
# Do you want to continue anyway (y/n)? y
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n
# Address for ring0 [10.0.0.6] Select Enter
# Port for ring0 [5405] Select Enter
# Do you wish to use SBD (y/n)? n
# WARNING: Not configuring SBD - STONITH will be disabled.
# Do you wish to configure an administration IP (y/n)? n

13. [2] Add the node to the cluster.

Bash

sudo crm cluster join


# ! NTP is not configured to start at system boot.
# Do you want to continue anyway (y/n)? y
# IP address or hostname of existing node (for example, 192.168.1.1)
[]10.0.0.6
# /root/.ssh/id_rsa already exists - overwrite (y/n)? n

14. [A] Change the hacluster password to the same password.

Bash

sudo passwd hacluster

15. [A] Adjust the corosync settings.

Bash

sudo vi /etc/corosync/corosync.conf

a. Check the following section in the file and adjust, if the values aren't there or are
different. Be sure to change the token to 30000 to allow memory-preserving
maintenance. For more information, see the "Maintenance for virtual machines in
Azure" article for Linux or Windows.

text
[...]
  token: 30000
  token_retransmits_before_loss_const: 10
  join: 60
  consensus: 36000
  max_messages: 20

  interface {
    [...]
  }
  transport: udpu
}
nodelist {
  node {
    ring0_addr: 10.0.0.6
  }
  node {
    ring0_addr: 10.0.0.7
  }
}
logging {
  [...]
}
quorum {
  # Enable and configure quorum subsystem (default: off)
  # See also corosync.conf.5 and votequorum.5
  provider: corosync_votequorum
  expected_votes: 2
  two_node: 1
}

b. Restart the corosync service.

Bash

sudo service corosync restart

Create a fencing device on the Pacemaker cluster

 Tip

To avoid fence races within a two-node Pacemaker cluster, you can configure the
additional "priority-fencing-delay" cluster property. This property introduces an
additional delay in fencing the node that has the higher total resource priority
when a split-brain scenario occurs. For more details, see the SUSE Linux Enterprise
Server high availability extension administration guide .

Instructions for setting the "priority-fencing-delay" cluster property can be
found in the respective SAP ASCS/ERS (applicable only on ENSA2) and SAP HANA
scale-up high-availability documents.

1. [1] If you're using an SBD device (iSCSI target server or Azure shared disk) as a
fencing device, run the following commands. Enable the use of a fencing device,
and set the fence delay.

Bash

sudo crm configure property stonith-timeout=144
sudo crm configure property stonith-enabled=true

# List the resources to find the name of the SBD device
sudo crm resource list
sudo crm resource stop stonith-sbd
sudo crm configure delete stonith-sbd
sudo crm configure primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15" \
op monitor interval="600" timeout="15"

2. [1] If you're using an Azure fence agent for fencing, run the following commands.
After you've assigned roles to both cluster nodes, you can configure the fencing
devices in the cluster.

Bash

sudo crm configure property stonith-enabled=true
sudo crm configure property concurrent-fencing=true

7 Note

The 'pcmk_host_map' option is required in the command only if the


hostnames and the Azure VM names are not identical. Specify the mapping in
the format hostname:vm-name.

Managed identity

Bash

# Adjust the command with your subscription ID and resource group of the VM

sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \
  params msi=true subscriptionId="subscription ID" resourceGroup="resource group" \
  pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 \
  pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
  op monitor interval=3600 timeout=120

sudo crm configure property stonith-timeout=900

If you're using a fencing device based on a service principal configuration, read
Change from SPN to MSI for Pacemaker clusters using Azure fencing and learn how to
convert to a managed identity configuration.

) Important

The monitoring and fencing operations are deserialized. As a result, if there's a


longer-running monitoring operation and simultaneous fencing event, there's no
delay to the cluster failover because the monitoring operation is already running.

 Tip

The Azure fence agent requires outbound connectivity to the public endpoints, as
documented, along with possible solutions, in Public endpoint connectivity for
VMs using standard ILB.

Configure Pacemaker for Azure scheduled events
Azure offers scheduled events. Scheduled events are provided via the metadata service
and allow time for the application to prepare for such events. The resource agent
azure-events-az monitors for scheduled Azure events. If events are detected and the
resource agent determines that another cluster node is available, it sets a cluster
health attribute. When the cluster health attribute is set for a node, the location
constraint triggers and all resources whose names don't start with "health-" are
migrated away from the node with the scheduled event. After the affected cluster node
is free of running cluster resources, the scheduled event is acknowledged and can
execute its action, such as a restart.

) Important
Previously, this document described the use of the resource agent azure-events .
The new resource agent azure-events-az fully supports Azure environments that are
deployed in different availability zones. We recommend that you use the newer
azure-events-az agent for all SAP highly available systems with Pacemaker.

1. [A] Make sure that the package for the azure-events agent is already installed and
up to date.

Bash

sudo zypper info resource-agents

Minimum version requirements:

SLES 12 SP5: resource-agents-4.3.018.a7fb5035-3.98.1


SLES 15 SP1: resource-agents-4.3.0184.6ee15eb2-150100.4.72.1
SLES 15 SP2: resource-agents-4.4.0+git57.70549516-150200.3.56.1
SLES 15 SP3: resource-agents-4.8.0+git30.d0077df0-150300.8.31.1
SLES 15 SP4 and newer: resource-agents-4.10.0+git40.0f4de473-
150400.3.19.1

2. [1] Configure the resources in Pacemaker.

Bash

# Place the cluster in maintenance mode
sudo crm configure property maintenance-mode=true

3. [1] Set the pacemaker cluster health node strategy and constraint

Bash

sudo crm configure property node-health-strategy=custom
sudo crm configure location loc_azure_health \
  /'!health-.*'/ rule '#health-azure': defined '#uname'

) Important

Don't define any other resources in the cluster starting with "health-", besides
the resources described in the next steps of the documentation.
4. [1] Set the initial value of the cluster attributes. Run the command for each
cluster node, including the majority maker VM in scale-out environments.

Bash

sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --update 0
sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --update 0

5. [1] Configure the resources in Pacemaker. Important: The resources must start with
'health-azure'.

Bash

sudo crm configure primitive health-azure-events ocf:heartbeat:azure-events-az \
  meta allow-unhealthy-nodes=true \
  op monitor interval=10s

sudo crm configure clone health-azure-events-cln health-azure-events

7 Note

When you configure the 'health-azure-events' resource, the following warning
message can be ignored:

WARNING: health-azure-events: unknown attribute 'allow-unhealthy-nodes'.

6. Take the Pacemaker cluster out of maintenance mode

Bash

sudo crm configure property maintenance-mode=false

7. Clear any errors during enablement and verify that the health-azure-events
resources have started successfully on all cluster nodes.

Bash

sudo crm resource cleanup

The first query execution for scheduled events can take up to 2 minutes. To test
Pacemaker with scheduled events, you can use reboot or redeploy actions for the
cluster VMs. For more information, see the scheduled events documentation.
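
To verify that the agent is working, you can query the cluster health attribute that the resource agent maintains for each node; a value of 0 means that no scheduled event is pending. This sketch uses the example hostnames from this article:

Bash

# Query the health attribute for each node (0 = no scheduled event pending)
sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --query
sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --query

# Confirm that the clone resource is running on all nodes
sudo crm resource status health-azure-events-cln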
7 Note

After you've configured the Pacemaker resources for the azure-events agent,
if you place the cluster in or out of maintenance mode, you might get warning
messages such as:

WARNING: cib-bootstrap-options: unknown attribute 'hostName_hostname'


WARNING: cib-bootstrap-options: unknown attribute 'azure-
events_globalPullState'
WARNING: cib-bootstrap-options: unknown attribute 'hostName_ hostname'
These warning messages can be ignored.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server
for SAP applications
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High availability of SAP HANA on Azure Virtual Machines
High availability for SAP HANA on
Azure VMs on SUSE Linux Enterprise
Server
Article • 04/08/2024

To establish high availability in an on-premises SAP HANA deployment, you can use
either SAP HANA system replication or shared storage.

Currently on Azure virtual machines (VMs), SAP HANA system replication on Azure is the
only supported high availability function.

SAP HANA system replication consists of one primary node and at least one secondary
node. Changes to the data on the primary node are replicated to the secondary node
synchronously or asynchronously.

This article describes how to deploy and configure the VMs, install the cluster
framework, and install and configure SAP HANA system replication.

Before you begin, read the following SAP Notes and papers:

SAP Note 1928533 . The note includes:


The list of Azure VM sizes that are supported for the deployment of SAP
software.
Important capacity information for Azure VM sizes.
The supported SAP software, operating system (OS), and database
combinations.
The required SAP kernel versions for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists the prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server 12 (SLES 12) for SAP Applications.
SAP Note 2684254 has recommended OS settings for SUSE Linux Enterprise
Server 15 (SLES 15) for SAP Applications.
SAP Note 2235581 has SAP HANA supported Operating systems
SAP Note 2178632 has detailed information about all the monitoring metrics
that are reported for SAP in Azure.
SAP Note 2191498 has the required SAP host agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing for Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Note 401162 has information about how to avoid "address already in use"
errors when you set up HANA system replication.
SAP Community Support Wiki has all the required SAP Notes for Linux.
SAP HANA Certified IaaS Platforms .
Azure Virtual Machines planning and implementation for SAP on Linux guide.
Azure Virtual Machines deployment for SAP on Linux guide.
Azure Virtual Machines DBMS deployment for SAP on Linux guide.
SUSE Linux Enterprise Server for SAP Applications 15 best practices guides and
SUSE Linux Enterprise Server for SAP Applications 12 best practices guides :
Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP
Applications). The guide contains all the required information to set up SAP
HANA system replication for on-premises development. Use this guide as a
baseline.
Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP
Applications).

Plan for SAP HANA high availability


To achieve high availability, install SAP HANA on two VMs. The data is replicated by
using HANA system replication.

The SAP HANA system replication setup uses a dedicated virtual host name and virtual
IP addresses. In Azure, you need a load balancer to deploy a virtual IP address.
The preceding figure shows an example load balancer that has these configurations:

Front-end IP address: 10.0.0.13 for HN1-db


Probe port: 62503

Prepare the infrastructure


The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.

Deploy Linux VMs manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy virtual machines for SAP HANA. Choose a suitable SLES image that's supported
for the HANA system. You can deploy a VM in any of the availability options: virtual
machine scale set, availability zone, or availability set.

) Important

Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.

Configure Azure load balancer


During VM configuration, you have the option to create or select an existing load
balancer in the networking section. Follow the steps below to set up a standard load
balancer for the high-availability setup of the HANA database.

Azure Portal

Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:
1. Frontend IP Configuration: Create a front-end IP. Select the same virtual
network and subnet name as your database virtual machines.
2. Backend Pool: Create a back-end pool and add database VMs.
3. Inbound rules: Create a load-balancing rule. Follow the same steps for both
load-balancing rules.

Frontend IP address: Select a front-end IP.


Backend pool: Select a back-end pool.
High-availability ports: Select this option.
Protocol: Select TCP.
Health Probe: Create a health probe with the following details:
Protocol: Select TCP.
Port: For example, 625<instance-no.>.
Interval: Enter 5.
Probe Threshold: Enter 2.
Idle timeout (minutes): Enter 30.
Enable Floating IP: Select this option.

7 Note

The health probe configuration property numberOfProbes , otherwise known as
Unhealthy threshold in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to 2 .
It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or the PowerShell command.
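
For example, with a recent Azure CLI version in which the probe-threshold parameter is available (it's in preview at the time of writing), an existing probe can be updated as follows. The resource names are placeholders:

Bash

az network lb probe update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --probe-threshold 2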

For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .

) Important

A floating IP address isn't supported on a network interface card (NIC) secondary IP


configuration in load-balancing scenarios. For details, see Azure Load Balancer
limitations. If you need another IP address for the VM, deploy a second NIC.

7 Note
When VMs that don't have public IP addresses are placed in the back-end pool of
an internal (no public IP address) standard instance of Azure Load Balancer, the
default configuration is no outbound internet connectivity. You can take extra steps
to allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs by using Azure Standard
Load Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs that are placed behind Azure
Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set
parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer
health probes or SAP note 2382421 .
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune version to 3.1.1 or higher. For more
details, see saptune 3.1.1 – Do I Need to Update? .
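
For example, you can set the net.ipv4.tcp_timestamps parameter persistently and apply it to the running system; the file name under /etc/sysctl.d is only a suggestion:

Bash

# Persist the setting across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-tcp-timestamps.conf

# Apply the setting immediately
sudo sysctl -w net.ipv4.tcp_timestamps=0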

Create a Pacemaker cluster


Follow the steps in Set up Pacemaker on SUSE Linux Enterprise Server in Azure to create
a basic Pacemaker cluster for this HANA server. You can use the same Pacemaker cluster
for SAP HANA and SAP NetWeaver (A)SCS.

Install SAP HANA


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies only to node 1.
[2]: The step applies only to node 2 of the Pacemaker cluster.

Replace <placeholders> with the values for your SAP HANA installation.

1. [A] Set up the disk layout by using Logical Volume Manager (LVM).

We recommend that you use LVM for volumes that store data and log files. The
following example assumes that the VMs have four attached data disks that are
used to create two volumes.

a. Run this command to list all the available disks:


Bash

ls /dev/disk/azure/scsi1/lun*

Example output:

Output

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2 /dev/disk/azure/scsi1/lun3

b. Create physical volumes for all the disks that you want to use:

Bash

sudo pvcreate /dev/disk/azure/scsi1/lun0
sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3

c. Create a volume group for the data files. Use one volume group for the log files
and one volume group for the shared directory of SAP HANA:

Bash

sudo vgcreate vg_hana_data_<HANA SID> /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_<HANA SID> /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_<HANA SID> /dev/disk/azure/scsi1/lun3

d. Create the logical volumes.

A linear volume is created when you use lvcreate without the -i switch. We
suggest that you create a striped volume for better I/O performance. Align the
stripe sizes to the values that are described in SAP HANA VM storage
configurations. The -i argument should be the number of underlying physical
volumes, and the -I argument is the stripe size.

For example, if two physical volumes are used for the data volume, the -i
switch argument is set to 2, and the stripe size for the data volume is 256 KiB.
One physical volume is used for the log volume, so no -i or -I switches are
explicitly used for the log volume commands.

) Important
When you use more than one physical volume for each data volume, log
volume, or shared volume, use the -i switch and set it to the number of
underlying physical volumes. When you create a striped volume, use the -
I switch to specify the stripe size.

For recommended storage configurations, including stripe sizes and the


number of disks, see SAP HANA VM storage configurations.

Bash

sudo lvcreate <-i number of physical volumes> <-I stripe size for the data volume> -l 100%FREE -n hana_data vg_hana_data_<HANA SID>
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_<HANA SID>
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_<HANA SID>
sudo mkfs.xfs /dev/vg_hana_data_<HANA SID>/hana_data
sudo mkfs.xfs /dev/vg_hana_log_<HANA SID>/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_<HANA SID>/hana_shared
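
For instance, with the two data-disk physical volumes and the 256 KiB stripe size mentioned above, and a hypothetical SID of HN1, the data volume command would look like this:

Bash

# Example only: stripe the data volume across 2 physical volumes with a 256 KiB stripe size
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1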

e. Create the mount directories and copy the universally unique identifier (UUID)
of all the logical volumes:

Bash

sudo mkdir -p /hana/data/<HANA SID>
sudo mkdir -p /hana/log/<HANA SID>
sudo mkdir -p /hana/shared/<HANA SID>
# Write down the ID of /dev/vg_hana_data_<HANA SID>/hana_data, /dev/vg_hana_log_<HANA SID>/hana_log, and /dev/vg_hana_shared_<HANA SID>/hana_shared
sudo blkid

f. Edit the /etc/fstab file to create fstab entries for the three logical volumes:

Bash

sudo vi /etc/fstab

g. Insert the following lines in the /etc/fstab file:

Bash

/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_data_<HANA SID>-hana_data> /hana/data/<HANA SID> xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_log_<HANA SID>-hana_log> /hana/log/<HANA SID> xfs defaults,nofail 0 2
/dev/disk/by-uuid/<UUID of /dev/mapper/vg_hana_shared_<HANA SID>-hana_shared> /hana/shared/<HANA SID> xfs defaults,nofail 0 2

h. Mount the new volumes:

Bash

sudo mount -a

2. [A] Set up the disk layout by using plain disks.

For demo systems, you can place your HANA data and log files on one disk.

a. Create a partition on /dev/disk/azure/scsi1/lun0 and format it by using XFS:

Bash

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun0'
sudo mkfs.xfs /dev/disk/azure/scsi1/lun0-part1

# Write down the ID of /dev/disk/azure/scsi1/lun0-part1
sudo /sbin/blkid
sudo vi /etc/fstab

b. Insert this line in the /etc/fstab file:

Bash

/dev/disk/by-uuid/<UUID> /hana xfs defaults,nofail 0 2

c. Create the target directory and mount the disk:

Bash

sudo mkdir /hana
sudo mount -a

3. [A] Set up host name resolution for all hosts.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows you how to use the /etc/hosts file. Replace the IP addresses and the
host names in the following commands.

a. Edit the /etc/hosts file:


Bash

sudo vi /etc/hosts

b. Insert the following lines in the /etc/hosts file. Change the IP addresses and host
names to match your environment.

Bash

10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1

4. [A] Install the SAP HANA high availability packages:

Run the following command to install the high availability packages:

Bash

sudo zypper install SAPHanaSR

To install SAP HANA system replication, review chapter 4 in the SAP HANA SR
Performance Optimized Scenario guide.

5. [A] Run the hdblcm program from the HANA installation media.

When you're prompted, enter the following values:


a. Choose installation: Enter 1.
b. Select additional components for installation: Enter 1.
c. Enter installation path: Enter /hana/shared and select Enter.
d. Enter local host name: Enter .. and select Enter.
e. Do you want to add additional hosts to the system? (y/n): Enter n and select
Enter.
f. Enter the SAP HANA system ID: Enter your HANA SID.
g. Enter the instance number: Enter the HANA instance number. If you deployed
by using the Azure template or if you followed the manual deployment section
of this article, enter 03.
h. Select the database mode / Enter the index: Enter or select 1 and select Enter.
i. Select the system usage / Enter the index: Select the system usage value 4.
j. Enter the location of the data volumes: Enter /hana/data/<HANA SID> and
select Enter.
k. Enter the location of the log volumes: Enter /hana/log/<HANA SID> and select
Enter.
l. Restrict maximum memory allocation?: Enter n and select Enter.
m. Enter the certificate host name for the host: Enter ... and select Enter.
n. Enter the SAP host agent user (sapadm) password: Enter the host agent user
password, and then select Enter.
o. Confirm the SAP host agent user (sapadm) password: Enter the host agent user
password again, and then select Enter.
p. Enter the system administrator (hdbadm) password: Enter the system
administrator password, and then select Enter.
q. Confirm the system administrator (hdbadm) password: Enter the system
administrator password again, and then select Enter.
r. Enter the system administrator home directory: Enter /usr/sap/<HANA
SID>/home and select Enter.
s. Enter the system administrator login shell: Enter /bin/sh and select Enter.
t. Enter the system administrator user ID: Enter 1001 and select Enter.
u. Enter ID of the user group (sapsys): Enter 79 and select Enter.
v. Enter the database user (SYSTEM) password: Enter the database user password,
and then select Enter.
w. Confirm the database user (SYSTEM) password: Enter the database user
password again, and then select Enter.
x. Restart the system after machine reboot? (y/n): Enter n and select Enter.
y. Do you want to continue? (y/n): Validate the summary. Enter y to continue.

6. [A] Upgrade the SAP host agent.

Download the latest SAP host agent archive from the SAP Software Center . Run
the following command to upgrade the agent. Replace the path to the archive to
point to the file that you downloaded.

Bash

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP host agent SAR>

Configure SAP HANA 2.0 system replication


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies only to node 1.
[2]: The step applies only to node 2 of the Pacemaker cluster.

Replace <placeholders> with the values for your SAP HANA installation.
1. [1] Create the tenant database.

If you're using SAP HANA 2.0 or SAP HANA MDC, create a tenant database for
your SAP NetWeaver system.

Run the following command as <HANA SID>adm:

Bash

hdbsql -u SYSTEM -p "<password>" -i <instance number> -d SYSTEMDB 'CREATE DATABASE <SAP SID> SYSTEM USER PASSWORD "<password>"'

2. [1] Configure system replication on the first node:

First, back up the databases as <HANA SID>adm:

Bash

hdbsql -d SYSTEMDB -u SYSTEM -p "<password>" -i <instance number> "BACKUP DATA USING FILE ('<name of initial backup file for SYS>')"
hdbsql -d <HANA SID> -u SYSTEM -p "<password>" -i <instance number> "BACKUP DATA USING FILE ('<name of initial backup file for HANA SID>')"
hdbsql -d <SAP SID> -u SYSTEM -p "<password>" -i <instance number> "BACKUP DATA USING FILE ('<name of initial backup file for SAP SID>')"

Then, copy the system public key infrastructure (PKI) files to the secondary site:

Bash

scp /usr/sap/<HANA SID>/SYS/global/security/rsecssfs/data/SSFS_<HANA SID>.DAT hn1-db-1:/usr/sap/<HANA SID>/SYS/global/security/rsecssfs/data/
scp /usr/sap/<HANA SID>/SYS/global/security/rsecssfs/key/SSFS_<HANA SID>.KEY hn1-db-1:/usr/sap/<HANA SID>/SYS/global/security/rsecssfs/key/

Create the primary site:

Bash

hdbnsutil -sr_enable --name=<site 1>

3. [2] Configure system replication on the second node:

Register the second node to start the system replication.

Run the following command as <HANA SID>adm:


Bash

sapcontrol -nr <instance number> -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=<instance number> --replicationMode=sync --name=<site 2>

Configure SAP HANA 1.0 system replication


The steps in this section use the following prefixes:

[A]: The step applies to all nodes.


[1]: The step applies only to node 1.
[2]: The step applies only to node 2 of the Pacemaker cluster.

Replace <placeholders> with the values for your SAP HANA installation.

1. [1] Create the required users.

Run the following command as root:

Bash

PATH="$PATH:/usr/sap/<HANA SID>/HDB<instance number>/exe"


hdbsql -u system -i <instance number> 'CREATE USER hdbhasync PASSWORD "
<password>"'
hdbsql -u system -i <instance number> 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i <instance number> 'ALTER USER hdbhasync DISABLE
PASSWORD LIFETIME'

2. [A] Create the keystore entry.

Run the following command as root to create a new keystore entry:

Bash

PATH="$PATH:/usr/sap/<HANA SID>/HDB<instance number>/exe"


hdbuserstore SET hdbhaloc localhost:3<instance number>15 hdbhasync
<password>

3. [1] Back up the database.

Back up the databases as root:

Bash
PATH="$PATH:/usr/sap/<HANA SID>/HDB<instance number>/exe"
hdbsql -d SYSTEMDB -u system -i <instance number> "BACKUP DATA USING
FILE ('<name of initial backup file>')"

If you use a multi-tenant installation, also back up the tenant database:

Bash

hdbsql -d <HANA SID> -u system -i <instance number> "BACKUP DATA USING FILE ('<name of initial backup file>')"

4. [1] Configure system replication on the first node.

Create the primary site as <HANA SID>adm:

Bash

su - hdbadm
hdbnsutil -sr_enable --name=<site 1>

5. [2] Configure system replication on the secondary node.

Register the secondary site as <HANA SID>adm:

Bash

sapcontrol -nr <instance number> -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=<HANA SID>-db-<database 1> --remoteInstance=<instance number> --replicationMode=sync --name=<site 2>

Implement HANA hooks SAPHanaSR and susChkSrv
In this important step, you optimize the integration with the cluster and improve
detection when a cluster failover is needed. We highly recommend that you configure
the SAPHanaSR Python hook. For HANA 2.0 SP5 and later, we recommend that you
implement the SAPHanaSR hook and the susChkSrv hook.

The susChkSrv hook extends the functionality of the main SAPHanaSR HA provider. It
acts when the HANA process hdbindexserver crashes. If a single process crashes, HANA
typically tries to restart it. Restarting the indexserver process can take a long time,
during which the HANA database isn't responsive.
With susChkSrv implemented, an immediate and configurable action is executed. The
action triggers a failover in the configured timeout period instead of waiting for the
hdbindexserver process to restart on the same node.

1. [A] Install the HANA system replication hook. The hook must be installed on both
HANA database nodes.

 Tip

The SAPHanaSR Python hook can be implemented only for HANA 2.0. The
SAPHanaSR package must be at least version 0.153.

The susChkSrv Python hook requires SAP HANA 2.0 SP5, and SAPHanaSR
version 0.161.1_BF or later must be installed.

a. Stop HANA on both nodes.

Run the following code as <sapsid>adm:

Bash

sapcontrol -nr <instance number> -function StopSystem

b. Adjust global.ini on each cluster node. If the requirements for the susChkSrv
hook aren't met, remove the entire [ha_dr_provider_suschksrv] block from the
following parameters.

You can adjust the behavior of susChkSrv by using the action_on_lost


parameter. Valid values are [ ignore | stop | kill | fence ].

Bash

# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR
execution_order = 3
action_on_lost = fence
[trace]
ha_dr_saphanasr = info

If you point to the standard /usr/share/SAPHanaSR location, the Python hook


code updates automatically through OS updates or package updates. HANA
uses the hook code updates when it next restarts. With an optional own path
like /hana/shared/myHooks, you can decouple OS updates from the hook
version that you use.

2. [A] The cluster requires sudoers configuration on each cluster node for <SAP
SID>adm. In this example, that's achieved by creating a new file.

Run the following command as root:

Bash

cat << EOF > /etc/sudoers.d/20-saphana
# Needed for SAPHanaSR and susChkSrv Python hooks
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=HN1 --case=fenceMe
EOF

For details about implementing the SAP HANA system replication hook, see Set up
HANA HA/DR providers .

3. [A] Start SAP HANA on both nodes.

Run the following command as <SAP SID>adm:

Bash

sapcontrol -nr <instance number> -function StartSystem

4. [1] Verify the hook installation.

Run the following command as <SAP SID>adm on the active HANA system
replication site:

Bash

cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example output
# 2021-04-08 22:18:15.877583 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:18:46.531564 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:21:26.816573 ha_dr_SAPHanaSR SOK

Verify the susChkSrv hook installation.

Run the following command as <SAP SID>adm on all HANA VMs:

Bash

cdtrace
egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc
# Example output
# 2022-11-03 18:06:21.116728 susChkSrv.init() version 0.7.7, parameter info: action_on_lost=fence stop_timeout=20 kill_signal=9
# 2022-11-03 18:06:27.613588 START: indexserver event looks like graceful tenant start
# 2022-11-03 18:07:56.143766 START: indexserver event looks like graceful tenant start (indexserver started)

Create SAP HANA cluster resources


First, create the HANA topology.

Run the following commands on one of the Pacemaker cluster nodes:

Bash

sudo crm configure property maintenance-mode=true

# Replace <placeholders> with your instance number and HANA system ID

sudo crm configure primitive rsc_SAPHanaTopology_<HANA SID>_HDB<instance number> ocf:suse:SAPHanaTopology \
operations \$id="rsc_sap2_<HANA SID>_HDB<instance number>-operations" \
op monitor interval="10" timeout="600" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
params SID="<HANA SID>" InstanceNumber="<instance number>"

sudo crm configure clone cln_SAPHanaTopology_<HANA SID>_HDB<instance number> rsc_SAPHanaTopology_<HANA SID>_HDB<instance number> \
meta clone-node-max="1" target-role="Started" interleave="true"

Next, create the HANA resources:

) Important
In recent testing, netcat stops responding to requests due to a backlog and
because of its limitation of handling only one connection. The netcat resource
stops listening to the Azure Load Balancer requests, and the floating IP becomes
unavailable.

For existing Pacemaker clusters, we previously recommended that you replace
netcat with socat. Currently, we recommend that you use the azure-lb resource
agent, which is part of the resource-agents package. The following package
versions are required:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Making this change requires a brief downtime.

For existing Pacemaker clusters, if your configuration was already changed to use
socat as described in Azure Load Balancer Detection Hardening , you don't
need to immediately switch to the azure-lb resource agent.
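To check which resource-agents version is installed on a node, you can query the RPM database:

Bash

rpm -q resource-agents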

7 Note

This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.

Bash

# Replace <placeholders> with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.

sudo crm configure primitive rsc_SAPHana_<HANA SID>_HDB<instance number> ocf:suse:SAPHana \
operations \$id="rsc_sap_<HANA SID>_HDB<instance number>-operations" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
params SID="<HANA SID>" InstanceNumber="<instance number>" PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

sudo crm configure ms msl_SAPHana_<HANA SID>_HDB<instance number> rsc_SAPHana_<HANA SID>_HDB<instance number> \
meta notify="true" clone-max="2" clone-node-max="1" \
target-role="Started" interleave="true"

sudo crm resource meta msl_SAPHana_<HANA SID>_HDB<instance number> set priority 100

sudo crm configure primitive rsc_ip_<HANA SID>_HDB<instance number> ocf:heartbeat:IPaddr2 \
meta target-role="Started" \
operations \$id="rsc_ip_<HANA SID>_HDB<instance number>-operations" \
op monitor interval="10s" timeout="20s" \
params ip="<front-end IP address>"

sudo crm configure primitive rsc_nc_<HANA SID>_HDB<instance number> azure-lb port=625<instance number> \
op monitor timeout=20s interval=10 \
meta resource-stickiness=0

sudo crm configure group g_ip_<HANA SID>_HDB<instance number> rsc_ip_<HANA SID>_HDB<instance number> rsc_nc_<HANA SID>_HDB<instance number>

sudo crm configure colocation col_saphana_ip_<HANA SID>_HDB<instance number> 4000: g_ip_<HANA SID>_HDB<instance number>:Started \
msl_SAPHana_<HANA SID>_HDB<instance number>:Master

sudo crm configure order ord_SAPHana_<HANA SID>_HDB<instance number> Optional: cln_SAPHanaTopology_<HANA SID>_HDB<instance number> \
msl_SAPHana_<HANA SID>_HDB<instance number>

# Clean up the HANA resources. The HANA resources might have failed because of a known issue.
sudo crm resource cleanup rsc_SAPHana_<HANA SID>_HDB<instance number>

sudo crm configure property priority-fencing-delay=30

sudo crm configure property maintenance-mode=false

sudo crm configure rsc_defaults resource-stickiness=1000
sudo crm configure rsc_defaults migration-threshold=5000

) Important

We recommend that you set AUTOMATED_REGISTER to false only while you complete
thorough failover tests, to prevent a failed primary instance from automatically
registering as secondary. When the failover tests are successfully completed, set
AUTOMATED_REGISTER to true, so that after takeover, system replication
automatically resumes.
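After testing, you can flip the parameter without re-creating the resource. The following is a minimal sketch with crmsh, run on one cluster node:

Bash

# Enable automated registration after successful failover testing
sudo crm resource param rsc_SAPHana_<HANA SID>_HDB<instance number> set AUTOMATED_REGISTER true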

Make sure that the cluster status is OK and that all the resources started. It doesn't
matter which node the resources are running on.
Bash

sudo crm_mon -r

# Online: [ hn1-db-0 hn1-db-1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started hn1-db-0
# Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Configure HANA active/read-enabled system replication in a Pacemaker cluster
In SAP HANA 2.0 SPS 01 and later versions, SAP allows an active/read-enabled setup for
SAP HANA system replication. In this scenario, the secondary systems of SAP HANA
system replication can be actively used for read-intensive workloads.

To support this setup in a cluster, a second virtual IP address is required so that clients
can access the secondary read-enabled SAP HANA database. To ensure that the
secondary replication site can still be accessed after a takeover, the cluster needs to
move the virtual IP address around with the secondary of the SAPHana resource.

This section describes the extra steps that are required to manage a HANA active/read-
enabled system replication in a SUSE high availability cluster that uses a second virtual
IP address.

Before you proceed, make sure that you have fully configured the SUSE high availability
cluster that manages SAP HANA database as described in earlier sections.
Set up the load balancer for active/read-enabled system replication
To proceed with extra steps to provision the second virtual IP, make sure that you
configured Azure Load Balancer as described in Deploy Linux VMs manually via Azure
portal.

For the standard load balancer, complete these extra steps on the same load balancer
that you created earlier.

1. Create a second front-end IP pool:


a. Open the load balancer, select frontend IP pool, and select Add.
b. Enter the name of the second front-end IP pool (for example, hana-
secondaryIP).
c. Set the Assignment to Static and enter the IP address (for example, 10.0.0.14).
d. Select OK.
e. After the new front-end IP pool is created, note the front-end IP address.
2. Create a health probe:
a. In the load balancer, select health probes, and select Add.
b. Enter the name of the new health probe (for example, hana-secondaryhp).
c. Select TCP as the protocol and port 626<instance number>. Keep the Interval
value set to 5, and the Unhealthy threshold value set to 2.
d. Select OK.
3. Create the load-balancing rules:
a. In the load balancer, select load balancing rules, and select Add.
b. Enter the name of the new load balancer rule (for example, hana-secondarylb).
c. Select the front-end IP address, the back-end pool, and the health probe that
you created earlier (for example, hana-secondaryIP, hana-backend, and hana-
secondaryhp).
d. Select HA Ports.
e. Increase idle timeout to 30 minutes.
f. Make sure that you enable floating IP.
g. Select OK.
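If you prefer scripting over the portal, the same three objects can be created with the Azure CLI. The following is a minimal sketch; the resource group, load balancer, virtual network, subnet, and back-end pool names (MyResourceGroup, hana-lb, MyVNet, MySubnet, hana-backend) are placeholders for your own values:

Bash

# Second front-end IP (10.0.0.14 is the example address from the portal steps)
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondaryIP --private-ip-address 10.0.0.14 --vnet-name MyVNet --subnet MySubnet

# Health probe on port 626<instance number>, for example 62603 for instance 03
az network lb probe create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondaryhp --protocol Tcp --port 62603 --interval 5

# HA-ports load-balancing rule with floating IP and a 30-minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-secondarylb --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-secondaryIP --backend-pool-name hana-backend \
  --probe-name hana-secondaryhp --idle-timeout 30 --floating-ip true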

Set up HANA active/read-enabled system replication


The steps to configure HANA system replication are described in Configure SAP HANA
2.0 system replication. If you're deploying a read-enabled secondary scenario, when you
set up system replication on the second node, run the following command as <HANA
SID>adm:

Bash

sapcontrol -nr <instance number> -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=<instance number> --replicationMode=sync --name=<site 2> --operationMode=logreplay_readaccess

Add a secondary virtual IP address resource


You can set up the second virtual IP and the appropriate colocation constraint by using
the following commands:

Bash

crm configure property maintenance-mode=true

crm configure primitive rsc_secip_<HANA SID>_HDB<instance number> ocf:heartbeat:IPaddr2 \
meta target-role="Started" \
operations \$id="rsc_secip_<HANA SID>_HDB<instance number>-operations" \
op monitor interval="10s" timeout="20s" \
params ip="<secondary IP address>"

crm configure primitive rsc_secnc_<HANA SID>_HDB<instance number> azure-lb port=626<instance number> \
op monitor timeout=20s interval=10 \
meta resource-stickiness=0

crm configure group g_secip_<HANA SID>_HDB<instance number> rsc_secip_<HANA SID>_HDB<instance number> rsc_secnc_<HANA SID>_HDB<instance number>

crm configure colocation col_saphana_secip_<HANA SID>_HDB<instance number> 4000: g_secip_<HANA SID>_HDB<instance number>:Started \
msl_SAPHana_<HANA SID>_HDB<instance number>:Slave

crm configure property maintenance-mode=false

Make sure that the cluster status is OK and that all the resources started. The second
virtual IP runs on the secondary site along with the SAPHana secondary resource.

Bash

sudo crm_mon -r

# Online: [ hn1-db-0 hn1-db-1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started hn1-db-0
# Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
# Resource Group: g_secip_HN1_HDB03:
# rsc_secip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started
hn1-db-1
# rsc_secnc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started
hn1-db-1

The next section describes the typical set of failover tests to execute.

Considerations when you test a HANA cluster that's configured with a read-enabled
secondary:

When you migrate the SAPHana_<HANA SID>_HDB<instance number> cluster resource
to hn1-db-1 , the second virtual IP moves to hn1-db-0 . If you have configured
AUTOMATED_REGISTER="false" and HANA system replication isn't registered
automatically, the second virtual IP runs on hn1-db-0 because the server is
available and cluster services are online.

When you test a server crash, the second virtual IP resources ( rsc_secip_<HANA
SID>_HDB<instance number> ) and the Azure load balancer port resource
( rsc_secnc_<HANA SID>_HDB<instance number> ) run on the primary server alongside
the primary virtual IP resources. While the secondary server is down, the
applications that are connected to a read-enabled HANA database connect to the
primary HANA database. The behavior is expected because you don't want
applications that are connected to a read-enabled HANA database to be
inaccessible while the secondary server is unavailable.

When the secondary server is available and the cluster services are online, the
second virtual IP and port resources automatically move to the secondary server,
even though HANA system replication might not be registered as secondary. Make
sure that you register the secondary HANA database as read-enabled before you
start cluster services on that server. You can configure the HANA instance cluster
resource to automatically register the secondary by setting the parameter
AUTOMATED_REGISTER="true" .

During failover and fallback, the existing connections for applications, which are
then using the second virtual IP to connect to the HANA database, might be
interrupted.

Test the cluster setup


This section describes how you can test your setup. Every test assumes that you're
signed in as root and that the SAP HANA master is running on the hn1-db-0 VM.

Test the migration


Before you start the test, make sure that Pacemaker doesn't have any failed action (run
crm_mon -r ), that there are no unexpected location constraints (for example, leftovers of
a migration test), and that HANA is in sync state, for example, by running SAPHanaSR-showAttr .

Bash

hn1-db-0:~ # SAPHanaSR-showAttr
Sites srHook
----------------
SITE2 SOK
Global cib-time
--------------------------------
global Mon Aug 13 11:26:04 2018
Hosts    clone_state lpa_hn1_lpt node_state op_mode   remoteHost    roles                            score site  srmode sync_state version                vhost
---------------------------------------------------------------------------------------------------------------------------------------------------------------
hn1-db-0 PROMOTED    1534159564  online     logreplay nws-hana-vm-1 4:P:master1:master:worker:master 150   SITE1 sync   PRIM       2.00.030.00.1522209842 nws-hana-vm-0
hn1-db-1 DEMOTED     30          online     logreplay nws-hana-vm-0 4:S:master1:master:worker:master 100   SITE2 sync   SOK        2.00.030.00.1522209842 nws-hana-vm-1

You can migrate the SAP HANA master node by running the following command:

Bash

crm resource move msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1 force

The cluster migrates the SAP HANA master node and the group that contains the
virtual IP address to hn1-db-1 .

When the migration is finished, the crm_mon -r output looks like this example:

Bash

Online: [ hn1-db-0 hn1-db-1 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started hn1-db-1
Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Stopped: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
Failed Actions:
* rsc_SAPHana_HN1_HDB03_start_0 on hn1-db-0 'not running' (7): call=84,
status=complete, exitreason='none',
last-rc-change='Mon Aug 13 11:31:37 2018', queued=0ms, exec=2095ms

With AUTOMATED_REGISTER="false" , the cluster would not restart the failed HANA
database or register it against the new primary on hn1-db-0 . In this case, configure the
HANA instance as secondary by running this command:

Bash

su - <hana sid>adm

# Stop the HANA instance, just in case it is running
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> sapcontrol -nr <instance number> -function StopWait 600 10
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

The migration creates location constraints that need to be deleted again:

Bash

# Switch back to root and clean up the failed state
exit
hn1-db-0:~ # crm resource clear msl_SAPHana_<HANA SID>_HDB<instance number>

You also need to clean up the state of the secondary node resource:

Bash

hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

Monitor the state of the HANA resource by using crm_mon -r . When HANA is started on
hn1-db-0 , the output looks like this example:

Bash

Online: [ hn1-db-0 hn1-db-1 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started hn1-db-1
Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Blocking network communication


Resource state before starting the test:

Bash

Online: [ hn1-db-0 hn1-db-1 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started hn1-db-1
Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Execute a firewall rule to block the communication on one of the nodes.

Bash

# Execute iptables rule on hn1-db-1 (10.0.0.6) to block the incoming and outgoing traffic to hn1-db-0 (10.0.0.5)
iptables -A INPUT -s 10.0.0.5 -j DROP; iptables -A OUTPUT -d 10.0.0.5 -j DROP

When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, cluster nodes try to fence each other simultaneously,
which results in a fence race.

When you configure a fencing device, we recommend that you configure the pcmk_delay_max
property. In a split-brain scenario, the cluster then introduces a random delay, up to
the pcmk_delay_max value, in the fencing action on each node. The node with the
shortest delay is selected for fencing.

Additionally, to ensure that the node that runs the HANA master takes priority and wins
the fence race in a split-brain scenario, we recommend that you set the priority-fencing-delay
property in the cluster configuration. When you enable the priority-fencing-delay property, the
cluster introduces an additional delay in the fencing action, specifically on the node
that hosts the HANA master resource, which allows that node to win the fence race.
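The following is a minimal sketch of both settings, assuming your fencing resource is named stonith-sbd as shown in the earlier crm_mon output; the delay values are illustrative, not recommendations:

Bash

# Prefer the node that hosts the HANA master in a fence race
sudo crm configure property priority-fencing-delay=30

# Add a random delay of up to 15 seconds to the fencing action on each node
sudo crm resource param stonith-sbd set pcmk_delay_max 15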

Run the following command to delete the firewall rule:

Bash

# If the server rebooted after fencing, the iptables rules are already cleared out.
# If the rules haven't been reset, remove them by running the following command.
iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP

Test SBD fencing


You can test the setup of SBD by killing the inquisitor process:

Bash
hn1-db-0:~ # ps aux | grep sbd
root 1912 0.0 0.0 85420 11740 ? SL 12:25 0:00 sbd:
inquisitor
root 1929 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd:
watcher: /dev/disk/by-id/scsi-360014056f268462316e4681b704a9f73 - slot: 0 -
uuid: 7b862dba-e7f7-4800-92ed-f76a4e3978c8
root 1930 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd:
watcher: /dev/disk/by-id/scsi-360014059bc9ea4e4bac4b18808299aaf - slot: 0 -
uuid: 5813ee04-b75c-482e-805e-3b1e22ba16cd
root 1931 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd:
watcher: /dev/disk/by-id/scsi-36001405b8dddd44eb3647908def6621c - slot: 0 -
uuid: 986ed8f8-947d-4396-8aec-b933b75e904c
root 1932 0.0 0.0 90524 16656 ? SL 12:25 0:00 sbd:
watcher: Pacemaker
root 1933 0.0 0.0 102708 28260 ? SL 12:25 0:00 sbd:
watcher: Cluster
root 13877 0.0 0.0 9292 1572 pts/0 S+ 12:27 0:00 grep sbd

hn1-db-0:~ # kill -9 1912

The <HANA SID>-db-<database 1> cluster node reboots. The Pacemaker service might not
restart. Make sure that you start it again.

Test a manual failover


You can test a manual failover by stopping the Pacemaker service on the hn1-db-0 node:

Bash

service pacemaker stop

After the failover, you can start the service again. If you set AUTOMATED_REGISTER="false" ,
the SAP HANA resource on the hn1-db-0 node fails to start as secondary.

In this case, configure the HANA instance as secondary by running this command:

Bash

service pacemaker start

su - <hana sid>adm

# Stop the HANA instance, just in case it is running
sapcontrol -nr <instance number> -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

# Switch back to root and clean up the failed state
exit
crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

SUSE tests

) Important

Make sure that the OS that you select is SAP certified for SAP HANA on the specific
VM types you plan to use. You can look up SAP HANA-certified VM types and their
OS releases in SAP HANA Certified IaaS Platforms . Make sure that you look at
the details of the VM type you plan to use to get the complete list of SAP HANA-
supported OS releases for that VM type.

Run all test cases that are listed in the SAP HANA SR Performance Optimized Scenario
guide or SAP HANA SR Cost Optimized Scenario guide, depending on your scenario.
You can find the guides listed in SLES for SAP best practices .

The following tests are a copy of the test descriptions of the SAP HANA SR Performance
Optimized Scenario SUSE Linux Enterprise Server for SAP Applications 12 SP1 guide. For
an up-to-date version, also read the guide itself. Always make sure that HANA is in sync
before you start the test, and make sure that the Pacemaker configuration is correct.

In the following test descriptions, we assume PREFER_SITE_TAKEOVER="true" and AUTOMATED_REGISTER="false" .

7 Note

The following tests are designed to be run in sequence. Each test depends on the
exit state of the preceding test.

1. Test 1: Stop the primary database on node 1.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hana sid>adm on the hn1-db-0 node:

Bash

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> HDB stop

Pacemaker detects the stopped HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-0 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.

Run the following commands to register the hn1-db-0 node as secondary and
clean up the failed resource:

Bash

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

2. Test 2: Stop the primary database on node 2.

The resource state before starting the test:

Output
Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as <hana sid>adm on the hn1-db-1 node:

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB stop

Pacemaker detects the stopped HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-1 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.

Run the following commands to register the hn1-db-1 node as secondary and
clean up the failed resource:

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=<instance number> --replicationMode=sync --name=<site 2>

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
3. Test 3: Crash the primary database on node 1.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hana sid>adm on the hn1-db-0 node:

Bash

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> HDB kill-9

Pacemaker detects the killed HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-0 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.

Run the following commands to register the hn1-db-0 node as secondary and
clean up the failed resource:

Bash

hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

4. Test 4: Crash the primary database on node 2.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as <hana sid>adm on the hn1-db-1 node:

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB kill-9

Pacemaker detects the killed HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-1 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.

Run the following commands to register the hn1-db-1 node as secondary and
clean up the failed resource.

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=<instance number> --replicationMode=sync --name=<site 2>

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output
Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

5. Test 5: Crash the primary site node (node 1).

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as root on the hn1-db-0 node:

Bash

hn1-db-0:~ # echo 'b' > /proc/sysrq-trigger

Pacemaker detects the killed cluster node and fences the node. When the node is
fenced, Pacemaker triggers a takeover of the HANA instance. When the fenced
node is rebooted, Pacemaker doesn't start automatically.

Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-0 node, register the hn1-db-0 node as secondary, and clean up the failed
resource:

Bash

# run as root
# list the SBD device(s)
hn1-db-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-0 clear

hn1-db-0:~ # systemctl start pacemaker

# run as <hana sid>adm
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

6. Test 6: Crash the secondary site node (node 2).

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

Run the following commands as root on the hn1-db-1 node:


Bash

hn1-db-1:~ # echo 'b' > /proc/sysrq-trigger

Pacemaker detects the killed cluster node and fences the node. When the node is
fenced, Pacemaker triggers a takeover of the HANA instance. When the fenced
node is rebooted, Pacemaker doesn't start automatically.

Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-1 node, register the hn1-db-1 node as secondary, and clean up the failed

resource:

Bash

# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-1:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-1 clear

hn1-db-1:~ # systemctl start pacemaker

# run as <hana sid>adm
hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=<instance number> --replicationMode=sync --name=<site 2>

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

7. Test 7: Stop the secondary database on node 2.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hana sid>adm on the hn1-db-1 node:

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB stop

Pacemaker detects the stopped HANA instance and marks the resource as failed
on the hn1-db-1 node. Pacemaker automatically restarts the HANA instance.

Run the following command to clean up the failed state:

Bash

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

8. Test 8: Crash the secondary database on node 2.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as <hana sid>adm on the hn1-db-1 node:

Bash

hn1adm@hn1-db-1:/usr/sap/HN1/HDB03> HDB kill-9

Pacemaker detects the killed HANA instance and marks the resource as failed on
the hn1-db-1 node. Run the following command to clean up the failed state.
Pacemaker then automatically restarts the HANA instance.

Bash

# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

9. Test 9: Crash the secondary site node (node 2) that's running the secondary HANA
database.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as root on the hn1-db-1 node:

Bash

hn1-db-1:~ # echo b > /proc/sysrq-trigger

Pacemaker detects the killed cluster node and fences the node. When the fenced
node is rebooted, Pacemaker doesn't start automatically.

Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-1 node, and clean up the failed resource:

Bash

# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

hn1-db-1:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message hn1-db-1 clear

hn1-db-1:~ # systemctl start pacemaker

hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

10. Test 10: Crash the primary database indexserver.

This test is relevant only when you have set up the susChkSrv hook as outlined in
Implement HANA hooks SAPHanaSR and susChkSrv.

The resource state before starting the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Run the following commands as root on the hn1-db-0 node:

Bash

hn1-db-0:~ # killall -9 hdbindexserver

When the indexserver is terminated, the susChkSrv hook detects the event and
triggers an action to fence the hn1-db-0 node and initiate a takeover process.

Run the following commands to register the hn1-db-0 node as secondary and clean up
the failed resource:

Bash

# run as <hana sid>adm
hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>

# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0

The resource state after the test:

Output

Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1

You can execute a comparable test case by causing the indexserver on the
secondary node to crash. In the event of an indexserver crash, the susChkSrv hook
recognizes the occurrence and initiates an action to fence the secondary node.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability of SAP HANA scale-up with Azure NetApp Files on SUSE Enterprise Linux
Article • 02/27/2024

This article describes how to configure SAP HANA system replication in scale-up
deployment when the HANA file systems are mounted via NFS by using Azure NetApp
Files. In the example configurations and installation commands, instance number 03 and
HANA System ID HN1 are used. SAP HANA replication consists of one primary node and
at least one secondary node.

When steps in this document are marked with the following prefixes, they mean:

[A]: The step applies to all nodes.
[1]: The step applies to node1 only.
[2]: The step applies to node2 only.

Read the following SAP Notes and papers first:

SAP Note 1928533 has:


The list of Azure VM sizes that are supported for the deployment of SAP
software.
Important capacity information for Azure virtual machine (VM) sizes.
The supported SAP software and operating system (OS) and database
combinations.
The required SAP kernel version for Windows and Linux on Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 405827 lists the recommended file system for the HANA environment.
SAP Note 2684254 has recommended OS settings for SUSE Linux Enterprise
Server (SLES) 15/SLES for SAP Applications 15.
SAP Note 1944799 has SAP HANA guidelines for SLES OS installation.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring extension for SAP.
SAP Note 1900823 contains information about SAP HANA storage requirements.
SUSE SAP high availability (HA) Best Practice Guides contain all required
information to set up NetWeaver HA and SAP HANA system replication on-
premises (to be used as a general baseline). They provide much more detailed
information.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
General SLES documentation:
Setting up an SAP HANA cluster
SLES High Availability Extension 15 SP3 Release Notes
Operating System Security Hardening Guide for SAP HANA for SUSE Linux
Enterprise Server 15
SUSE Linux Enterprise Server for SAP Applications 15 SP3 Guide
SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Automation
SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Monitoring
Azure-specific SLES documentation:
Getting Started with SAP HANA High Availability Cluster Automation Operating
on Azure
SUSE and Microsoft Solution Templates for SAP Applications Simplified
Deployment on Microsoft
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Azure Virtual Machines planning and implementation for SAP on Linux

7 Note

This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.

Overview
Traditionally, in a scale-up environment, all file systems for SAP HANA are mounted from
local storage. Setting up HA of SAP HANA system replication on SUSE Enterprise Linux is
published in Set up SAP HANA system replication on SLES.

To achieve SAP HANA HA of a scale-up system on Azure NetApp Files NFS shares, we
need extra resource configuration in the cluster. This configuration is needed so that
HANA resources can recover when one node loses access to the NFS shares on Azure
NetApp Files.
SAP HANA file systems are mounted on NFS shares by using Azure NetApp Files on
each node. The file systems /hana/data, /hana/log, and /hana/shared are unique to each
node.

Mounted on node1 (hanadb1):

10.3.1.4:/hanadb1-data-mnt00001 on /hana/data
10.3.1.4:/hanadb1-log-mnt00001 on /hana/log
10.3.1.4:/hanadb1-shared-mnt00001 on /hana/shared

Mounted on node2 (hanadb2):

10.3.1.4:/hanadb2-data-mnt00001 on /hana/data
10.3.1.4:/hanadb2-log-mnt00001 on /hana/log
10.3.1.4:/hanadb2-shared-mnt00001 on /hana/shared

7 Note
The file systems /hana/shared, /hana/data, and /hana/log aren't shared between
the two nodes. Each cluster node has its own separate file systems.

SAP HA HANA system replication configuration uses a dedicated virtual hostname and
virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
presented configuration shows a load balancer with:

Front-end configuration IP address: 10.3.0.50 for hn1-db


Probe port: 62503

Set up the Azure NetApp Files infrastructure


Before you continue with the setup for Azure NetApp Files infrastructure, familiarize
yourself with the Azure NetApp Files documentation.

Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.

For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files availability by Azure region .

Important considerations
As you create your Azure NetApp Files for SAP HANA scale-up systems, be aware of the
important considerations documented in NFS v4.1 volumes on Azure NetApp Files for
SAP HANA.

Sizing of HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and
service level, as documented in Service level for Azure NetApp Files.
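For example, at the Ultra service level, which at the time of writing delivers 128 MiB/s of throughput per 1 TiB of assigned volume quota, a 4-TiB volume would provide roughly 4 x 128 = 512 MiB/s.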

While you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.

The configuration in this article is presented with simple Azure NetApp Files volumes.

) Important

For production systems, where performance is key, we recommend that you
evaluate and consider using Azure NetApp Files application volume group for
SAP HANA.

All commands to mount /hana/shared in this article are presented for NFSv4.1
/hana/shared volumes. If you deployed the /hana/shared volumes as NFSv3 volumes,
don't forget to adjust the mount commands for /hana/shared for NFSv3.

Deploy Azure NetApp Files resources


The following instructions assume that you already deployed your Azure virtual network.
The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources are
mounted, must be deployed in the same Azure virtual network or in peered Azure virtual
networks.

1. Create a NetApp account in your selected Azure region by following the
instructions in Create a NetApp account.

2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.

The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using the Azure NetApp Files Ultra or Premium service level.

3. Delegate a subnet to Azure NetApp Files, as described in the instructions in
Delegate a subnet to Azure NetApp Files.

4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.

As you deploy the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp Files volumes are assigned automatically.

The Azure NetApp Files resources and the Azure VMs must be in the same Azure
virtual network or in peered Azure virtual networks. For example, hanadb1-data-
mnt00001, hanadb1-log-mnt00001, and so on are the volume names, and
nfs://10.3.1.4/hanadb1-data-mnt00001, nfs://10.3.1.4/hanadb1-log-mnt00001, and
so on are the file paths for the Azure NetApp Files volumes.

On hanadb1:

Volume hanadb1-data-mnt00001 (nfs://10.3.1.4:/hanadb1-data-mnt00001)
Volume hanadb1-log-mnt00001 (nfs://10.3.1.4:/hanadb1-log-mnt00001)
Volume hanadb1-shared-mnt00001 (nfs://10.3.1.4:/hanadb1-shared-mnt00001)

On hanadb2:

Volume hanadb2-data-mnt00001 (nfs://10.3.1.4:/hanadb2-data-mnt00001)
Volume hanadb2-log-mnt00001 (nfs://10.3.1.4:/hanadb2-log-mnt00001)
Volume hanadb2-shared-mnt00001 (nfs://10.3.1.4:/hanadb2-shared-mnt00001)
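If you script the deployment, each volume can be created with the Azure CLI. The following is a minimal sketch; the resource group, account, pool, region, virtual network, and size values (MyResourceGroup, mynetappaccount, mypool, westeurope, 4 TiB) are placeholders, not recommendations:

Bash

az netappfiles volume create --resource-group MyResourceGroup \
  --account-name mynetappaccount --pool-name mypool \
  --name hanadb1-data-mnt00001 --location westeurope \
  --service-level Ultra --usage-threshold 4096 \
  --file-path "hanadb1-data-mnt00001" \
  --vnet MyVNet --subnet anf-subnet --protocol-types NFSv4.1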

Prepare the infrastructure


The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.

Deploy Linux VMs manually via the Azure portal


This document assumes that you already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy VMs for SAP HANA. Choose a suitable SLES image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.

) Important

Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.

Configure Azure Load Balancer


During VM configuration, you have an option to create or select the existing load
balancer in the networking section. Follow the next steps to set up a standard load
balancer for HA setup of the HANA database.

Azure portal
Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create a front-end IP. Select the same virtual
network and subnet name as your database virtual machines.
2. Backend Pool: Create a back-end pool and add database VMs.
3. Inbound rules: Create a load-balancing rule. Follow the same steps for both
load-balancing rules.

Frontend IP address: Select a front-end IP.


Backend pool: Select a back-end pool.
High-availability ports: Select this option.
Protocol: Select TCP.
Health Probe: Create a health probe with the following details:
Protocol: Select TCP.
Port: For example, 625<instance-no.>.
Interval: Enter 5.
Probe Threshold: Enter 2.
Idle timeout (minutes): Enter 30.
Enable Floating IP: Select this option.

7 Note

The health probe configuration property numberOfProbes , otherwise known as
Unhealthy threshold in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to 2 .
It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or the PowerShell command.

For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
more IP addresses for the VM, deploy a second NIC.
When VMs without public IP addresses are placed in the back-end pool of internal (no
public IP address) Standard Azure Load Balancer, there's no outbound internet
connectivity unless more configuration is performed to allow routing to public
endpoints. For more information on how to achieve outbound connectivity, see Public
endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-
availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer
health probes and SAP Note 2382421 .

To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update the saptune version to 3.1.1 or higher. For
more information, see saptune 3.1.1 – Do I Need to Update? .

Mount the Azure NetApp Files volume


1. [A] Create mount points for the HANA database volumes.

Bash

sudo mkdir -p /hana/data/HN1/mnt00001
sudo mkdir -p /hana/log/HN1/mnt00001
sudo mkdir -p /hana/shared/HN1

2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and the
mapping is set to nobody.

Bash

sudo cat /etc/idmapd.conf

Example output:

Bash

[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match
the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com. If there's a mismatch between the domain
configuration on the NFS client (that is, the VM) and the NFS server (that is,
the Azure NetApp Files configuration), the permissions for files on Azure
NetApp Files volumes that are mounted on the VMs display as nobody.
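A minimal sketch of that adjustment, assuming the Domain line in /etc/idmapd.conf is either commented out or set to another value:

Bash

# Set the NFSv4 domain to match Azure NetApp Files and verify the result
sudo sed -i 's/^[#[:space:]]*Domain = .*/Domain = defaultv4iddomain.com/' /etc/idmapd.conf
grep Domain /etc/idmapd.conf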

3. [A] Edit /etc/fstab on both nodes to permanently mount the volumes relevant to
each node. The following example shows how you mount the volumes
permanently.

Bash

sudo vi /etc/fstab

Add the following entries in /etc/fstab on both nodes.

Example for hanadb1:

example

10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0

Example for hanadb2:

example

10.3.1.4:/hanadb2-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb2-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb2-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0

Mount all volumes.

Bash

sudo mount -a

For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.
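For illustration, an fstab entry with nconnect might look like the following line; nconnect=4 is a hypothetical value that you should validate for your workload and Linux release:

example

10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys,nconnect=4 0 0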

4. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.

Bash

sudo nfsstat -m

Verify that flag vers is set to 4.1.

Example from hanadb1:

example

/hana/log/HN1/mnt00001 from 10.3.1.4:/hanadb1-log-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.3.0.4,local_lock=none,addr=10.3.1.4
/hana/data/HN1/mnt00001 from 10.3.1.4:/hanadb1-data-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.3.0.4,local_lock=none,addr=10.3.1.4
/hana/shared/HN1 from 10.3.1.4:/hanadb1-shared-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.3.0.4,local_lock=none,addr=10.3.1.4

5. [A] Verify nfs4_disable_idmapping. It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, run the mount command. You
won't be able to manually create the directory under /sys/module because access
is reserved for the kernel and drivers.

Bash

# Check nfs4_disable_idmapping
sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y
sudo echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent
sudo echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

SAP HANA installation


1. [A] Set up host name resolution for all hosts.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
host name in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines in the /etc/hosts file. Change the IP address and host
name to match your environment.

example

10.3.0.4 hanadb1
10.3.0.5 hanadb2

2. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP Note 3024346 - Linux Kernel Settings for NetApp NFS . Create
the configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp
configuration settings.

Bash

sudo vi /etc/sysctl.d/91-NetApp-HANA.conf
Add the following entries in the configuration file:

parameters

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

3. [A] Create the configuration file /etc/sysctl.d/ms-az.conf with more optimization


settings.

Bash

sudo vi /etc/sysctl.d/ms-az.conf

Add the following entries in the configuration file:

parameters

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

 Tip

Avoid setting net.ipv4.ip_local_port_range and
net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to
allow the SAP Host Agent to manage the port ranges. For more information,
see SAP Note 2382421 .
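
To load the settings from the files under /etc/sysctl.d without a reboot, you can
reload all sysctl configuration files (a minimal sketch; the sunrpc module option in
the next step still requires a module reload or reboot to take effect):

Bash

sudo sysctl --system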

4. [A] Adjust the sunrpc settings, as recommended in SAP Note 3024346 - Linux
Kernel Settings for NetApp NFS .

Bash
sudo vi /etc/modprobe.d/sunrpc.conf

Insert the following line:

parameter

options sunrpc tcp_max_slot_table_entries=128

5. [A] Configure SLES for HANA.

Configure SLES as described in the following SAP Notes based on your SLES
version:

2684254 Recommended OS settings for SLES 15 / SLES for SAP Applications 15
2205917 Recommended OS settings for SLES 12 / SLES for SAP Applications 12
2455582 Linux: Running SAP applications compiled with GCC 6.x
2593824 Linux: Running SAP applications compiled with GCC 7.x
2886607 Linux: Running SAP applications compiled with GCC 9.x
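
On SLES for SAP Applications, the saptune tool can apply and verify many of the
settings from these SAP Notes automatically. As a sketch (the solution name HANA is
an assumption; verify it against your SLES release):

Bash

sudo saptune solution apply HANA
sudo saptune solution verify HANA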

6. [A] Install SAP HANA.

Starting with HANA 2.0 SPS 01, Multitenant Database Containers (MDC) is the
default option. When you install the HANA system, SYSTEMDB and a tenant with
the same SID are created together. In some cases, you don't want the default
tenant. If you don't want to create the initial tenant along with the installation,
follow the instructions in SAP Note 2629711 .

a. Start the hdblcm program from the HANA installation software directory.

Bash

./hdblcm

b. At the prompt, enter the following values:

For Choose installation: Enter 1 (for install).


For Select additional components for installation: Enter 1.
For Enter Installation Path [/hana/shared]: Press Enter to accept the
default.
For Enter Local Host Name [..]: Press Enter to accept the default.
Under Do you want to add additional hosts to the system? (y/n) [n]:
Select n.
For Enter SAP HANA System ID: Enter HN1.
For Enter Instance Number [00]: Enter 03.
For Select Database Mode / Enter Index [1]: Press Enter to accept the
default.
For Select System Usage / Enter Index [4]: Enter 4 (for custom).
For Enter Location of Data Volumes [/hana/data]: Press Enter to accept
the default.
For Enter Location of Log Volumes [/hana/log]: Press Enter to accept the
default.
For Restrict maximum memory allocation? [n]: Press Enter to accept the
default.
For Enter Certificate Host Name For Host '...' [...]: Press Enter to accept
the default.
For Enter SAP Host Agent User (sapadm) Password: Enter the host agent
user password.
For Confirm SAP Host Agent User (sapadm) Password: Enter the host
agent user password again to confirm.
For Enter System Administrator (hn1adm) Password: Enter the system
administrator password.
For Confirm System Administrator (hn1adm) Password: Enter the system
administrator password again to confirm.
For Enter System Administrator Home Directory [/usr/sap/HN1/home]:
Press Enter to accept the default.
For Enter System Administrator Login Shell [/bin/sh]: Press Enter to
accept the default.
For Enter System Administrator User ID [1001]: Press Enter to accept the
default.
For Enter ID of User Group (sapsys) [79]: Press Enter to accept the default.
For Enter Database User (SYSTEM) Password: Enter the database user
password.
For Confirm Database User (SYSTEM) Password: Enter the database user
password again to confirm.
For Restart system after machine reboot? [n]: Press Enter to accept the
default.
For Do you want to continue? (y/n): Validate the summary. Enter y to
continue.

7. [A] Upgrade the SAP Host Agent.


Download the latest SAP Host Agent archive from the SAP Software Center and
run the following command to upgrade the agent. Replace the path to the archive
to point to the file that you downloaded.

Bash

sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR>

Configure SAP HANA system replication


Follow the steps in SAP HANA system replication to configure SAP HANA system
replication.
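
As a minimal sketch of what that configuration involves, assuming the site names
SITE1 and SITE2 that are used later in this article (the linked article remains the
authoritative procedure):

Bash

# On the primary node (hanadb1), as hn1adm: enable system replication
hdbnsutil -sr_enable --name=SITE1

# On the secondary node (hanadb2), as hn1adm, after stopping HANA: register the secondary
hdbnsutil -sr_register --remoteHost=hanadb1 --remoteInstance=03 --replicationMode=sync --name=SITE2 --operationMode=logreplay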

Cluster configuration
This section describes the necessary steps that are required for the cluster to operate
seamlessly when SAP HANA is installed on NFS shares by using Azure NetApp Files.

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Enterprise Linux in Azure to create a
basic Pacemaker cluster for this HANA server.

Implement HANA hooks SAPHanaSR and susChkSrv
This important step optimizes the integration with the cluster and improves the
detection when a cluster failover is needed. We highly recommend that you configure
both SAPHanaSR and susChkSrv Python hooks. Follow the steps in Implement the
Python system replication hooks SAPHanaSR and susChkSrv.
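
For orientation, both hooks are registered in the HANA global.ini. The following is a
sketch only; the path and execution_order values are assumptions based on the
SAPHanaSR package defaults, so take the exact values from the linked steps:

parameters

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR
execution_order = 3
action_on_lost = fence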

Configure SAP HANA cluster resources


This section describes the necessary steps that are required to configure the SAP HANA
cluster resources.

Create SAP HANA cluster resources


Follow the steps in Creating SAP HANA cluster resources to create the cluster resources
for the HANA server. After the resources are created, you should see the status of the
cluster with the following command:

Bash

sudo crm_mon -r

Example output:

Output

# Online: [ hn1-db-0 hn1-db-1 ]


# Full list of resources:
# stonith-sbd (stonith:external/sbd): Started hn1-db-0
# Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hn1-db-0 hn1-db-1 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hn1-db-0 ]
# Slaves: [ hn1-db-1 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0

Create file system resources


Create a dummy file system cluster resource. It monitors and reports failures if there's a
problem accessing the NFS-mounted file system /hana/shared. That allows the cluster to
trigger failover if there's a problem accessing /hana/shared. For more information, see
Handling failed NFS share in SUSE HA cluster for HANA system replication .

1. [A] Create the directory structure on both nodes.

Bash

sudo mkdir -p /hana/shared/HN1/check
sudo mkdir -p /hana/shared/check

2. [1] Configure the cluster to add the directory structure for monitoring.

Bash

sudo crm configure primitive rsc_fs_check_HN1_HDB03 Filesystem params \
  device="/hana/shared/HN1/check/" \
  directory="/hana/shared/check/" fstype=nfs \
  options="bind,defaults,rw,hard,rsize=262144,wsize=262144,proto=tcp,noatime,_netdev,nfsvers=4.1,lock,sec=sys" \
  op monitor interval=120 timeout=120 on-fail=fence \
  op_params OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 \
  op stop interval=0 timeout=120

3. [1] Clone and check the newly configured volume in the cluster.

Bash

sudo crm configure clone cln_fs_check_HN1_HDB03 rsc_fs_check_HN1_HDB03 meta clone-node-max=1 interleave=true

Example output:

Bash

sudo crm status

# Cluster Summary:
# Stack: corosync
# Current DC: hanadb1 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Tue Nov 2 17:57:39 2021
# Last change: Tue Nov 2 17:57:38 2021 by root via crm_attribute on
hanadb1
# 2 nodes configured
# 11 resource instances configured

# Node List:
# Online: [ hanadb1 hanadb2 ]

# Full List of Resources:


# Clone Set: cln_azure-events [rsc_azure-events]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]:
# rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology):
Started hanadb1 (Monitoring)
# rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology):
Started hanadb2 (Monitoring)
# Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
(promotable):
# rsc_SAPHana_HN1_HDB03 (ocf::suse:SAPHana): Master hanadb1
(Monitoring)
# Slaves: [ hanadb2 ]
# Resource Group: g_ip_HN1_HDB03:
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1
# rsc_st_azure (stonith:fence_azure_arm): Started hanadb2
# Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]

The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation so that
monitor operations perform a read/write test on the file system. Without this
attribute, the monitor operation only verifies that the file system is mounted. This
can be a problem because, when connectivity is lost, the file system might remain
mounted despite being inaccessible.

The on-fail=fence attribute is also added to the monitor operation. With this
option, if the monitor operation fails on a node, that node is immediately fenced.

) Important

Timeouts in the preceding configuration might need to be adapted to the specific
HANA setup to avoid unnecessary fence actions. Don't set the timeout values too
low. Be aware that the file system monitor isn't related to the HANA system
replication. For more information, see the SUSE documentation .

Test the cluster setup


This section describes how you can test your setup.

1. Before you start a test, make sure that Pacemaker doesn't have any failed action
(via crm status) and no unexpected location constraints (for example, leftovers of a
migration test). Also, ensure that HANA system replication is in sync state, for
example, with systemReplicationStatus .

Bash

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

2. Verify the status of the HANA resources by using this command:

Bash

SAPHanaSR-showAttr

# You should see something like below


# hanadb1:~ SAPHanaSR-showAttr
# Global cib-time maintenance
# --------------------------------------------
# global Mon Nov 8 22:50:30 2021 false
# Sites srHook
# -------------
# SITE1 PRIM
# SITE2 SOK
# Hosts clone_state lpa_hn1_lpt node_state op_mode remoteHost roles
score site srmode sync_state version vhost
# ---------------------------------------------------------------------
-----------------------------------------------------------------------
------------------
# hanadb1 PROMOTED 1636411810 online logreplay hanadb2
4:P:master1:master:worker:master 150 SITE1 sync PRIM
2.00.058.00.1634122452 hanadb1
# hanadb2 DEMOTED 30 online logreplay hanadb1
4:S:master1:master:worker:master 100 SITE2 sync SOK
2.00.058.00.1634122452 hanadb2

3. Verify the cluster configuration for a failure scenario when a node is shut down.
The following example shows shutting down node 1:

Bash

sudo crm status
sudo crm resource move msl_SAPHana_HN1_HDB03 hanadb2 force
sudo crm resource cleanup

Example output:

Bash

sudo crm status

#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Mon Nov 8 23:25:36 2021
# Last change: Mon Nov 8 23:25:19 2021 by root via crm_attribute on
hanadb2
# 2 nodes configured
# 11 resource instances configured

# Node List:
# Online: [ hanadb1 hanadb2 ]
# Full List of Resources:
# Clone Set: cln_azure-events [rsc_azure-events]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
(promotable):
# Masters: [ hanadb2 ]
# Stopped: [ hanadb1 ]
# Resource Group: g_ip_HN1_HDB03:
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb2
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2
# rsc_st_azure (stonith:fence_azure_arm): Started hanadb2
# Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]

Stop HANA on node 1:

Bash

sudo su - hn1adm
sapcontrol -nr 03 -function StopWait 600 10

Register node 1 as the secondary node and check the status:

Bash

hdbnsutil -sr_register --remoteHost=hanadb2 --remoteInstance=03 --replicationMode=sync --name=SITE1 --operationMode=logreplay

Example output:

example

#adding site ...
#nameserver hanadb1:30301 not responding.
#collecting information ...
#updating local ini files ...
#done.

Bash

sudo crm status

Bash

sudo SAPHanaSR-showAttr

4. Verify the cluster configuration for a failure scenario when a node loses access to
the NFS share (/hana/shared).
The SAP HANA resource agents depend on binaries stored on /hana/shared to
perform operations during failover. File system /hana/shared is mounted over NFS
in the presented scenario.

It's difficult to simulate a failure, where one of the servers loses access to the NFS
share. As a test, you can remount the file system as read-only. This approach
validates that the cluster can fail over if access to /hana/shared is lost on the active
node.

Expected result: When /hana/shared is made a read-only file system, the
OCF_CHECK_LEVEL attribute of the resource hana_shared1 , which performs
read/write operations on the file system, fails. It fails because it can't write anything
to the file system, and a HANA resource failover is triggered. The same result is
expected when your HANA node loses access to the NFS shares.

Resource state before starting the test:

Bash

sudo crm status

#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Mon Nov 8 23:01:27 2021
# Last change: Mon Nov 8 23:00:46 2021 by root via crm_attribute on
hanadb1
# 2 nodes configured
# 11 resource instances configured

#Node List:
# Online: [ hanadb1 hanadb2 ]

#Full List of Resources:


# Clone Set: cln_azure-events [rsc_azure-events]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
(promotable):
# Masters: [ hanadb1 ]
# Slaves: [ hanadb2 ]
# Resource Group: g_ip_HN1_HDB03:
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1
# rsc_st_azure (stonith:fence_azure_arm): Started hanadb2
# Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]

You can place /hana/shared in read-only mode on the active cluster node by using
this command:

Bash

sudo mount -o ro 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared

Because the monitor operation fails and on-fail=fence is set, the server hanadb1
either reboots or powers off, based on the configured fence action. After the
server hanadb1 is down, the HANA resource moves to hanadb2 . You can check the
status of the cluster from hanadb2 .

Bash

sudo crm status

#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Wed Nov 10 22:00:27 2021
# Last change: Wed Nov 10 21:59:47 2021 by root via crm_attribute on
hanadb2
# 2 nodes configured
# 11 resource instances configured

#Node List:
# Online: [ hanadb1 hanadb2 ]

#Full List of Resources:


# Clone Set: cln_azure-events [rsc_azure-events]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
(promotable):
# Masters: [ hanadb2 ]
# Stopped: [ hanadb1 ]
# Resource Group: g_ip_HN1_HDB03:
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started
hanadb2
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2
# rsc_st_azure (stonith:fence_azure_arm): Started hanadb2
# Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
We recommend testing the SAP HANA cluster configuration thoroughly by doing
the tests described in SAP HANA system replication.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Deploy a SAP HANA scale-out system
with standby node on Azure VMs by
using Azure NetApp Files on SUSE Linux
Enterprise Server
Article • 07/11/2023

This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with a standby node on Azure virtual machines (VMs) by using Azure
NetApp Files for the shared storage volumes.

In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1. The examples are based on HANA 2.0 SP4 and SUSE
Linux Enterprise Server for SAP 12 SP4.

Before you begin, refer to the following SAP notes and papers:

Azure NetApp Files documentation


SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 : Lists prerequisites for SAP-supported SAP software
deployments in Azure
SAP Note 2205917 : Contains recommended OS settings for SUSE Linux
Enterprise Server for SAP Applications
SAP Note 1944799 : Contains SAP Guidelines for SUSE Linux Enterprise Server for
SAP Applications
SAP Note 2178632 : Contains detailed information about all monitoring metrics
reported for SAP in Azure
SAP Note 2191498 : Contains the required SAP Host Agent version for Linux in
Azure
SAP Note 2243692 : Contains information about SAP licensing on Linux in Azure
SAP Note 1984787 : Contains general information about SUSE Linux Enterprise
Server 12
SAP Note 1999351 : Contains additional troubleshooting information for the
Azure Enhanced Monitoring Extension for SAP
SAP Note 1900823 : Contains information about SAP HANA storage
requirements
SAP Community Wiki : Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides : Contains all required information to set up
NetWeaver High Availability and SAP HANA System Replication on-premises (to be
used as a general baseline; they provide much more detailed information)
SUSE High Availability Extension 12 SP3 Release Notes
NFS v4.1 volumes on Azure NetApp Files for SAP HANA

Overview
One method for achieving HANA high availability is by configuring host auto failover. To
configure host auto failover, you add one or more virtual machines to the HANA system
and configure them as standby nodes. When active node fails, a standby node
automatically takes over. In the presented configuration with Azure virtual machines,
you achieve auto failover by using NFS on Azure NetApp Files.

7 Note

The standby node needs access to all database volumes. The HANA volumes must
be mounted as NFSv4 volumes. The improved file lease-based locking mechanism
in the NFSv4 protocol is used for I/O fencing.

) Important

To build the supported configuration, you must deploy the HANA data and log
volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The
HANA host auto-failover configuration with standby node is not supported with
NFSv3.

In the preceding diagram, which follows SAP HANA network recommendations, three
subnets are represented within one Azure virtual network:

For client communication


For communication with the storage system
For internal HANA inter-node communication

The Azure NetApp Files volumes are in a separate subnet that's delegated to Azure NetApp Files.

For this example configuration, the subnets are:

client 10.23.0.0/24
storage 10.23.2.0/24
hana 10.23.3.0/24
anf 10.23.1.0/26

Set up the Azure NetApp Files infrastructure
Before you proceed with the set up for Azure NetApp Files infrastructure, familiarize
yourself with the Azure NetApp Files documentation.

Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.

For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files Availability by Azure Region .

Important considerations
As you're creating your Azure NetApp Files infrastructure for SAP HANA on the SUSE
high-availability architecture, be aware of the important considerations documented in
NFS v4.1 volumes on Azure NetApp Files for SAP HANA.

Sizing for HANA database on Azure NetApp Files


The throughput of an Azure NetApp Files volume is a function of the volume size and
service level, as documented in Service level for Azure NetApp Files.
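
For example, at the published Ultra service level of 128 MiB/s of throughput per 1 TiB
of assigned quota, a 4-TiB volume would provide roughly 4 x 128 = 512 MiB/s. Verify the
current per-tier numbers in the linked service-level documentation before sizing.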

As you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.

The configuration in this article is presented with simple Azure NetApp Files Volumes.

) Important

For production systems, where performance is key, we recommend that you evaluate
and consider using Azure NetApp Files application volume group for SAP HANA.

Deploy Azure NetApp Files resources


The following instructions assume that you've already deployed your Azure virtual
network. The Azure NetApp Files resources and VMs, where the Azure NetApp Files
resources will be mounted, must be deployed in the same Azure virtual network or in
peered Azure virtual networks.
1. Create a NetApp account in your selected Azure region by following the
instructions in Create a NetApp account.

2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.

The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service level.

3. Delegate a subnet to Azure NetApp Files, as described in the instructions in
Delegate a subnet to Azure NetApp Files.

4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.

As you're deploying the volumes, be sure to select the NFSv4.1 version. Currently,
access to NFSv4.1 requires being added to an allowlist. Deploy the volumes in the
designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp
volumes are assigned automatically.

Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
HN1-data-mnt00001, HN1-log-mnt00001, and so on, are the volume names and
nfs://10.23.1.5/HN1-data-mnt00001, nfs://10.23.1.4/HN1-log-mnt00001, and so on,
are the file paths for the Azure NetApp Files volumes.

volume HN1-data-mnt00001 (nfs://10.23.1.5/HN1-data-mnt00001)
volume HN1-data-mnt00002 (nfs://10.23.1.6/HN1-data-mnt00002)
volume HN1-log-mnt00001 (nfs://10.23.1.4/HN1-log-mnt00001)
volume HN1-log-mnt00002 (nfs://10.23.1.6/HN1-log-mnt00002)
volume HN1-shared (nfs://10.23.1.4/HN1-shared)

In this example, we used a separate Azure NetApp Files volume for each HANA
data and log volume. For a more cost-optimized configuration on smaller or non-
productive systems, it's possible to place all data mounts and all logs mounts on a
single volume.

Deploy Linux virtual machines via the Azure portal
First you need to create the Azure NetApp Files volumes. Then do the following steps:
1. Create the Azure virtual network subnets in your Azure virtual network.

2. Deploy the VMs.

3. Create the additional network interfaces, and attach the network interfaces to the
corresponding VMs.

Each virtual machine has three network interfaces, which correspond to the three
Azure virtual network subnets ( client , storage and hana ).

For more information, see Create a Linux virtual machine in Azure with multiple
network interface cards.
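
As an alternative to the portal steps that follow, the additional interfaces can also
be created with the Azure CLI. A hedged sketch for the first VM (<your resource group>
and <your vnet> are placeholders, not values from this example):

Bash

az network nic create --resource-group <your resource group> --name hanadb1-storage --vnet-name <your vnet> --subnet storage --accelerated-networking true
az network nic create --resource-group <your resource group> --name hanadb1-hana --vnet-name <your vnet> --subnet hana --accelerated-networking true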

) Important

For SAP HANA workloads, low latency is critical. To achieve low latency, work with
your Microsoft representative to ensure that the virtual machines and the Azure
NetApp Files volumes are deployed in close proximity. When you're onboarding a
new SAP HANA system that uses Azure NetApp Files, submit the necessary
information.

The next instructions assume that you've already created the resource group, the Azure
virtual network, and the three Azure virtual network subnets: client , storage and hana .
When you deploy the VMs, select the client subnet, so that the client network interface
is the primary interface on the VMs. You will also need to configure an explicit route to
the Azure NetApp Files delegated subnet via the storage subnet gateway.

) Important

Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM
types you're using. For a list of SAP HANA certified VM types and OS releases for
those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS
releases for that type.

1. Create an availability set for SAP HANA. Make sure to set the max update domain.

2. Create three virtual machines (hanadb1, hanadb2, hanadb3) by doing the
following steps:

a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA.

b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated networking.

When you deploy the virtual machines, the network interface name is automatically
generated. In these instructions for simplicity we'll refer to the automatically
generated network interfaces, which are attached to the client Azure virtual
network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.

3. Create three network interfaces, one for each virtual machine, for the storage
virtual network subnet (in this example, hanadb1-storage, hanadb2-storage, and
hanadb3-storage).

4. Create three network interfaces, one for each virtual machine, for the hana virtual
network subnet (in this example, hanadb1-hana, hanadb2-hana, and hanadb3-
hana).

5. Attach the newly created virtual network interfaces to the corresponding virtual
machines by doing the following steps:
a. Go to the virtual machine in the Azure portal .
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces
for the storage and hana subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hanadb2 and hanadb3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.

6. Enable accelerated networking for the additional network interfaces for the
storage and hana subnets by doing the following steps:

a. Open Azure Cloud Shell in the Azure portal .

b. Execute the following commands to enable accelerated networking for the
additional network interfaces, which are attached to the storage and hana
subnets.

Bash

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true

7. Start the virtual machines by doing the following steps:


a. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select it.
b. In the Overview pane, select Start.

Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following:

[A]: Applicable to all nodes
[1]: Applicable only to node 1
[2]: Applicable only to node 2
[3]: Applicable only to node 3

Configure and prepare your OS by doing the following steps:

1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries were added to /etc/hosts for this example.

Bash

# Storage
10.23.2.4 hanadb1-storage
10.23.2.5 hanadb2-storage
10.23.2.6 hanadb3-storage
# Client
10.23.0.5 hanadb1
10.23.0.6 hanadb2
10.23.0.7 hanadb3
# Hana
10.23.3.4 hanadb1-hana
10.23.3.5 hanadb2-hana
10.23.3.6 hanadb3-hana

2. [A] Change DHCP and cloud config settings for the network interface for storage
to avoid unintended hostname changes.

The following instructions assume that the storage network interface is eth1 .

Bash

vi /etc/sysconfig/network/dhcp
# Change the following DHCP setting to "no"
DHCLIENT_SET_HOSTNAME="no"

vi /etc/sysconfig/network/ifcfg-eth1
# Edit ifcfg-eth1
#Change CLOUD_NETCONFIG_MANAGE='yes' to "no"
CLOUD_NETCONFIG_MANAGE='no'

3. [A] Add a network route, so that the communication to the Azure NetApp Files
goes via the storage network interface.

The following instructions assume that the storage network interface is eth1 .

Bash

vi /etc/sysconfig/network/ifroute-eth1

# Add the following routes
# RouterIPforStorageNetwork - - -
# ANFNetwork/cidr RouterIPforStorageNetwork - -
10.23.2.1 - - -
10.23.1.0/26 10.23.2.1 - -

Reboot the VM to activate the changes.

4. [A] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.
Bash

vi /etc/sysctl.d/91-NetApp-HANA.conf

# Add the following entries in the configuration file


net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

5. [A] Create the configuration file /etc/sysctl.d/ms-az.conf with Microsoft for Azure
configuration settings.

Bash

vi /etc/sysctl.d/ms-az.conf

# Add the following entries in the configuration file


net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

 Tip

Avoid setting net.ipv4.ip_local_port_range and
net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to
allow SAP Host Agent to manage the port ranges. For more details, see SAP
note 2382421 .

6. [A] Adjust the sunrpc settings for NFSv3 volumes, as recommended in SAP note
3024346 - Linux Kernel Settings for NetApp NFS .

Bash

vi /etc/modprobe.d/sunrpc.conf

# Insert the following line


options sunrpc tcp_max_slot_table_entries=128
Mount the Azure NetApp Files volumes
1. [A] Create mount points for the HANA database volumes.

Bash

mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1

2. [1] Create node-specific directories for /usr/sap on HN1-shared.

Bash

# Create a temporary directory to mount HN1-shared


mkdir /mnt/tmp

# if using NFSv3 for this volume, mount with the following command
mount 10.23.1.4:/HN1-shared /mnt/tmp

# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,nfsvers=4.1 10.23.1.4:/HN1-shared /mnt/tmp

cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3

# unmount the temporary mount point
cd
umount /mnt/tmp

3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com , and that the
mapping is set to nobody.

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the
default domain configuration on Azure NetApp Files: defaultv4iddomain.com . If
there's a mismatch between the domain configuration on the NFS client (that is,
the VM) and the NFS server (that is, the Azure NetApp Files configuration), then
the permissions for files on Azure NetApp volumes that are mounted on the VMs
will be displayed as nobody .
Bash

sudo cat /etc/idmapd.conf

# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

4. [A] Verify nfs4_disable_idmapping . It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, execute the mount command.
You won't be able to manually create the directory under /sys/modules, because
access is reserved for the kernel and drivers.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y


mkdir /mnt/tmp
mount 10.23.1.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent


echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

5. [A] Create the SAP HANA group and user manually. The IDs for group sapsys and
user hn1adm must be set to the same IDs, which are provided during the
onboarding. (In this example, the IDs are set to 1001.) If the IDs aren't set correctly,
you won't be able to access the volumes. The IDs for group sapsys and user
accounts hn1adm and sapadm must be the same on all virtual machines.

Bash

# Create user group


sudo groupadd -g 1001 sapsys

# Create users
sudo useradd hn1adm -u 1001 -g 1001 -d /usr/sap/HN1/home -c "SAP HANA
Database System" -s /bin/sh
sudo useradd sapadm -u 1002 -g 1001 -d /home/sapadm -c "SAP Local
Administrator" -s /bin/sh
# Set the password for both user ids
sudo passwd hn1adm
sudo passwd sapadm

6. [A] Mount the shared Azure NetApp Files volumes.

Bash

sudo vi /etc/fstab

# Add the following entries


10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.23.1.4:/HN1-shared/shared /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0

# Mount all volumes


sudo mount -a

For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.

7. [1] Mount the node-specific volumes on hanadb1.

Bash

sudo vi /etc/fstab

# Add the following entries


10.23.1.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0

# Mount the volume


sudo mount -a
8. [2] Mount the node-specific volumes on hanadb2.

Bash

sudo vi /etc/fstab

# Add the following entries


10.23.1.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0

# Mount the volume


sudo mount -a

9. [3] Mount the node-specific volumes on hanadb3.

Bash

sudo vi /etc/fstab

# Add the following entries


10.23.1.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0

# Mount the volume


sudo mount -a

10. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.

Bash

sudo nfsstat -m

# Verify that flag vers is set to 4.1


# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.23.1.5:/HN1-data-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.5
/hana/log/HN1/mnt00002 from 10.23.1.6:/HN1-log-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.6
/hana/data/HN1/mnt00002 from 10.23.1.6:/HN1-data-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.6
/hana/log/HN1/mnt00001 from 10.23.1.4:/HN1-log-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.4
/usr/sap/HN1 from 10.23.1.4:/HN1-shared/usr-sap-hanadb1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.4
/hana/shared from 10.23.1.4:/HN1-shared/shared
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=
10.23.1.4

Installation
In this example deployment of SAP HANA in a scale-out configuration with a standby
node on Azure, we used HANA 2.0 SP4.

Prepare for HANA installation


1. [A] Before the HANA installation, set the root password. You can disable the root
password after the installation has been completed. As root, run the passwd
command.

2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3, without being
prompted for a password.

Bash

ssh root@hanadb2
ssh root@hanadb3
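
If passwordless root SSH isn't yet in place, you can set it up with standard OpenSSH
tooling. A minimal sketch, assuming root password login is temporarily enabled as in
the previous step:

Bash

# On hanadb1, as root: generate a key pair (accept the defaults)
ssh-keygen -t rsa -b 4096

# Distribute the public key to the other nodes
ssh-copy-id root@hanadb2
ssh-copy-id root@hanadb3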

3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more
information, see SAP Note 2593824 .

Bash

sudo zypper install libgcc_s1 libstdc++6 libatomic1

4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.

Bash
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1

HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation
and Update guide . In this example, we install SAP HANA scale-out with master,
one worker, and one standby node.

a. Start the hdblcm program from the HANA installation software directory. Use
the internal_network parameter and pass the address space for subnet, which
is used for the internal HANA inter-node communication.

Bash

./hdblcm --internal_network=10.23.3.0/24

b. At the prompt, enter the following values:

For Choose an action: enter 1 (for install)


For Additional components for installation: enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name: press Enter to accept the default
Under Do you want to add hosts to the system?: enter y
For comma-separated host names to add: enter hanadb2, hanadb3
For Root User Name [root]: press Enter to accept the default
For Root User Password: enter the root user's password
For roles for host hanadb2: enter 1 (for worker)
For Host Failover Group for host hanadb2 [default]: press Enter to accept
the default
For Storage Partition Number for host hanadb2 [<<assign
automatically>>]: press Enter to accept the default
For Worker Group for host hanadb2 [default]: press Enter to accept the
default
For Select roles for host hanadb3: enter 2 (for standby)
For Host Failover Group for host hanadb3 [default]: press Enter to accept
the default
For Worker Group for host hanadb3 [default]: press Enter to accept the
default
For SAP HANA System ID: enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4]: enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the
default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the
default
For Restrict maximum memory allocation? [n]: enter n
For Certificate Host Name For Host hanadb1 [hanadb1]: press Enter to
accept the default
For Certificate Host Name For Host hanadb2 [hanadb2]: press Enter to
accept the default
For Certificate Host Name For Host hanadb3 [hanadb3]: press Enter to
accept the default
For System Administrator (hn1adm) Password: enter the password
For System Database User (system) Password: enter the system's
password
For Confirm System Database User (system) Password: enter system's
password
For Restart system after machine reboot? [n]: enter n
For Do you want to continue (y/n): validate the summary and if
everything looks good, enter y

2. [1] Verify global.ini.

Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the hana subnet, and listeninterface should be set to
.internal . Verify the internal_hostname_resolution section. It should have the IP

addresses for the HANA virtual machines that belong to the hana subnet.

Bash

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini

# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve
[communication]
internal_network = 10.23.3/24
listeninterface = .internal
[internal_hostname_resolution]
10.23.3.4 = hanadb1
10.23.3.5 = hanadb2
10.23.3.6 = hanadb3
3. [1] Add host mapping to ensure that the client IP addresses are used for client
communication. Add section public_host_resolution , and add the corresponding
IP addresses from the client subnet.

Bash

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini

#Add the section


[public_hostname_resolution]
map_hanadb1 = 10.23.0.5
map_hanadb2 = 10.23.0.6
map_hanadb3 = 10.23.0.7

4. [1] Restart SAP HANA to activate the changes.

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function


StopSystem HDB
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function
StartSystem HDB

5. [1] Verify that the client interface will be using the IP addresses from the client
subnet for communication.

Bash

sudo -u hn1adm /usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i


03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep
net_publicname

# Expected result
"hanadb3","net_publicname","10.23.0.7"
"hanadb2","net_publicname","10.23.0.6"
"hanadb1","net_publicname","10.23.0.5"

For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .

6. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the
following SAP HANA parameters:

max_parallel_io_requests 128

async_read_submit on
async_write_submit_active on
async_write_submit_blocks all

For more information, see I/O stack configuration for SAP HANA .

Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini .
For more information, see SAP Note 1999930 .

For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set
during the installation, as described in SAP Note 2267798 .
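
As a sketch of what these settings look like in global.ini on an SAP HANA 2.0 system
(the [fileio] section name is an assumption based on the SAP notes referenced above;
verify it there before applying):

parameters

[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all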

7. The storage that's used by Azure NetApp Files has a file size limitation of 16
terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it
won't automatically create a new data file when the file size limit of 16 TB is
reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will
result in errors and, eventually, in an index server crash.

) Important

To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of
the storage subsystem, set the following parameters in global.ini .

datavolume_striping = true
datavolume_striping_size_gb = 15000 For more information, see SAP
Note 2400005 . Be aware of SAP Note 2631285 .
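
A sketch of the corresponding global.ini entries (the [persistence] section name is
an assumption based on SAP Note 2400005; verify it there before applying):

parameters

[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000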

Test SAP HANA failover

7 Note

This article contains references to the terms master and slave, terms that Microsoft
no longer uses. When these terms are removed from the software, we’ll remove
them from this article.

1. Simulate a node crash on an SAP HANA worker node. Do the following:

a. Before you simulate the node crash, run the following commands as hn1adm to
capture the status of the environment:

Bash

# Check the landscape status


python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker |
slave | worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN

b. To simulate a node crash, run the following command as root on the worker
node, which is hanadb2 in this case:

Bash

echo b > /proc/sysrq-trigger

c. Monitor the system for failover completion. When the failover has been
completed, capture the status, which should look like the following:

Bash

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY

# Check the landscape status


/usr/sap/HN1/HDB03/exe/python_support> python
landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | no | info | | | 2 |
0 | default | default | master 2 | slave | worker |
standby | worker | standby | default | - |
| hanadb3 | yes | info | | | 0 |
2 | default | default | master 3 | slave | standby |
slave | standby | worker | default | default |

) Important

When a node experiences kernel panic, avoid delays with SAP HANA
failover by setting kernel.panic to 20 seconds on all HANA virtual
machines. The configuration is done in /etc/sysctl.conf . Reboot the virtual
machines to activate the change. If this change isn't performed, failover can
take 10 or more minutes when a node is experiencing kernel panic.
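
A minimal sketch of that setting, assuming /etc/sysctl.conf as the target file:

Bash

sudo vi /etc/sysctl.conf

# Add the following line, then reboot the VM to activate the change
kernel.panic = 20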

2. Kill the name server by doing the following:

a. Prior to the test, check the status of the environment by running the following
commands as hn1adm:

Bash

#Landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker |
slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY

b. Run the following commands as hn1adm on the active master node, which is
hanadb1 in this case:

Bash

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill

The standby node hanadb3 will take over as master node. Here is the resource
state after the failover test is completed:

Bash

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | no | info | | | 1 |
0 | default | default | master 1 | slave | worker |
standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker |
slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 |
1 | default | default | master 3 | master | standby |
master | standby | worker | default | default |

c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine
where the name server was killed). The hanadb1 node will rejoin the
environment and will keep its standby role.

Bash

hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb1, expect the following status:

Bash

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
# Check the landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | info | | | 1 |
0 | default | default | master 1 | slave | worker |
standby | worker | standby | default | - |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker |
slave | worker | worker | default | default |
| hanadb3 | yes | info | | | 0 |
1 | default | default | master 3 | master | standby |
master | standby | worker | default | default |

d. Again, kill the name server on the currently active master node (that is, on node
hanadb3).

Bash

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill

Node hanadb1 will resume the role of master node. After the failover test has
been completed, the status will look like this:

Bash

# Check the instance status


sapcontrol -nr 03 -function GetSystemInstanceList & python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY

# Check the landscape status


python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage |
Storage | Failover | Failover | NameServer | NameServer |
IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config |
Actual | Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | ------
--- | -------- | -------- | ---------- | ---------- | ----------- |
----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker |
slave | worker | worker | default | default |
| hanadb3 | no | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |

e. Start SAP HANA on hanadb3, which will be ready to serve as a standby node.

Bash

hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start

After SAP HANA has started on hanadb3, the status looks like the following:

Bash

# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY

# Check the landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes    | ok     |          |        | 1         | 1         | default  | default  | master 1   | master     | worker      | master      | worker  | worker  | default | default |
| hanadb2 | yes    | ok     |          |        | 2         | 2         | default  | default  | master 2   | slave      | worker      | slave       | worker  | worker  | default | default |
| hanadb3 | no     | ignore |          |        | 0         | 0         | default  | default  | master 3   | slave      | standby     | standby     | standby | standby | default | -       |

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server
Article • 01/17/2024

This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with HANA system replication (HSR) and Pacemaker on Azure virtual
machines (VMs) running SUSE Linux Enterprise Server. The shared file systems in the
presented architecture are NFS mounted and are provided by Azure NetApp Files or an
NFS share on Azure Files.

In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1.

Before you begin, refer to the following SAP notes and papers:

Azure NetApp Files documentation


Azure Files documentation
SAP Note 1928533 includes:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 : Lists prerequisites for SAP-supported SAP software
deployments in Azure
SAP Note 2205917 : Contains recommended OS settings for SUSE Linux
Enterprise Server for SAP Applications
SAP Note 1944799 : Contains SAP Guidelines for SUSE Linux Enterprise Server for
SAP Applications
SAP Note 2178632 : Contains detailed information about all monitoring metrics
reported for SAP in Azure
SAP Note 2191498 : Contains the required SAP Host Agent version for Linux in
Azure
SAP Note 2243692 : Contains information about SAP licensing on Linux in Azure
SAP Note 1984787 : Contains general information about SUSE Linux Enterprise
Server 12
SAP Note 1999351 : Contains additional troubleshooting information for the
Azure Enhanced Monitoring Extension for SAP
SAP Note 1900823 : Contains information about SAP HANA storage
requirements
SAP Community Wiki : Contains all required SAP notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides : Contains all required information to set up
NetWeaver High Availability and SAP HANA System Replication on-premises (to be
used as a general baseline; they provide much more detailed information)
SUSE High Availability Extension 12 SP5 Release Notes
Handling failed NFS share in SUSE HA cluster for HANA system replication
NFS v4.1 volumes on Azure NetApp Files for SAP HANA

Overview
One method to achieve HANA high availability for HANA scale-out installations is to
configure HANA system replication and protect the solution with a Pacemaker cluster that
allows automatic failover. When an active node fails, the cluster fails over the HANA
resources to the other site.
The presented configuration shows three HANA nodes on each site, plus a majority maker
node to prevent a split-brain scenario. The instructions can be adapted to include more
VMs as HANA DB nodes.

The HANA shared file system /hana/shared in the presented architecture can be
provided by Azure NetApp Files or NFS share on Azure Files. The HANA shared file
system is NFS mounted on each HANA node in the same HANA system replication site.
File systems /hana/data and /hana/log are local file systems and aren't shared between
the HANA DB nodes. SAP HANA will be installed in non-shared mode.

For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage
configurations.

) Important

If you deploy all HANA file systems on Azure NetApp Files, for production systems
where performance is key, we recommend that you evaluate and consider using Azure
NetApp Files application volume group for SAP HANA.

2 Warning
Deploying /hana/data and /hana/log on NFS on Azure Files is not supported.

In the preceding diagram, three subnets are represented within one Azure virtual
network, following the SAP HANA network recommendations:

for client communication - client 10.23.0.0/24
for internal HANA inter-node communication - inter 10.23.1.128/26
for HANA system replication - hsr 10.23.1.192/26

Because /hana/data and /hana/log are deployed on local disks, it isn't necessary to
deploy a separate subnet and separate virtual network cards for communication to the storage.

If you're using Azure NetApp Files, the NFS volumes for /hana/shared , are deployed in a
separate subnet, delegated to Azure NetApp Files: anf 10.23.1.0/26.

Prepare the infrastructure


In the instructions that follow, we assume that you've already created the resource
group, the Azure virtual network with three Azure network subnets: client , inter and
hsr .
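
If the virtual network doesn't exist yet, it can be created up front. The following is a minimal Azure CLI sketch, not part of the original deployment steps: the resource group and virtual network names are placeholders, and the address prefixes are the ones used throughout this example.

Bash

# Sketch: create the virtual network and the three subnets used in this example
az network vnet create --resource-group <resource-group> --name <vnet-name> --address-prefixes 10.23.0.0/16
az network vnet subnet create --resource-group <resource-group> --vnet-name <vnet-name> --name client --address-prefixes 10.23.0.0/24
az network vnet subnet create --resource-group <resource-group> --vnet-name <vnet-name> --name inter --address-prefixes 10.23.1.128/26
az network vnet subnet create --resource-group <resource-group> --vnet-name <vnet-name> --name hsr --address-prefixes 10.23.1.192/26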

Deploy Linux virtual machines via the Azure portal


1. Deploy the Azure VMs.

For the configuration presented in this document, deploy seven virtual machines:

three virtual machines to serve as HANA DB nodes for HANA replication site
1: hana-s1-db1, hana-s1-db2 and hana-s1-db3
three virtual machines to serve as HANA DB nodes for HANA replication site
2: hana-s2-db1, hana-s2-db2 and hana-s2-db3
a small virtual machine to serve as majority maker: hana-s-mm

The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA,
as published in the SAP HANA Hardware directory . When deploying the HANA
DB nodes, make sure that accelerated networking is selected.

For the majority maker node, you can deploy a small VM, because this VM doesn't run
any of the SAP HANA resources. The majority maker VM is used in the cluster
configuration to achieve an odd number of cluster nodes and so avoid a split-brain
scenario. In this example, the majority maker VM only needs one virtual network
interface in the client subnet.

Deploy local managed disks for /hana/data and /hana/log . The minimum
recommended storage configuration for /hana/data and /hana/log is described in
SAP HANA Azure VMs storage configurations.

Deploy the primary network interface for each VM in the client virtual network
subnet.
When the VM is deployed via the Azure portal, the network interface name is
automatically generated. In these instructions, for simplicity, we'll refer to the
automatically generated primary network interfaces, which are attached to the
client Azure virtual network subnet, as hana-s1-db1-client, hana-s1-db2-client,
hana-s1-db3-client, and so on.

) Important

Make sure that the OS you select is SAP-certified for SAP HANA on the
specific VM types you're using. For a list of SAP HANA certified VM types
and OS releases for those types, go to the SAP HANA certified IaaS
platforms site. Click into the details of the listed VM type to get the
complete list of SAP HANA-supported OS releases for that type.
If you choose to deploy /hana/shared on NFS on Azure Files, we
recommend deploying on SLES 15 SP2 or later.

2. Create six network interfaces, one for each HANA DB virtual machine, in the inter
virtual network subnet (in this example, hana-s1-db1-inter, hana-s1-db2-inter,
hana-s1-db3-inter, hana-s2-db1-inter, hana-s2-db2-inter, and hana-s2-db3-
inter).

3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr
virtual network subnet (in this example, hana-s1-db1-hsr, hana-s1-db2-hsr, hana-
s1-db3-hsr, hana-s2-db1-hsr, hana-s2-db2-hsr, and hana-s2-db3-hsr).

4. Attach the newly created virtual network interfaces to the corresponding virtual
machines:
a. Go to the virtual machine in the Azure portal .
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hana-s1-db1), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces
for the inter and hsr subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hana-s1-db2, hana-s1-db3, hana-s2-db1, hana-s2-db2 and hana-s2-db3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.

5. Enable accelerated networking for the additional network interfaces for the inter
and hsr subnets by doing the following steps:

a. Open Azure Cloud Shell in the Azure portal .

b. Execute the following commands to enable accelerated networking for the
additional network interfaces, which are attached to the inter and hsr subnets.

Bash

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true

az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true
az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true

6. Start the HANA DB virtual machines

Configure Azure load balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to set up a standard load balancer for the
high-availability setup of the HANA database.

7 Note

For HANA scale-out, select the NIC for the client subnet when adding the
virtual machines to the backend pool.
The full set of commands in Azure CLI and PowerShell adds the VMs with the
primary NIC to the backend pool.

Azure Portal
Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points:

1. Frontend IP Configuration: Create a frontend IP. Select the same virtual
network and subnet as your DB virtual machines.
2. Backend Pool: Create a backend pool and add the DB VMs.
3. Inbound rules: Create a load balancing rule. Follow the same steps for both
load balancing rules.

Frontend IP address: Select the frontend IP
Backend pool: Select the backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the following details:
Protocol: TCP
Port: [for example: 625<instance-no.>]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It's currently not possible to set this property using the Azure portal, so use
either the Azure CLI or the PowerShell command.
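
For illustration, the health probe and an HA-ports load balancing rule with the settings above could be created with the Azure CLI along the following lines. This is a sketch only: the resource group, load balancer, frontend IP, and backend pool names are placeholders, and probe port 62503 assumes instance number 03.

Bash

# Sketch: health probe with probeThreshold 2, plus an HA-ports rule with floating IP
az network lb probe create --resource-group <resource-group> --lb-name <lb-name> --name hana-hp --protocol tcp --port 62503 --interval 5 --probe-threshold 2
az network lb rule create --resource-group <resource-group> --lb-name <lb-name> --name hana-lbrule --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name <frontend-ip-name> --backend-pool-name <backend-pool-name> --probe-name hana-hp --floating-ip true --idle-timeout 30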

) Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.

7 Note
When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) Standard Azure load balancer, there's no outbound
internet connectivity unless additional configuration is performed to allow routing
to public endpoints. For details on how to achieve outbound connectivity, see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Do not enable TCP timestamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set
parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer
health probes and SAP note 2382421 .
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune version to 3.1.1 or higher. For more
details, see saptune 3.1.1 – Do I Need to Update? .
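
As an illustration, one way to apply and persist this setting follows. This is a sketch only, and the drop-in file name is arbitrary.

Bash

# Disable TCP timestamps now and persist the setting across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-tcp-timestamps.conf
sudo sysctl --system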

Deploy NFS
There are two options for deploying Azure native NFS for /hana/shared . You can deploy
an NFS volume on Azure NetApp Files or an NFS share on Azure Files. Azure Files supports
the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and NFSv3.

The next sections describe the steps to deploy NFS - you need to select only one of
the options.

 Tip

You can choose to deploy /hana/shared on an NFS share on Azure Files or on an NFS
volume on Azure NetApp Files.

Deploy the Azure NetApp Files infrastructure

Deploy ANF volumes for the /hana/shared file system. You'll need a separate
/hana/shared volume for each HANA system replication site. For more information, see

Set up the Azure NetApp Files infrastructure.
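
For illustration, such a volume could also be created with the Azure CLI along these lines. This is a sketch only: it assumes an existing NetApp account and capacity pool, and the account, pool, size, and network names are placeholders.

Bash

# Sketch: create the /hana/shared volume for SITE 1 in the delegated anf subnet
az netappfiles volume create --resource-group <resource-group> --account-name <anf-account> --pool-name <capacity-pool> --name HN1-shared-s1 --service-level Premium --usage-threshold 512 --file-path HN1-shared-s1 --vnet <vnet-name> --subnet anf --protocol-types NFSv4.1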

In this example, the following Azure NetApp Files volumes were used:
volume HN1-shared-s1 (nfs://10.23.1.7/HN1-shared-s1)
volume HN1-shared-s2 (nfs://10.23.1.7/HN1-shared-s2)

Deploy the NFS on Azure Files infrastructure

Deploy Azure Files NFS shares for the /hana/shared file system. You'll need a separate
/hana/shared Azure Files NFS share for each HANA system replication site. For more

information, see How to create an NFS share.
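
For illustration, the NFS shares could be created with the Azure CLI along these lines. This is a sketch only: NFS shares require a premium FileStorage account with secure transfer disabled, and the resource group, account name, and quota are placeholders.

Bash

# Sketch: premium FileStorage account plus one NFS share per replication site
az storage account create --resource-group <resource-group> --name <storage-account> --sku Premium_LRS --kind FileStorage --https-only false
az storage share-rm create --resource-group <resource-group> --storage-account <storage-account> --name hn1-shared-s1 --quota 1024 --enabled-protocols NFS
az storage share-rm create --resource-group <resource-group> --storage-account <storage-account> --name hn1-shared-s2 --quota 1024 --enabled-protocols NFS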

In this example, the following Azure Files NFS shares were used:

share hn1-shared-s1 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1)


share hn1-shared-s2 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2)

Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following
abbreviations:

[A]: Applicable to all nodes, including majority maker


[AH]: Applicable to all HANA DB nodes
[M]: Applicable to the majority maker node only
[AH1]: Applicable to all HANA DB nodes on SITE 1
[AH2]: Applicable to all HANA DB nodes on SITE 2
[1]: Applicable only to HANA DB node 1, SITE 1
[2]: Applicable only to HANA DB node 1, SITE 2

Configure and prepare your OS by doing the following steps:

1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries were added to /etc/hosts for this example.

Bash

# Client subnet
10.23.0.19 hana-s1-db1
10.23.0.20 hana-s1-db2
10.23.0.21 hana-s1-db3
10.23.0.22 hana-s2-db1
10.23.0.23 hana-s2-db2
10.23.0.24 hana-s2-db3
10.23.0.25 hana-s-mm
# Internode subnet
10.23.1.132 hana-s1-db1-inter
10.23.1.133 hana-s1-db2-inter
10.23.1.134 hana-s1-db3-inter
10.23.1.135 hana-s2-db1-inter
10.23.1.136 hana-s2-db2-inter
10.23.1.137 hana-s2-db3-inter

# HSR subnet
10.23.1.196 hana-s1-db1-hsr
10.23.1.197 hana-s1-db2-hsr
10.23.1.198 hana-s1-db3-hsr
10.23.1.199 hana-s2-db1-hsr
10.23.1.200 hana-s2-db2-hsr
10.23.1.201 hana-s2-db3-hsr

2. [A] Create configuration file /etc/sysctl.d/ms-az.conf with Microsoft for Azure


configuration settings.

Bash

vi /etc/sysctl.d/ms-az.conf

# Add the following entries in the configuration file


net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10

 Tip

Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports


explicitly in the sysctl configuration files to allow SAP Host Agent to manage
the port ranges. For more information, see SAP note 2382421 .

3. [A] SUSE delivers special resource agents for SAP HANA; by default, agents for
SAP HANA scale-up are installed. Uninstall the scale-up packages, if they're installed,
and install the packages for the SAP HANA scale-out scenario. This step needs to be
performed on all cluster VMs, including the majority maker.

7 Note

SAPHanaSR-ScaleOut version 0.181 or higher must be installed (a version check sketch follows after this list).


Bash

# Uninstall scale-up packages and patterns


sudo zypper remove patterns-sap-hana
sudo zypper remove SAPHanaSR SAPHanaSR-doc yast2-sap-ha

# Install the scale-out packages and patterns


sudo zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc
sudo zypper in -t pattern ha_sles

4. [AH] Prepare the VMs - apply the recommended settings per SAP note 2205917
for SUSE Linux Enterprise Server for SAP Applications.
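
To confirm the package version requirement from step 3, a quick check can be run on each node (a sketch; rpm is the standard package query tool on SLES):

Bash

# Verify that SAPHanaSR-ScaleOut 0.181 or higher is installed
rpm -q SAPHanaSR-ScaleOut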

Prepare the file systems


You can choose to deploy the SAP shared directories on an NFS share on Azure Files or
on an NFS volume on Azure NetApp Files.

Mount the shared file systems (Azure NetApp Files NFS)


In this example, the shared HANA file systems are deployed on Azure NetApp Files and
mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on
Azure NetApp Files.

1. [AH] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.

Bash

vi /etc/sysctl.d/91-NetApp-HANA.conf

# Add the following entries in the configuration file


net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
2. [AH] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux
Kernel Settings for NetApp NFS .

Bash

vi /etc/modprobe.d/sunrpc.conf

# Insert the following line


options sunrpc tcp_max_slot_table_entries=128

3. [AH] Create mount points for the HANA database volumes.

Bash

mkdir -p /hana/shared

4. [AH] Verify the NFS domain setting. Make sure that the domain is configured as
the default Azure NetApp Files domain, that is, defaultv4iddomain.com , and that the
mapping is set to nobody.
This step is only needed if you're using Azure NetApp Files NFSv4.1.

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match
the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com . If there's a mismatch between the domain
configuration on the NFS client (that is, the VM) and the NFS server (that is,
the Azure NetApp Files configuration), then the permissions for files on Azure
NetApp Files volumes that are mounted on the VMs will be displayed as nobody .

Bash

sudo cat /etc/idmapd.conf


# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

5. [AH] Verify nfs4_disable_idmapping . It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, execute the mount command.
You won't be able to manually create the directory under /sys/module, because
access is reserved for the kernel and drivers.
This step is only needed if you're using Azure NetApp Files NFSv4.1.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.23.1.7:/HN1-shared-s1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

6. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entry
10.23.1.7:/HN1-shared-s1 /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount all volumes
sudo mount -a

7. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entry
10.23.1.7:/HN1-shared-s2 /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount the volume
sudo mount -a

8. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.

Bash

sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr
=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr
=10.23.1.7

Mount the shared file systems (Azure Files NFS)


In this example, the shared HANA file systems are deployed on NFS on Azure Files.
Follow the steps in this section, only if you're using NFS on Azure Files.

1. [AH] Create mount points for the HANA database volumes.

Bash

mkdir -p /hana/shared

2. [AH1] Mount the Azure Files NFS shares on the SITE1 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entry
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount all volumes
sudo mount -a

3. [AH2] Mount the Azure Files NFS shares on the SITE2 HANA DB VMs.

Bash

sudo vi /etc/fstab
# Add the following entries
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount the volume
sudo mount -a

4. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.
Bash

sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,a
ddr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,a
ddr=10.23.0.35

Prepare the data and log local file systems


In the presented configuration, the file systems /hana/data and /hana/log are deployed
on managed disks that are locally attached to each HANA DB VM. You'll need to execute
the steps to create the local data and log volumes on each HANA DB virtual machine.

Set up the disk layout with Logical Volume Manager (LVM). The following example
assumes that each HANA virtual machine has three data disks attached, that are used to
create two volumes.

1. [AH] List all of the available disks:

Bash

ls /dev/disk/azure/scsi1/lun*

Example output:

Bash

/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2

2. [AH] Create physical volumes for all of the disks that you want to use:

Bash

sudo pvcreate /dev/disk/azure/scsi1/lun0


sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
3. [AH] Create the volume groups. Use one volume group for the data files and one
volume group for the log files:

Bash

sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2

4. [AH] Create the logical volumes.

A linear volume is created when you use lvcreate without the -i switch. We
suggest that you create a striped volume for better I/O performance, and align the
stripe sizes to the values documented in SAP HANA VM storage configurations.
The -i argument should be the number of the underlying physical volumes and
the -I argument is the stripe size. In this document, two physical volumes are
used for the data volume, so the -i switch argument is set to 2. The stripe size for
the data volume is 256 KiB. One physical volume is used for the log volume, so no
-i or -I switches are explicitly used for the log volume commands.

) Important

Use the -i switch and set it to the number of the underlying physical volume
when you use more than one physical volume for each data or log volumes.
Use the -I switch to specify the stripe size, when creating a striped volume.
See SAP HANA VM storage configurations for recommended storage
configurations, including stripe sizes and number of disks.

Bash

sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1


sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log

5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:

Bash

sudo mkdir -p /hana/data/HN1


sudo mkdir -p /hana/log/HN1
# Write down the ID of /dev/vg_hana_data_HN1/hana_data and
/dev/vg_hana_log_HN1/hana_log
sudo blkid

6. [AH] Create fstab entries for the logical volumes and mount:

Bash

sudo vi /etc/fstab

Insert the following lines in the /etc/fstab file:

Bash

/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_data_HN1-hana_data
/hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_log_HN1-hana_log
/hana/log/HN1 xfs defaults,nofail 0 2

Mount the new volumes:

Bash

sudo mount -a

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for this HANA server. Include all virtual machines,
including the majority maker in the cluster.

) Important

Don't set quorum expected-votes to 2, as this is not a two node cluster.


Make sure that cluster property concurrent-fencing is enabled, so that node
fencing is deserialized.
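
For reference, these properties could be set and reviewed with crmsh along the following lines. This is a sketch only, run on one cluster node.

Bash

# Enable deserialized node fencing and review the cluster bootstrap options
sudo crm configure property concurrent-fencing=true
sudo crm configure show cib-bootstrap-options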

Installation
In this example for deploying SAP HANA in scale-out configuration with HSR on Azure
VMs, we've used HANA 2.0 SP5.
Prepare for HANA installation
1. [AH] Before the HANA installation, set the root password. You can disable the root
password after the installation has been completed. Execute the command passwd
as root.

2. [1,2] Change the permissions on /hana/shared

Bash

chmod 775 /hana/shared

3. [1] Verify that you can log in via SSH to the HANA DB VMs in this site, hana-s1-db2
and hana-s1-db3, without being prompted for a password. If that isn't the case,
exchange SSH keys as described in Enable SSH Access via Public Key (a sketch also
follows after this list).

Bash

ssh root@hana-s1-db2
ssh root@hana-s1-db3

4. [2] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s2-db2
and hana-s2-db3, without being prompted for a password.
If that isn't the case, exchange ssh keys.

Bash

ssh root@hana-s2-db2
ssh root@hana-s2-db3

5. [AH] Install additional packages, which are required for HANA 2.0 SP4 and above.
For more information, see SAP Note 2593824 for your SLES version.

Bash

# In this example, using SLES12 SP5


sudo zypper install libgcc_s1 libstdc++6 libatomic1
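
If the passwordless SSH checks in steps 3 and 4 fail, the key exchange can be done along the following lines. This is a sketch only, run as root on the first node of each site; it assumes that password authentication is temporarily possible.

Bash

# Generate a key pair if not already present
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
# Distribute the public key; example for SITE 1
ssh-copy-id root@hana-s1-db2
ssh-copy-id root@hana-s1-db3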

HANA installation on the first node on each site


1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation
and Update guide . In the instructions that follow, we show the SAP HANA
installation on the first node on SITE 1.
a. Start the hdblcm program as root from the HANA installation software
directory. Use the internal_network parameter and pass the address space for
subnet, which is used for the internal HANA inter-node communication.

Bash

./hdblcm --internal_network=10.23.1.128/26

b. At the prompt, enter the following values:

For Choose an action: enter 1 (for install)


For Additional components for installation: enter 2, 3
For installation path: press Enter (defaults to /hana/shared)
For Local Host Name: press Enter to accept the default
For Do you want to add hosts to the system?: enter n
For SAP HANA System ID: enter HN1
For Instance number [00]: enter 03
For Local Host Worker Group [default]: press Enter to accept the default
For Select System Usage / Enter index [4]: enter 4 (for custom)
For Location of Data Volumes [/hana/data/HN1]: press Enter to accept the
default
For Location of Log Volumes [/hana/log/HN1]: press Enter to accept the
default
For Restrict maximum memory allocation? [n]: enter n
For Certificate Host Name For Host hana-s1-db1 [hana-s1-db1]: press Enter
to accept the default
For SAP Host Agent User (sapadm) Password: enter the password
For Confirm SAP Host Agent User (sapadm) Password: enter the password
For System Administrator (hn1adm) Password: enter the password
For System Administrator Home Directory [/usr/sap/HN1/home]: press Enter
to accept the default
For System Administrator Login Shell [/bin/sh]: press Enter to accept the
default
For System Administrator User ID [1001]: press Enter to accept the default
For Enter ID of User Group (sapsys) [79]: press Enter to accept the default
For System Database User (system) Password: enter the system's password
For Confirm System Database User (system) Password: enter system's
password
For Restart system after machine reboot? [n]: enter n
For Do you want to continue (y/n): validate the summary and if everything
looks good, enter y
2. [2] Repeat the preceding step to install SAP HANA on the first node on SITE 2.

3. [1,2] Verify global.ini

Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the inter subnet, and listeninterface should be set
to .internal . Verify the internal_hostname_resolution section. It should have the
IP addresses for the HANA virtual machines that belong to the inter subnet.

Bash

sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini


# Example from SITE1
[communication]
internal_network = 10.23.1.128/26
listeninterface = .internal
[internal_hostname_resolution]
10.23.1.132 = hana-s1-db1
10.23.1.133 = hana-s1-db2
10.23.1.134 = hana-s1-db3

4. [1,2] Prepare global.ini for installation in a non-shared environment, as described
in SAP note 2080991 .

Bash

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no

5. [1,2] Restart SAP HANA to activate the changes.

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem

6. [1,2] Verify that the client interface will be using the IP addresses from the client
subnet for communication.

Bash
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB
'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.22"

For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .

7. [AH] Change permissions on the data and log directories to avoid HANA
installation error.

Bash

sudo chmod o+w -R /hana/data /hana/log

8. [1] Install the secondary HANA nodes. The example instructions in this step are for
SITE 1.

a. Start the resident hdblcm program as root .

Bash

cd /hana/shared/HN1/hdblcm
./hdblcm

b. At the prompt, enter the following values:

For Choose an action: enter 2 (for add hosts)


For Enter comma separated host names to add: hana-s1-db2, hana-s1-db3
For Additional components for installation: enter 2, 3
For Enter Root User Name [root]: press Enter to accept the default
For Select roles for host 'hana-s1-db2' [1]: 1 (for worker)
For Enter Host Failover Group for host 'hana-s1-db2' [default]: press Enter
to accept the default
For Enter Storage Partition Number for host 'hana-s1-db2' [<<assign
automatically>>]: press Enter to accept the default
For Enter Worker Group for host 'hana-s1-db2' [default]: press Enter to
accept the default
For Select roles for host 'hana-s1-db3' [1]: 1 (for worker)
For Enter Host Failover Group for host 'hana-s1-db3' [default]: press Enter
to accept the default
For Enter Storage Partition Number for host 'hana-s1-db3' [<<assign
automatically>>]: press Enter to accept the default
For Enter Worker Group for host 'hana-s1-db3' [default]: press Enter to
accept the default
For System Administrator (hn1adm) Password: enter the password
For Enter SAP Host Agent User (sapadm) Password: enter the password
For Confirm SAP Host Agent User (sapadm) Password: enter the password
For Certificate Host Name For Host hana-s1-db2 [hana-s1-db2]: press Enter
to accept the default
For Certificate Host Name For Host hana-s1-db3 [hana-s1-db3]: press Enter
to accept the default
For Do you want to continue (y/n): validate the summary and if everything
looks good, enter y

9. [2] Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.

Configure SAP HANA 2.0 System Replication


1. [1] Configure System Replication on SITE 1:

Back up the databases as hn1adm:

Bash

hdbsql -d SYSTEMDB -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupSYS')"
hdbsql -d HN1 -u SYSTEM -p "passwd" -i 03 "BACKUP DATA USING FILE ('initialbackupHN1')"

Copy the system PKI files to the secondary site:

Bash

scp /usr/sap/HN1/SYS/global/security/rsecssfs/data/SSFS_HN1.DAT hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/data/
scp /usr/sap/HN1/SYS/global/security/rsecssfs/key/SSFS_HN1.KEY hana-s2-db1:/usr/sap/HN1/SYS/global/security/rsecssfs/key/

Create the primary site:

Bash

hdbnsutil -sr_enable --name=HANA_S1


2. [2] Configure System Replication on SITE 2:

Register the second site to start the system replication. Run the following
command as <hanasid>adm:

Bash

sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2
sapcontrol -nr 03 -function StartSystem

3. [1] Check replication status

Check the replication status and wait until all databases are in sync.

Bash

sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

# | Database | Host        | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
# | -------- | ----------- | ----- | ------------ | --------- | ------- | --------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
# | HN1      | hana-s1-db3 | 30303 | indexserver  | 5         | 1       | HANA_S1   | hana-s2-db3    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | SYSTEMDB | hana-s1-db1 | 30301 | nameserver   | 1         | 1       | HANA_S1   | hana-s2-db1    | 30301          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30307 | xsengine     | 2         | 1       | HANA_S1   | hana-s2-db1    | 30307          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db1 | 30303 | indexserver  | 3         | 1       | HANA_S1   | hana-s2-db1    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
# | HN1      | hana-s1-db2 | 30303 | indexserver  | 4         | 1       | HANA_S1   | hana-s2-db2    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#
# status system replication site "2": ACTIVE
# overall system replication status: ACTIVE
#
# Local System Replication State
#
# mode: PRIMARY
# site id: 1
# site name: HANA_S1

4. [1,2] Change the HANA configuration so that communication for HANA system
replication is directed through the HANA system replication virtual network
interfaces.

Stop HANA on both sites

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB

Edit global.ini to add the host mapping for HANA system replication: use the
IP addresses from the hsr subnet.

Bash

sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.196 = hana-s1-db1
10.23.1.197 = hana-s1-db2
10.23.1.198 = hana-s1-db3
10.23.1.199 = hana-s2-db1
10.23.1.200 = hana-s2-db2
10.23.1.201 = hana-s2-db3

Start HANA on both sites

Bash

sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB

For more information, see Host Name resolution for System Replication .

Create file system resources


Create a dummy file system cluster resource, which will monitor and report failures in
case there's a problem accessing the NFS-mounted file system /hana/shared . That
allows the cluster to trigger failover in case there's a problem accessing /hana/shared .
For more information, see Handling failed NFS share in SUSE HA cluster for HANA
system replication.

1. [1] Place pacemaker in maintenance mode, in preparation for the creation of the
HANA cluster resources.

Bash

crm configure property maintenance-mode=true

2. [1,2] Create the directory on the NFS mounted file system /hana/shared, which will
be used in the special file system monitoring resource. The directories need to be
created on both sites.

Bash

mkdir -p /hana/shared/HN1/check

3. [AH] Create the directory, which will be used to mount the special file system
monitoring resource. The directory needs to be created on all HANA cluster nodes.

Bash

mkdir -p /hana/check

4. [1] Create the file system cluster resources.

Bash

crm configure primitive fs_HN1_HDB03_fscheck Filesystem \
params device="/hana/shared/HN1/check" \
directory="/hana/check" fstype=nfs4 \
options="bind,defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock" \
op monitor interval=120 timeout=120 on-fail=fence \
op_params OCF_CHECK_LEVEL=20 \
op start interval=0 timeout=120 op stop interval=0 timeout=120

crm configure clone cln_fs_HN1_HDB03_fscheck fs_HN1_HDB03_fscheck \
meta clone-node-max=1 interleave=true

crm configure location loc_cln_fs_HN1_HDB03_fscheck_not_on_mm \
cln_fs_HN1_HDB03_fscheck -inf: hana-s-mm

The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that monitor
operations perform a read/write test on the file system. Without this attribute, the
monitor operation only verifies that the file system is mounted. This can be a
problem because, when connectivity is lost, the file system may remain mounted
despite being inaccessible.

The on-fail=fence attribute is also added to the monitor operation. With this option, if
the monitor operation fails on a node, that node is immediately fenced.

Implement HANA HA hooks


SAPHanaSrMultiTarget and susChkSrv
This important step optimizes the integration with the cluster and the detection of when
a cluster failover is possible. It's highly recommended to configure the SAPHanaSrMultiTarget
Python hook. For HANA 2.0 SP5 and higher, implementing both the SAPHanaSrMultiTarget
and susChkSrv hooks is recommended.

7 Note

SAPHanaSrMultiTarget HA provider replaces SAPHanaSR for HANA scale-out.
SAPHanaSR was described in an earlier version of this document.
See the SUSE blog post about changes with the new HANA HA hook.

The provided steps for the SAPHanaSrMultiTarget hook are for a new installation. Upgrading an
existing environment from SAPHanaSR to the SAPHanaSrMultiTarget provider requires
several changes and is NOT described in this document. If the existing environment
uses no third site for disaster recovery and HANA multi-target system replication isn't
used, the SAPHanaSR HA provider can remain in use.

susChkSrv extends the functionality of the main SAPHanaSrMultiTarget HA provider. It
acts in the situation when the HANA process hdbindexserver crashes. If a single process
crashes, HANA typically tries to restart it. Restarting the indexserver process can take a
long time, during which the HANA database isn't responsive. With susChkSrv
implemented, an immediate and configurable action is executed instead of waiting for
the hdbindexserver process to restart on the same node. In HANA scale-out, susChkSrv acts
for every HANA VM independently. The configured action kills HANA or fences the
affected VM, which triggers a failover in the configured timeout period.

SUSE SLES 15 SP1 or higher is required for the operation of both HANA HA hooks.
The following table shows other dependencies.

| SAP HANA HA hook | HANA version required | SAPHanaSR-ScaleOut version required |
| -------------------- | ----------------------- | ----------------------------------- |
| SAPHanaSrMultiTarget | HANA 2.0 SPS4 or higher | 0.180 or higher |
| susChkSrv | HANA 2.0 SPS5 or higher | 0.184.1 or higher |

Steps to implement both hooks:

1. [1,2] Stop HANA on both system replication sites. Execute as <sid>adm:

Bash

sapcontrol -nr 03 -function StopSystem

2. [1,2] Adjust global.ini on each cluster site. If the prerequisites for the susChkSrv hook
aren't met, the entire block [ha_dr_provider_suschksrv] shouldn't be configured.
You can adjust the behavior of susChkSrv with the parameter action_on_lost. Valid
values are [ ignore | stop | kill | fence ] .

Bash

# Add to global.ini on both sites. Do not copy global.ini between sites.
[ha_dr_provider_saphanasrmultitarget]
provider = SAPHanaSrMultiTarget
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 1

[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 3
action_on_lost = kill

[trace]
ha_dr_saphanasrmultitarget = info

The default location of the HA hooks as delivered by SUSE is /usr/share/SAPHanaSR-
ScaleOut. Using the standard location brings a benefit: the Python hook code
is automatically updated through OS or package updates and is used by HANA
at the next restart. With an optional own path, such as /hana/shared/myHooks, you can
decouple OS updates from the hook version in use.

3. [AH] The cluster requires sudoers configuration on the cluster nodes for
<sid>adm. In this example, that's achieved by creating a new file. Execute the
commands as root, and adapt the value hn1 to the correct lowercase SID of your installation.
Bash

cat << EOF > /etc/sudoers.d/20-saphana
# SAPHanaSR-ScaleOut needs for HA/DR hook scripts
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_gsh *
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=hn1 *
EOF

4. [1,2] Start SAP HANA on both replication sites. Execute as <sid>adm.

Bash

sapcontrol -nr 03 -function StartSystem

5. [A] Verify the hook installation is active on all cluster nodes. Execute as <sid>adm.

Bash

cdtrace
grep HADR.*load.*SAPHanaSrMultiTarget nameserver_*.trc | tail -3
# Example output
# nameserver_hana-s1-db1.31001.000.trc:[14162]{-1}[-1/-1] 2023-01-26
12:53:55.728027 i ha_dr_provider HADRProviderManager.cpp(00083) :
loading HA/DR Provider 'SAPHanaSrMultiTarget' from
/usr/share/SAPHanaSR-ScaleOut/
grep SAPHanaSr.*init nameserver_*.trc | tail -3
# Example output
# nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26
16:30:19.256705 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00080) :
SAPHanaSrMultiTarget.init() CALLING CRM: <sudo /usr/sbin/crm_attribute
-n hana_hn1_gsh -v 2.2 -l reboot> rc=0
# nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26
16:30:19.256739 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00081) :
SAPHanaSrMultiTarget.init() Running srHookGeneration 2.2, see attribute
hana_hn1_gsh too

Verify the susChkSrv hook installation. Execute as <sid>adm.

Bash

cdtrace
egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)'
nameserver_suschksrv.trc
# Example output
# 2023-01-19 08:23:10.581529 [1674116590-10005] susChkSrv.init()
version 0.7.7, parameter info: action_on_lost=fence stop_timeout=20
kill_signal=9
# 2023-01-19 08:23:31.553566 [1674116611-14022] START: indexserver
event looks like graceful tenant start
# 2023-01-19 08:23:52.834813 [1674116632-15235] START: indexserver
event looks like graceful tenant start (indexserver started)

Create SAP HANA cluster resources


1. [1] Create the HANA cluster resources. Execute the following commands as root .

a. Make sure the cluster is already in maintenance mode.

b. Next, create the HANA Topology resource.

Bash

sudo crm configure primitive rsc_SAPHanaTopology_HN1_HDB03 ocf:suse:SAPHanaTopology \
op monitor interval="10" timeout="600" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
params SID="HN1" InstanceNumber="03"

sudo crm configure clone cln_SAPHanaTopology_HN1_HDB03 rsc_SAPHanaTopology_HN1_HDB03 \
meta clone-node-max="1" target-role="Started" interleave="true"

c. Next, create the HANA instance resource.

7 Note

This article contains references to terms that Microsoft no longer uses.


When these terms are removed from the software, we'll remove them from
this article.

Bash

sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
params SID="HN1" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

sudo crm configure ms msl_SAPHana_HN1_HDB03 rsc_SAPHana_HN1_HDB03 \
meta clone-node-max="1" master-max="1" interleave="true"

) Important

As a best practice, we recommend that you set AUTOMATED_REGISTER
to false only while you perform thorough failover tests, to prevent a failed
primary instance from automatically registering as secondary. Once the failover
tests have completed successfully, set AUTOMATED_REGISTER to true, so that
after takeover, system replication can resume automatically. A command sketch
follows at the end of this section.

d. Create Virtual IP and associated resources.

Bash

sudo crm configure primitive rsc_ip_HN1_HDB03 ocf:heartbeat:IPaddr2 \
op monitor interval="10s" timeout="20s" \
params ip="10.23.0.27"

sudo crm configure primitive rsc_nc_HN1_HDB03 azure-lb port=62503 \
op monitor timeout=20s interval=10 \
meta resource-stickiness=0

sudo crm configure group g_ip_HN1_HDB03 rsc_ip_HN1_HDB03 rsc_nc_HN1_HDB03

e. Create the cluster constraints

Bash

# Colocate the IP with HANA master
sudo crm configure colocation col_saphana_ip_HN1_HDB03 4000: g_ip_HN1_HDB03:Started \
msl_SAPHana_HN1_HDB03:Master

# Start HANA Topology before HANA instance
sudo crm configure order ord_SAPHana_HN1_HDB03 Optional: cln_SAPHanaTopology_HN1_HDB03 \
msl_SAPHana_HN1_HDB03

# HANA resources don't run on the majority maker node
sudo crm configure location loc_SAPHanaCon_not_on_majority_maker msl_SAPHana_HN1_HDB03 -inf: hana-s-mm
sudo crm configure location loc_SAPHanaTop_not_on_majority_maker cln_SAPHanaTopology_HN1_HDB03 -inf: hana-s-mm
2. [1] Configure additional cluster properties

Bash

sudo crm configure rsc_defaults resource-stickiness=1000


sudo crm configure rsc_defaults migration-threshold=50

3. [1] Take the cluster out of maintenance mode. Make sure that the cluster status is
ok and that all of the resources are started.

Bash

# Cleanup any failed resources - the following command is example


crm resource cleanup rsc_SAPHana_HN1_HDB03

# Place the cluster out of maintenance mode


sudo crm configure property maintenance-mode=false

4. [1] Verify the communication between the HANA HA hook and the cluster, showing
status SOK for SID and both replication sites with status P(rimary) or S(econdary).

Bash

sudo /usr/sbin/SAPHanaSR-showAttr
# Expected result
# Global cib-time maintenance prim sec sync_state upd
# ---------------------------------------------------------------------
# HN1 Fri Jan 27 10:38:46 2023 false HANA_S1 - SOK ok
#
# Sites lpt lss mns srHook srr
# -----------------------------------------------
# HANA_S1 1674815869 4 hana-s1-db1 PRIM P
# HANA_S2 30 4 hana-s2-db1 SWAIT S

7 Note

The timeouts in the above configuration are just examples and may need to
be adapted to the specific HANA setup. For instance, you may need to
increase the start timeout, if it takes longer to start the SAP HANA database.
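
Once the failover tests have completed, the AUTOMATED_REGISTER change recommended in the Important note above could be applied with crmsh along these lines (a sketch; the resource name is the one from this example, and the parameter values are assumed to be true/false):

Bash

# Switch AUTOMATED_REGISTER after successful failover tests
sudo crm resource param rsc_SAPHana_HN1_HDB03 set AUTOMATED_REGISTER true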

Test SAP HANA failover

7 Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we’ll remove them from this article.

1. Before you start a test, check the cluster and SAP HANA system replication status.

a. Verify that there are no failed cluster actions

Bash

#Verify that there are no failed cluster actions


crm status
# Example
#7 nodes configured
#24 resource instances configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#
#Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started hana-s-mm
# Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Stopped: [ hana-s-mm ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Stopped: [ hana-s-mm ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hana-s1-db1 ]
# Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-
s2-db3 ]
# Stopped: [ hana-s-mm ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-
s1-db1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-
s1-db1

b. Verify that SAP HANA system replication is in sync

Bash

# Verify HANA HSR is in sync
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
#| Database | Host        | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
#| -------- | ----------- | ----- | ------------ | --------- | ------- | --------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
#| SYSTEMDB | hana-s1-db1 | 30301 | nameserver   | 1         | 1       | HANA_S1   | hana-s2-db1    | 30301          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#| HN1      | hana-s1-db1 | 30307 | xsengine     | 2         | 1       | HANA_S1   | hana-s2-db1    | 30307          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#| HN1      | hana-s1-db1 | 30303 | indexserver  | 3         | 1       | HANA_S1   | hana-s2-db1    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#| HN1      | hana-s1-db3 | 30303 | indexserver  | 4         | 1       | HANA_S1   | hana-s2-db3    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#| HN1      | hana-s1-db2 | 30303 | indexserver  | 5         | 1       | HANA_S1   | hana-s2-db2    | 30303          | 2                 | HANA_S2             | YES                     | SYNC             | ACTIVE             |                            |
#
#status system replication site "1": ACTIVE
#overall system replication status: ACTIVE
#
#Local System Replication State
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#mode: PRIMARY
#site id: 1
#site name: HANA_S1

2. We recommend thoroughly validating the SAP HANA cluster configuration by
performing the tests documented in HA for SAP HANA on Azure VMs on SLES and
in SLES Replication scale-out Performance Optimized Scenario .

3. Verify the cluster configuration for a failure scenario, when a node loses access to
the NFS share ( /hana/shared ).

The SAP HANA resource agents depend on binaries stored on /hana/shared to
perform operations during failover. File system /hana/shared is mounted over NFS
in the presented configuration. A test that can be performed is to create a
temporary firewall rule to block access to the /hana/shared NFS-mounted file
system on one of the primary site VMs. This approach validates that the cluster will
fail over if access to /hana/shared is lost on the active system replication site.

Expected result: When you block access to the /hana/shared NFS-mounted file
system on one of the primary site VMs, the monitoring operation that performs a
read/write operation on the file system fails, because it's not able to access the file
system, and triggers HANA resource failover. The same result is expected when
your HANA node loses access to the NFS share.

You can check the state of the cluster resources by executing crm_mon or crm
status . Resource state before starting the test:

Bash

# Output of crm_mon
#7 nodes configured
#24 resource instances configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
#
#stonith-sbd (stonith:external/sbd): Started hana-s-mm
# Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hana-s1-db1 ]
# Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-
s2-db3 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-
s2-db1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-
s2-db1

To simulate failure for /hana/shared :

If using NFS on ANF, first confirm the IP address for the /hana/shared ANF
volume on the primary site. You can do that by running df -kh|grep
/hana/shared .

If using NFS on Azure Files, first determine the IP address of the private end
point for your storage account.

Then, set up a temporary firewall rule to block access to the IP address of the
/hana/shared NFS file system by executing the following command on one of the

primary HANA system replication site VMs.


In this example, the command was executed on hana-s1-db1 for ANF volume
/hana/shared .

Bash

iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP

The cluster resources will be migrated to the other HANA system replication site.

If you set AUTOMATED_REGISTER="false", you need to configure SAP HANA
system replication on the secondary site. In this case, you can execute these
commands to reconfigure SAP HANA as secondary.

Bash

# Execute on the secondary


su - hn1adm
# Make sure HANA is not running on the secondary site. If it is
started, stop HANA
sapcontrol -nr 03 -function StopWait 600 10
# Register the HANA secondary site
hdbnsutil -sr_register --name=HANA_S1 --remoteHost=hana-s2-db1 --remoteInstance=03 --replicationMode=sync
# Switch back to root and cleanup failed resources
crm resource cleanup rsc_SAPHana_HN1_HDB03

The state of the resources, after the test:

Bash

# Output of crm_mon
#7 nodes configured
#24 resource instances configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
#
#stonith-sbd (stonith:external/sbd): Started hana-s-mm
# Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hana-s2-db1 ]
# Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-
s2-db3 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-
s2-db1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-
s2-db1

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability of IBM Db2 LUW on
Azure VMs on Red Hat Enterprise Linux
Server
Article • 01/19/2024

IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery
(HADR) configuration consists of one node that runs a primary database instance and
at least one node that runs a secondary database instance. Changes to the primary
database instance are replicated to a secondary database instance synchronously or
asynchronously, depending on your configuration.

7 Note

This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.

This article describes how to deploy and configure the Azure virtual machines (VMs),
install the cluster framework, and install the IBM Db2 LUW with HADR configuration.

The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP
software installation. To help you accomplish these tasks, we provide references to SAP
and IBM installation manuals. This article focuses on parts that are specific to the Azure
environment.

The supported IBM Db2 versions are 10.5 and later, as documented in SAP note
1928533 .

Before you begin an installation, see the following SAP notes and documentation:


SAP note Description

1928533 SAP applications on Azure: Supported products and Azure VM types

2015553 SAP on Azure: Support prerequisites

2178632 Key monitoring metrics for SAP on Azure

2191498 SAP on Linux with Azure: Enhanced monitoring

2243692 Linux on Azure (IaaS) VM: SAP license issues



2002167 Red Hat Enterprise Linux 7.x: Installation and Upgrade

2694118 Red Hat Enterprise Linux HA Add-On on Azure

1999351 Troubleshooting enhanced Azure monitoring for SAP

2233094 DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows -
additional information

1612105 DB6: FAQ on Db2 with HADR


Documentation

SAP Community Wiki : Has all of the required SAP Notes for Linux

Azure Virtual Machines planning and implementation for SAP on Linux guide

Azure Virtual Machines deployment for SAP on Linux (this article)

Azure Virtual Machines database management system(DBMS) deployment for SAP on Linux guide

SAP workload on Azure planning and deployment checklist

Overview of the High Availability Add-On for Red Hat Enterprise Linux 7

High Availability Add-On Administration

High Availability Add-On Reference

Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members

Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure

IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload

IBM Db2 HADR 11.1

IBM Db2 HADR 10.5

Support Policy for RHEL High Availability Clusters - Management of IBM Db2 for Linux, Unix, and
Windows in a Cluster

Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure
virtual machines, which are deployed in an virtual machine scale set with flexible
orchestration across availability zones or in an availability set.

The following graphics display a setup of two database server Azure VMs. Both database
server Azure VMs have their own storage attached and are up and running. In HADR,
one database instance in one of the Azure VMs has the role of the primary instance. All
clients are connected to the primary instance. All changes in database transactions are
persisted locally in the Db2 transaction log. As the transaction log records are persisted
locally, the records are transferred via TCP/IP to the database instance on the second
database server, the standby server, or standby instance. The standby instance updates
the local database by rolling forward the transferred transaction log records. In this way,
the standby server is kept in sync with the primary server.

HADR is only a replication functionality. It has no failure detection and no automatic
takeover or failover facilities. A takeover or transfer to the standby server must be
initiated manually by a database administrator. To achieve an automatic takeover and
failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker
monitors the two database server instances. When the primary database server instance
crashes, Pacemaker initiates an automatic HADR takeover by the standby server.
Pacemaker also ensures that the virtual IP address is assigned to the new primary server.

To have SAP application servers connect to the primary database, you need a virtual host
name and a virtual IP address. After a failover, the SAP application servers connect to the
new primary database instance. In an Azure environment, an Azure load balancer is
required to use a virtual IP address in the way that's required for HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a
highly available SAP system setup, the following image presents an overview of a highly
available setup of an SAP system based on IBM Db2 database. This article covers only
IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.

High-level overview of the required steps


To deploy an IBM Db2 configuration, you need to follow these steps:

Plan your environment.
Deploy the VMs.
Update RHEL Linux and configure file systems.
Install and configure Pacemaker.
Set up a GlusterFS cluster or Azure NetApp Files.
Install ASCS/ERS on a separate cluster.
Install IBM Db2 database with Distributed/High Availability option (SWPM).
Install and create a secondary database node and instance, and configure HADR.
Confirm that HADR is working.
Apply the Pacemaker configuration to control IBM Db2.
Configure Azure Load Balancer.
Install primary and dialog application servers.
Check and adapt the configuration of SAP application servers.
Perform failover and takeover tests.

Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the
foundation for deploying a configuration of Db2 with HADR in Azure. Key elements that
need to be part of planning for IBM Db2 LUW (database part of SAP environment) are
listed in the following table:


Topic Short description

Define Azure resource groups Resource groups where you deploy VM, virtual network, Azure
Load Balancer, and other resources. Can be existing or new.

Virtual network / Subnet Where VMs for IBM Db2 and Azure Load Balancer are being
definition deployed. Can be existing or newly created.

Virtual machines hosting IBM Db2 LUW      VM size, storage, networking, IP address.

Virtual host name and virtual The virtual IP or host name is used for connection of SAP
IP for IBM Db2 database application servers. db-virt-hostname, db-virt-ip.

Azure fencing Method to prevent split-brain situations.

Azure Load Balancer Usage of Standard (recommended), probe port for Db2 database
(our recommendation 62500) probe-port.

Name resolution How name resolution works in the environment. DNS service is
highly recommended. Local hosts file can be used.

For more information about Linux Pacemaker in Azure, see Setting up Pacemaker on
Red Hat Enterprise Linux in Azure.

) Important

For Db2 versions 11.5.6 and higher we highly recommend Integrated solution using
Pacemaker from IBM.

Integrated solution using Pacemaker
Alternate or additional configurations available on Microsoft Azure
Deployment on Red Hat Enterprise Linux
The resource agent for IBM Db2 LUW is included in Red Hat Enterprise Linux Server HA
Addon. For the setup that's described in this document, you should use Red Hat
Enterprise Linux for SAP. The Azure Marketplace contains an image for Red Hat
Enterprise Linux 7.4 for SAP or higher that you can use to deploy new Azure virtual
machines. Be aware of the various support or service models that are offered by Red Hat
through the Azure Marketplace when you choose a VM image in the Azure VM
Marketplace.

Hosts: DNS updates


Make a list of all host names, including virtual host names, and update your DNS servers
to enable proper IP address to host-name resolution. If a DNS server doesn't exist or
you can't update and create DNS entries, you need to use the local host files of the
individual VMs that are participating in this scenario. If you're using host files entries,
make sure that the entries are applied to all VMs in the SAP system environment.
However, we recommend that you use your DNS service that, ideally, extends into Azure.
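
If you do need to fall back to local host files, entries like the following sketch
must be present on every VM. The host names match the examples used later in this
article; the node IP addresses are placeholders, while 10.100.0.40 is the virtual IP
used in the Pacemaker configuration below:

Bash

# /etc/hosts - example entries (placeholder node addresses)
10.100.0.10   az-idb01
10.100.0.11   az-idb02
10.100.0.40   db-virt-hostname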

Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of
supported OS versions for Azure VMs and Db2 releases is available in SAP note
1928533 . The list of OS releases by individual Db2 release is available in the SAP
Product Availability Matrix. We highly recommend a minimum of Red Hat Enterprise
Linux 7.4 for SAP because of Azure-related performance improvements in this or later
Red Hat Enterprise Linux versions.

1. Create or select a resource group.
2. Create or select a virtual network and subnet.
3. Choose a suitable deployment type for SAP virtual machines. Typically a virtual
machine scale set with flexible orchestration.
4. Create Virtual Machine 1.
a. Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
b. Select the scale set, availability zone or availability set created in step 3.
5. Create Virtual Machine 2.
a. Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
b. Select the scale set, availability zone or availability set created in step 3 (not the
same zone as in step 4).
6. Add data disks to the VMs, and then check the recommendation of a file system
setup in the article IBM Db2 Azure Virtual Machines DBMS deployment for SAP
workload.

Install the IBM Db2 LUW and SAP environment


Before you start the installation of an SAP environment based on IBM Db2 LUW, review
the following documentation:

Azure documentation.
SAP documentation.
IBM documentation.

Links to this documentation are provided in the introductory section of this article.

Check the SAP installation manuals about installing NetWeaver-based applications on
IBM Db2 LUW. You can find the guides on the SAP Help portal by using the SAP
Installation Guide Finder .

You can reduce the number of guides displayed in the portal by setting the following
filters:

I want to: Install a new system.
My Database: IBM Db2 for Linux, Unix, and Windows.
Additional filters for SAP NetWeaver versions, stack configuration, or operating
system.

Red Hat firewall rules


Red Hat Enterprise Linux has the firewall enabled by default.

Bash

#Allow access to SWPM tool. Rule is not permanent.
sudo firewall-cmd --add-port=4237/tcp

Installation hints for setting up IBM Db2 LUW with HADR


To set up the primary IBM Db2 LUW database instance:

Use the high availability or distributed option.
Install the SAP ASCS/ERS and Database instance.
Take a backup of the newly installed database.
) Important

Write down the "Database Communication port" that's set during installation. It
must be the same port number for both database instances.

IBM Db2 HADR settings for Azure


When you use an Azure Pacemaker fencing agent, set the following parameters:

HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 240
HADR timeout value (HADR_TIMEOUT) = 45

We recommend the preceding parameters based on initial failover/takeover testing. It's
mandatory that you test for proper functionality of failover and takeover with these
parameter settings. Because individual configurations can vary, the parameters might
require adjustment.
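
As a sketch, these values could be set as user db2<sid> on both nodes, using database
SID ID2 from this article's examples; note that changing them typically requires
restarting HADR before they take effect:

Bash

# Minimal sketch - set the recommended HADR parameters (run as db2<sid>)
db2 update db cfg for ID2 using HADR_PEER_WINDOW 240
db2 update db cfg for ID2 using HADR_TIMEOUT 45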

7 Note

Specific to IBM Db2 with HADR configuration with normal startup: The secondary
or standby database instance must be up and running before you can start the
primary database instance.

7 Note

For installation and configuration that's specific to Azure and Pacemaker: During
the installation procedure through SAP Software Provisioning Manager, there is an
explicit question about high availability for IBM Db2 LUW:

Do not select IBM Db2 pureScale.
Do not select Install IBM Tivoli System Automation for Multiplatforms.
Do not select Generate cluster configuration files.

To set up the Standby database server by using the SAP homogeneous system copy
procedure, execute these steps:

1. Select the System copy option > Target systems > Distributed > Database
instance.
2. As a copy method, select Homogeneous System so that you can use backup to
restore a backup on the standby server instance.
3. When you reach the exit step to restore the database for homogeneous system
copy, exit the installer. Restore the database from a backup of the primary host. All
subsequent installation phases have already been executed on the primary
database server.

Red Hat firewall rules for DB2 HADR

Add firewall rules to allow traffic to Db2, and between the Db2 nodes, for HADR to work:

Database communication port. If using partitions, add those ports too.
HADR port (value of Db2 parameter HADR_LOCAL_SVC).
Azure probe port.

Bash

sudo firewall-cmd --add-port=<port>/tcp --permanent


sudo firewall-cmd --reload
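
For example, with the database communication port 5912 (the port used in the JDBC URL
later in this article), a hypothetical HADR_LOCAL_SVC port of 51012, and the probe port
62500, the rules could look like this:

Bash

# 5912 = database communication port, 51012 = hypothetical HADR_LOCAL_SVC port,
# 62500 = Azure Load Balancer probe port
sudo firewall-cmd --add-port=5912/tcp --permanent
sudo firewall-cmd --add-port=51012/tcp --permanent
sudo firewall-cmd --add-port=62500/tcp --permanent
sudo firewall-cmd --reload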

IBM Db2 HADR check


For demonstration purposes and the procedures described in this article, the database
SID is ID2.

After you've configured HADR and the status is PEER and CONNECTED on the primary
and standby nodes, perform the following check:

Bash

# Execute as user db2<sid>
db2pd -hadr -db <SID>

#Primary output:
Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date
2019-06-25-10.55.25.349375

HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.076494
(1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 5
HEARTBEAT_EXPECTED = 52
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 5
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 369280
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 132242668
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 300
PEER_WINDOW_END = 06/25/2019 11:12:03.000000
(1561461123)
READS_ON_STANDBY_ENABLED = N

#Secondary output:
Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date
2019-06-25-10.56.19.820474

HADR_ROLE = STANDBY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 0
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.078116
(1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 0
HEARTBEAT_EXPECTED = 10
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 1
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 367360
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 0
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 1000
PEER_WINDOW_END = 06/25/2019 11:12:59.000000
(1561461179)
READS_ON_STANDBY_ENABLED = N

Configure Azure Load Balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to set up a standard load balancer for the
high availability setup of the Db2 database.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create a frontend IP. Select the same virtual
network and subnet as your DB virtual machines.
2. Backend Pool: Create a backend pool and add the DB VMs.
3. Inbound rules: Create a load balancing rule. Follow the same steps for both
load balancing rules.

Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create health probe with the following details
Protocol: TCP
Port: [for example: 625<instance-no.>]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in Portal, isn't respected. So to control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It is currently not possible to set this property using Azure portal, so use
either the Azure CLI or PowerShell command.

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
another IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health

probes.

[A] Add firewall rule for probe port:

Bash
sudo firewall-cmd --add-port=<probe-port>/tcp --permanent
sudo firewall-cmd --reload

Create the Pacemaker cluster


To create a basic Pacemaker cluster for this IBM Db2 server, see Setting up Pacemaker
on Red Hat Enterprise Linux in Azure.

Db2 Pacemaker configuration


When you use Pacemaker for automatic failover in the event of a node failure, you need
to configure your Db2 instances and Pacemaker accordingly. This section describes this
type of configuration.

The following items are prefixed with either:

[A]: Applicable to all nodes
[1]: Applicable only to node 1
[2]: Applicable only to node 2

[A] Prerequisite for Pacemaker configuration:

Shut down both database servers as user db2<sid> with db2stop.

Change the shell environment for db2<sid> user to /bin/ksh:

Bash

# Install korn shell:
sudo yum install ksh
# Change user's shell:
sudo usermod -s /bin/ksh db2<sid>

Pacemaker configuration
1. [1] IBM Db2 HADR-specific Pacemaker configuration:

Bash

# Put Pacemaker into maintenance mode
sudo pcs property set maintenance-mode=true
2. [1] Create IBM Db2 resources:

If building a cluster on RHEL 7.x, make sure to update package resource-agents to
version resource-agents-4.1.1-61.el7_9.15 or higher. Use the following
commands to create the cluster resources:

Bash

# Replace bold strings with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' master meta notify=true resource-stickiness=5000

# Configure resource stickiness and correct cluster notifications for master resource
sudo pcs resource update Db2_HADR_ID2-master meta notify=true resource-stickiness=5000

# Configure virtual IP - same as Azure Load Balancer IP
sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40'

# Configure probe port for Azure Load Balancer
sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500

# Create a group for the IP and the Azure Load Balancer probe port
sudo pcs resource group add g_ipnc_db2id2_ID2 vip_db2id2_ID2 nc_db2id2_ID2

# Create colocation constraint - keep the Db2 HADR master and the group on the same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-master

# Create start order constraint
sudo pcs constraint order promote Db2_HADR_ID2-master then g_ipnc_db2id2_ID2

If building a cluster on RHEL 8.x, make sure to update package resource-agents to
version resource-agents-4.1.1-93.el8 or higher. For details, see the Red Hat KBA db2
resource with HADR fails promote with state
PRIMARY/REMOTE_CATCHUP_PENDING/CONNECTED . Use the following
commands to create the cluster resources:

Bash

# Replace bold strings with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' promotable meta notify=true resource-stickiness=5000

# Configure resource stickiness and correct cluster notifications for master resource
sudo pcs resource update Db2_HADR_ID2-clone meta notify=true resource-stickiness=5000

# Configure virtual IP - same as Azure Load Balancer IP
sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40'

# Configure probe port for Azure Load Balancer
sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500

# Create a group for the IP and the Azure Load Balancer probe port
sudo pcs resource group add g_ipnc_db2id2_ID2 vip_db2id2_ID2 nc_db2id2_ID2

# Create colocation constraint - keep the Db2 HADR master and the group on the same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-clone

# Create start order constraint
sudo pcs constraint order promote Db2_HADR_ID2-clone then g_ipnc_db2id2_ID2
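
On either release, you can verify the installed resource-agents version before
creating the resources, for example:

Bash

# Check the installed resource-agents package version
rpm -q resource-agents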

3. [1] Start IBM Db2 resources:

Put Pacemaker out of maintenance mode.

Bash

# Put Pacemaker out of maintenance mode - this starts IBM Db2
sudo pcs property set maintenance-mode=false

4. [1] Make sure that the cluster status is OK and that all of the resources are started.
It's not important which node the resources are running on.

Bash

sudo pcs status


2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-
idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-
idb01

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

) Important

You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools.
If you use db2 commands such as db2stop, Pacemaker detects the action as a
failure of resource. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker suspends monitoring resources, and
you can then use normal db2 administration commands.

Make changes to SAP profiles to use virtual IP for connection
To connect to the primary instance of the HADR configuration, the SAP application layer
needs to use the virtual IP address that you defined and configured for the Azure Load
Balancer. The following changes are required:

/sapmnt/<SID>/profile/DEFAULT.PFL

Bash

SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname

/sapmnt/<SID>/global/db6/db2cli.ini

Bash

Hostname=db-virt-hostname

Install primary and dialog application servers


When you install primary and dialog application servers against a Db2 HADR
configuration, use the virtual host name that you picked for the configuration.
If you performed the installation before you created the Db2 HADR configuration, make
the changes as described in the preceding section and as follows for SAP Java stacks.

ABAP+Java or Java stack systems JDBC URL check


Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool
is a graphical tool, you need to have X server installed:

1. Sign in to the primary application server of the J2EE instance and execute:

Bash

sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh

2. In the left frame, choose security store.

3. In the right frame, choose the key jdbc/pool/\<SAPSID>/url .

4. Change the host name in the JDBC URL to the virtual host name.

Bash

jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0

5. Select Add.

6. To save your changes, select the disk icon at the upper left.

7. Close the configuration tool.

8. Restart the Java instance.

Configure log archiving for HADR setup


To configure the Db2 log archiving for HADR setup, we recommend that you configure
both the primary and the standby database to have automatic log retrieval capability
from all log archive locations. Both the primary and standby database must be able to
retrieve log archive files from all the log archive locations to which either one of the
database instances might archive log files.

The log archiving is performed only by the primary database. If you change the HADR
roles of the database servers or if a failure occurs, the new primary database is
responsible for log archiving. If you've set up multiple log archive locations, your logs
might be archived twice. In the event of a local or remote catch-up, you might also have
to manually copy the archived logs from the old primary server to the active log location
of the new primary server.

We recommend configuring a common NFS share or GlusterFS, where logs are written
from both nodes. The NFS share or GlusterFS has to be highly available.

You can use existing highly available NFS shares or GlusterFS for transports or a profile
directory. For more information, see:

GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver.
High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux
with Azure NetApp Files for SAP Applications.
Azure NetApp Files (to create NFS shares).
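
For example, assuming a highly available NFS share is mounted at /mnt/db2archive on
both nodes (a placeholder path), log archiving to that common location could be
configured as user db2<sid> like this:

Bash

# Minimal sketch - archive logs to a shared, highly available location (placeholder path)
db2 update db cfg for ID2 using LOGARCHMETH1 DISK:/mnt/db2archive/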

Test the cluster setup


This section describes how you can test your Db2 HADR setup. Every test assumes IBM
Db2 primary is running on the az-idb01 virtual machine. User with sudo privileges or
root (not recommended) must be used.

The initial status for all test cases is explained here (crm_mon -r or pcs status):

pcs status is a snapshot of Pacemaker status at execution time.
crm_mon -r is a continuous output of Pacemaker status.

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
The original status in an SAP system is documented in Transaction DBACOCKPIT >
Configuration > Overview, as shown in the following image:

Test takeover of IBM Db2

) Important

Before you start the test, make sure that:

Pacemaker doesn't have any failed actions (pcs status).

There are no location constraints (leftovers of migration test).

The IBM Db2 HADR synchronization is working. Check with user db2<sid>.
Bash

db2pd -hadr -db <DBSID>

Migrate the node that's running the primary Db2 database by executing the following
command:

Bash

# On RHEL 7.x
sudo pcs resource move Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource move Db2_HADR_ID2-clone --master

After the migration is done, the crm status output looks like:

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

The new status in the SAP system is documented in Transaction DBACOCKPIT >
Configuration > Overview, as shown in the following image:
Resource migration with "pcs resource move" creates location constraints. Location
constraints in this case prevent the IBM Db2 instance from running on az-idb01. If
location constraints aren't deleted, the resource can't fail back.

Remove the location constraint, and the standby node will be started on az-idb01.

Bash

# On RHEL 7.x
sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource clear Db2_HADR_ID2-clone

And the cluster status changes to:

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb01


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
Migrate the resource back to az-idb01 and clear the location constraints:

Bash

# On RHEL 7.x
sudo pcs resource move Db2_HADR_ID2-master az-idb01
sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource move Db2_HADR_ID2-clone --master
sudo pcs resource clear Db2_HADR_ID2-clone

On RHEL 7.x - pcs resource move <resource_name> <host>: Creates location
constraints and can cause issues with takeover.
On RHEL 8.x - pcs resource move <resource_name> --master: Creates location
constraints and can cause issues with takeover.
pcs resource clear <resource_name>: Clears location constraints.
pcs resource cleanup <resource_name>: Clears all errors of the resource.

Test a manual takeover


You can test a manual takeover by stopping the Pacemaker service on the az-idb01 node:

Bash

systemctl stop pacemaker

Status on az-idb02:

Bash
2 nodes configured
5 resources configured

Node az-idb01: pending


Online: [ az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

After the failover, you can start the service again on az-idb01.

Bash

systemctl start pacemaker

Kill the Db2 process on the node that runs the HADR
primary database
Bash

#Kill main db2 process - db2sysc
[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2ptr 34598 34596 8 14:21 ? 00:00:07 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 34598

The Db2 instance is going to fail, and Pacemaker will move the master node and report
the following status:

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]


Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=49,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms

Pacemaker restarts the Db2 primary database instance on the same node, or it fails over
to the node that's running the secondary database instance and an error is reported.

Kill the Db2 process on the node that runs the secondary
database instance
Bash

[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2id2 23144 23142 2 09:53 ? 00:00:13 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 23144

The node gets into a failed state, and an error is reported.

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Failed Actions:
* Db2_HADR_ID2_monitor_20000 on az-idb02 'not running' (7): call=144,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms

The Db2 instance gets restarted in the secondary role that it was assigned before.

Stop DB via db2stop force on the node that runs the HADR primary database instance
As user db2<sid> execute command db2stop force:

Bash

az-idb01:db2ptr> db2stop force

Failure detected:

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Slaves: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Stopped
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Stopped

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms

The Db2 HADR secondary database instance got promoted into the primary role.

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:


rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms

Crash the VM that runs the HADR primary database instance with "halt"
Bash

#Linux kernel panic.
sudo sh -c 'echo b > /proc/sysrq-trigger'

In such a case, Pacemaker detects that the node that's running the primary database
instance isn't responding.

Bash

2 nodes configured
5 resources configured

Node az-idb01: UNCLEAN (online)


Online: [ az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ]
Slaves: [ az-idb02 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

The next step is to check for a split-brain situation. After the surviving node has
determined that the node that last ran the primary database instance is down, a failover
of resources is executed.
Bash

2 nodes configured
5 resources configured

Online: [ az-idb02 ]
OFFLINE: [ az-idb01 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Stopped: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

In the event of a kernel panic, the failed node is restarted by the fencing agent. After
the failed node is back online, you must start the Pacemaker cluster:

Bash

sudo pcs cluster start

Starting the cluster starts the Db2 instance in the secondary role.

Bash

2 nodes configured
5 resources configured

Online: [ az-idb01 az-idb02 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started az-idb02


Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ]
Slaves: [ az-idb01 ]
Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Next steps
High-availability architecture and scenarios for SAP NetWeaver
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
High availability of IBM Db2 LUW on
Azure VMs on SUSE Linux Enterprise
Server with Pacemaker
Article • 01/19/2024

IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery
(HADR) configuration consists of one node that runs a primary database instance and
at least one node that runs a secondary database instance. Changes to the primary
database instance are replicated to a secondary database instance synchronously or
asynchronously, depending on your configuration.

7 Note

This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.

This article describes how to deploy and configure the Azure virtual machines (VMs),
install the cluster framework, and install the IBM Db2 LUW with HADR configuration.

The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP
software installation. To help you accomplish these tasks, we provide references to SAP
and IBM installation manuals. This article focuses on parts that are specific to the Azure
environment.

The supported IBM Db2 versions are 10.5 and later, as documented in SAP note
1928533 .

Before you begin an installation, see the following SAP notes and documentation:


SAP note Description

1928533 SAP applications on Azure: Supported products and Azure VM types

2015553 SAP on Azure: Support prerequisites

2178632 Key monitoring metrics for SAP on Azure

2191498 SAP on Linux with Azure: Enhanced monitoring

2243692 Linux on Azure (IaaS) VM: SAP license issues



1984787 SUSE LINUX Enterprise Server 12: Installation notes

1999351 Troubleshooting enhanced Azure monitoring for SAP

2233094 DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows -
additional information

1612105 DB6: FAQ on Db2 with HADR


Documentation

SAP Community Wiki : Has all of the required SAP Notes for Linux

Azure Virtual Machines planning and implementation for SAP on Linux guide

Azure Virtual Machines deployment for SAP on Linux (this article)

Azure Virtual Machines database management system(DBMS) deployment for SAP on Linux guide

SAP workload on Azure planning and deployment checklist

SUSE Linux Enterprise Server for SAP Applications 12 SP4 best practices guides

SUSE Linux Enterprise High Availability Extension 12 SP4

IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload

IBM Db2 HADR 11.1

IBM Db2 HADR 10.5

Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure
virtual machines, which are deployed in a virtual machine scale set with flexible
orchestration across availability zones or in an availability set.

The following graphics display a setup of two database server Azure VMs. Both database
server Azure VMs have their own storage attached and are up and running. In HADR,
one database instance in one of the Azure VMs has the role of the primary instance. All
clients are connected to this primary instance. All changes in database transactions are
persisted locally in the Db2 transaction log. As the transaction log records are persisted
locally, the records are transferred via TCP/IP to the database instance on the second
database server, the standby server, or standby instance. The standby instance updates
the local database by rolling forward the transferred transaction log records. In this way,
the standby server is kept in sync with the primary server.

HADR is only a replication functionality. It has no failure detection and no automatic
takeover or failover facilities. A takeover or transfer to the standby server must be
initiated manually by a database administrator. To achieve an automatic takeover and
failure detection, you can use the Linux Pacemaker clustering feature. Pacemaker
monitors the two database server instances. When the primary database server instance
crashes, Pacemaker initiates an automatic HADR takeover by the standby server.
Pacemaker also ensures that the virtual IP address is assigned to the new primary server.

To have SAP application servers connect to the primary database, you need a virtual host
name and a virtual IP address. After a failover, the SAP application servers connect to the
new primary database instance. In an Azure environment, an Azure load balancer is
required to use a virtual IP address in the way that's required for HADR of IBM Db2.

To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a
highly available SAP system setup, the following image presents an overview of a highly
available setup of an SAP system based on IBM Db2 database. This article covers only
IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
High-level overview of the required steps
To deploy an IBM Db2 configuration, you need to follow these steps:

Plan your environment.
Deploy the VMs.
Update SUSE Linux and configure file systems.
Install and configure Pacemaker.
Install highly available NFS.
Install ASCS/ERS on a separate cluster.
Install IBM Db2 database with Distributed/High Availability option (SWPM).
Install and create a secondary database node and instance, and configure HADR.
Confirm that HADR is working.
Apply the Pacemaker configuration to control IBM Db2.
Configure Azure Load Balancer.
Install primary and dialog application servers.
Check and adapt the configuration of SAP application servers.
Perform failover and takeover tests.

Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds the
foundation for deploying a configuration of Db2 with HADR in Azure. Key elements that
need to be part of planning for IBM Db2 LUW (database part of SAP environment) are
listed in the following table:


Topic Short description

Define Azure resource groups Resource groups where you deploy VM, virtual network, Azure
Load Balancer, and other resources. Can be existing or new.

Virtual network / Subnet Where VMs for IBM Db2 and Azure Load Balancer are being
definition deployed. Can be existing or newly created.

Virtual machines hosting IBM Db2 LUW      VM size, storage, networking, IP address.

Virtual host name and virtual The virtual IP or host name that's used for connection of SAP
IP for IBM Db2 database application servers. db-virt-hostname, db-virt-ip.

Azure fencing Azure fencing or SBD fencing (highly recommended). Method to
avoid split-brain situations.

SBD VM SBD virtual machine size, storage, network.

Azure Load Balancer Usage of Standard (recommended), probe port for Db2 database
(our recommendation 62500) probe-port.

Name resolution How name resolution works in the environment. DNS service is
highly recommended. Local hosts file can be used.

For more information about Linux Pacemaker in Azure, see Set up Pacemaker on SUSE
Linux Enterprise Server in Azure.
) Important

For Db2 versions 11.5.6 and higher we highly recommend Integrated solution using
Pacemaker from IBM.

Integrated solution using Pacemaker .


Alternate or additional configurations available on Microsoft Azure .

Deployment on SUSE Linux


The resource agent for IBM Db2 LUW is included in SUSE Linux Enterprise Server for SAP
Applications. For the setup that's described in this document, you must use SUSE Linux
Enterprise Server for SAP Applications. The Azure Marketplace contains an image for SUSE
Linux Enterprise Server for SAP Applications 12 that you can use to deploy new Azure virtual
machines. Be aware of the various support or service models that are offered by SUSE
through the Azure Marketplace when you choose a VM image in the Azure VM
Marketplace.

Hosts: DNS updates


Make a list of all host names, including virtual host names, and update your DNS servers
to enable proper IP address to host-name resolution. If a DNS server doesn't exist or
you can't update and create DNS entries, you need to use the local host files of the
individual VMs that are participating in this scenario. If you're using host files entries,
make sure that the entries are applied to all VMs in the SAP system environment.
However, we recommend that you use your DNS service that, ideally, extends into Azure.

Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of
supported OS versions for Azure VMs and Db2 releases is available in SAP note
1928533 . The list of OS releases by individual Db2 release is available in the SAP
Product Availability Matrix. We highly recommend a minimum of SLES 12 SP4 because
of Azure-related performance improvements in this or later SUSE Linux versions.

1. Create or select a resource group.
2. Create or select a virtual network and subnet.
3. Choose a suitable deployment type for SAP virtual machines. Typically a virtual
machine scale set with flexible orchestration.
4. Create Virtual Machine 1.
a. Use SLES for SAP image in the Azure Marketplace.
b. Select the scale set, availability zone or availability set created in step 3.
5. Create Virtual Machine 2.
a. Use SLES for SAP image in the Azure Marketplace.
b. Select the scale set, availability zone or availability set created in step 3 (not the
same zone as in step 4).
6. Add data disks to the VMs, and then check the recommendation of a file system
setup in the article IBM Db2 Azure Virtual Machines DBMS deployment for SAP
workload.

Install the IBM Db2 LUW and SAP environment


Before you start the installation of an SAP environment based on IBM Db2 LUW, review
the following documentation:

Azure documentation
SAP documentation
IBM documentation

Links to this documentation are provided in the introductory section of this article.

Check the SAP installation manuals about installing NetWeaver-based applications on
IBM Db2 LUW.

You can find the guides on the SAP Help portal by using the SAP Installation Guide
Finder .

You can reduce the number of guides displayed in the portal by setting the following
filters:

I want to: "Install a new system"
My Database: "IBM Db2 for Linux, Unix, and Windows"
Additional filters for SAP NetWeaver versions, stack configuration, or operating
system

Installation hints for setting up IBM Db2 LUW with HADR


To set up the primary IBM Db2 LUW database instance:

Use the high availability or distributed option.
Install the SAP ASCS/ERS and Database instance.
Take a backup of the newly installed database.
) Important

Write down the "Database Communication port" that's set during installation. It
must be the same port number for both database instances.

To set up the Standby database server by using the SAP homogeneous system copy
procedure, execute these steps:

1. Select the System copy option > Target systems > Distributed > Database
instance.

2. As a copy method, select Homogeneous System so that you can use backup to
restore a backup on the standby server instance.

3. When you reach the exit step to restore the database for homogeneous system
copy, exit the installer. Restore the database from a backup of the primary host. All
subsequent installation phases have already been executed on the primary
database server.

4. Set up HADR for IBM Db2.

7 Note

For installation and configuration that's specific to Azure and Pacemaker:


During the installation procedure through SAP Software Provisioning
Manager, there is an explicit question about high availability for IBM Db2
LUW:

Do not select IBM Db2 pureScale.
Do not select Install IBM Tivoli System Automation for Multiplatforms.
Do not select Generate cluster configuration files.

When you use an SBD device for Linux Pacemaker, set the following Db2 HADR
parameters:

HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 300
HADR timeout value (HADR_TIMEOUT) = 60

When you use an Azure Pacemaker fencing agent, set the following parameters:

HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 900
HADR timeout value (HADR_TIMEOUT) = 60

We recommend the preceding parameters based on initial failover/takeover testing. It's
mandatory that you test for proper functionality of failover and takeover with these
parameter settings. Because individual configurations can vary, the parameters might
require adjustment.
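
As a sketch for the Azure fencing agent case, these parameters could be set as user
db2<sid> with database SID PTR (the SID used in this article); HADR must then be
restarted for the new values to take effect, starting the standby first:

Bash

# Minimal sketch - set the recommended HADR parameters (run as db2<sid> on both nodes)
db2 update db cfg for PTR using HADR_PEER_WINDOW 900
db2 update db cfg for PTR using HADR_TIMEOUT 60

# Restart HADR for the new values to take effect:
# stop HADR on both nodes, then start it on the standby node first
db2 stop hadr on db PTR
db2 start hadr on db PTR as standby
# ...and finally on the primary node
db2 start hadr on db PTR as primary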

) Important

Specific to IBM Db2 with HADR configuration with normal startup: The secondary
or standby database instance must be up and running before you can start the
primary database instance.

For demonstration purposes and the procedures described in this article, the database
SID is PTR.

IBM Db2 HADR check


After you've configured HADR and the status is PEER and CONNECTED on the primary
and standby nodes, perform the following check:

Bash

# Execute as user db2<sid>
db2pd -hadr -db <SID>

#Primary output:
# Database Member 0 -- Database PTR -- Active -- Up 1 days 01:51:38 -- Date
2019-02-06-15.35.28.505451
#
# HADR_ROLE = PRIMARY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 1
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.170561
(1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6137
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 13
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000025
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223713
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 374400
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_RECV_REPLAY_GAP(bytes) = 0
# PRIMARY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:40:25.000000
(1549467625)
# READS_ON_STANDBY_ENABLED = N

#Secondary output:
# Database Member 0 -- Database PTR -- Standby -- Up 1 days 01:46:43 -- Date
2019-02-06-15.38.25.644168
#
# HADR_ROLE = STANDBY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 0
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.205067
(1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6186
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 5
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000023
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223725
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 372480
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_RECV_REPLAY_GAP(bytes) = 155
# PRIMARY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:43:19.000000
(1549467799)
# READS_ON_STANDBY_ENABLED = N

Configure Azure Load Balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to set up a standard load balancer for the
high availability setup of the Db2 database.

Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create a frontend IP. Select the same virtual
network and subnet as your DB virtual machines.
2. Backend Pool: Create a backend pool and add the DB VMs.
3. Inbound rules: Create a load balancing rule. Follow the same steps for both
load balancing rules.

Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create health probe with the following details
Protocol: TCP
Port: [for example: 625<instance-no.>]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in Portal, isn't respected. So to control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It is currently not possible to set this property using Azure portal, so use
either the Azure CLI or PowerShell command.

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
another IP address for the VM, deploy a second NIC.

7 Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health

probes.

Create the Pacemaker cluster


To create a basic Pacemaker cluster for this IBM Db2 server, see Set up Pacemaker on
SUSE Linux Enterprise Server in Azure.

Db2 Pacemaker configuration


When you use Pacemaker for automatic failover in the event of a node failure, you need
to configure your Db2 instances and Pacemaker accordingly. This section describes this
type of configuration.

The following items are prefixed with either:

[A]: Applicable to all nodes
[1]: Applicable only to node 1
[2]: Applicable only to node 2

[A] Prerequisites for Pacemaker configuration:

Shut down both database servers as user db2<sid> with db2stop.
Change the shell environment for the db2<sid> user to /bin/ksh. We recommend that
you use the YaST tool.
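
If you prefer the command line over YaST, a sketch equivalent to the RHEL steps shown
earlier in this document (the ksh package name may differ by SLES release):

Bash

# Install the Korn shell and change the db2<sid> user's login shell
sudo zypper install ksh
sudo usermod -s /bin/ksh db2<sid>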

Pacemaker configuration
) Important

Recent testing revealed situations where netcat stops responding to requests due
to backlog and its limitation of handling only one connection. The netcat resource
stops listening to the Azure Load Balancer requests, and the floating IP becomes
unavailable. For existing Pacemaker clusters, we recommended in the past
replacing netcat with socat. Currently we recommend using azure-lb resource
agent, which is part of package resource-agents, with the following package
version requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-


4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-
4.3.0184.6ee15eb2-4.13.1.

Note that the change will require brief downtime.


For existing Pacemaker clusters, if the configuration was already changed to use
socat as described in Azure Load-Balancer Detection Hardening , there's no
requirement to switch immediately to the azure-lb resource agent.
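
To verify whether your nodes already meet the package version requirements, you
can query the installed version, for example:

Bash

# Check the installed resource-agents version on each node
rpm -q resource-agents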

1. [1] IBM Db2 HADR-specific Pacemaker configuration:

Bash

# Put Pacemaker into maintenance mode
sudo crm configure property maintenance-mode=true

2. [1] Create IBM Db2 resources:

Bash

# Replace the placeholder values with your instance name (db2ptr), database
# SID (PTR), and the virtual IP address of the Azure Load Balancer.
sudo crm configure primitive rsc_Db2_db2ptr_PTR db2 \
  params instance="db2ptr" dblist="PTR" \
  op start interval="0" timeout="130" \
  op stop interval="0" timeout="120" \
  op promote interval="0" timeout="120" \
  op demote interval="0" timeout="120" \
  op monitor interval="30" timeout="60" \
  op monitor interval="31" role="Master" timeout="60"

# Configure virtual IP - same as Azure Load Balancer IP
sudo crm configure primitive rsc_ip_db2ptr_PTR IPaddr2 \
  op monitor interval="10s" timeout="20s" \
  params ip="10.100.0.10"

# Configure probe port for Azure Load Balancer
sudo crm configure primitive rsc_nc_db2ptr_PTR azure-lb port=62500 \
  op monitor timeout=20s interval=10

sudo crm configure group g_ip_db2ptr_PTR rsc_ip_db2ptr_PTR rsc_nc_db2ptr_PTR

sudo crm configure ms msl_Db2_db2ptr_PTR rsc_Db2_db2ptr_PTR \
  meta target-role="Started" notify="true"

sudo crm configure colocation col_db2_db2ptr_PTR inf: g_ip_db2ptr_PTR:Started msl_Db2_db2ptr_PTR:Master

sudo crm configure order ord_db2_ip_db2ptr_PTR inf: msl_Db2_db2ptr_PTR:promote g_ip_db2ptr_PTR:start

sudo crm configure rsc_defaults resource-stickiness=1000
sudo crm configure rsc_defaults migration-threshold=5000

3. [1] Start IBM Db2 resources:

Put Pacemaker out of maintenance mode.

Bash

# Take Pacemaker out of maintenance mode - this starts IBM Db2
sudo crm configure property maintenance-mode=false

4. [1] Make sure that the cluster status is OK and that all of the resources are started.
It's not important which node the resources are running on.

Bash

sudo crm status

# 2 nodes configured
# 5 resources configured

# Online: [ azibmdb01 azibmdb02 ]

# Full list of resources:

# stonith-sbd (stonith:external/sbd): Started azibmdb02
# Resource Group: g_ip_db2ptr_PTR
#      rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
#      rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
# Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
#      Masters: [ azibmdb02 ]
#      Slaves: [ azibmdb01 ]

) Important

You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools.
If you use db2 commands such as db2stop, Pacemaker detects the action as a
resource failure. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker suspends monitoring resources, and
you can then use normal db2 administration commands.
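
For example, a minimal sketch of a maintenance window for the Db2 multi-state
resource, using the resource name from this guide:

Bash

# Suspend monitoring of the Db2 resource, run manual db2 administration, then resume
sudo crm resource maintenance msl_Db2_db2ptr_PTR on
# ... run normal db2 administration commands as db2<sid> ...
sudo crm resource maintenance msl_Db2_db2ptr_PTR off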

Make changes to SAP profiles to use the virtual IP for connection
To connect to the primary instance of the HADR configuration, the SAP application layer
needs to use the virtual IP address that you defined and configured for the Azure Load
Balancer. The following changes are required:

/sapmnt/<SID>/profile/DEFAULT.PFL

Bash

SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname

/sapmnt/<SID>/global/db6/db2cli.ini

Bash

Hostname=db-virt-hostname

Install primary and dialog application servers


When installing primary and dialog application servers against a Db2 HADR
configuration, use the virtual host name that you picked for the configuration.

If you performed the installation before you created the Db2 HADR configuration, make
the changes as described in the preceding section and as follows for SAP Java stacks.

ABAP+Java or Java stack systems JDBC URL check


Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool
is a graphical tool, you need to have X server installed:

1. Sign in to the primary application server of the J2EE instance and execute:

Bash

sudo /usr/sap/<SID>/<Instance>/j2ee/configtool/configtool.sh

2. In the left frame, choose security store.

3. In the right frame, choose the key jdbc/pool/<SAPSID>/url.

4. Change the host name in the JDBC URL to the virtual host name.

TEXT

jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0

5. Select Add.

6. To save your changes, select the disk icon at the upper left.

7. Close the configuration tool.

8. Restart the Java instance.
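
If you prefer the command line, one way to restart the Java instance is with
sapcontrol. A minimal sketch, assuming you run it as the <sid>adm user and
substitute your instance number:

Bash

# Restart the J2EE instance; <instance-number> is a placeholder
sapcontrol -nr <instance-number> -function RestartInstance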

Configure log archiving for HADR setup


To configure the Db2 log archiving for HADR setup, we recommend that you configure
both the primary and the standby database to have automatic log retrieval capability
from all log archive locations. Both the primary and standby database must be able to
retrieve log archive files from all the log archive locations to which either one of the
database instances might archive log files.

The log archiving is performed only by the primary database. If you change the HADR
roles of the database servers or if a failure occurs, the new primary database is
responsible for log archiving. If you've set up multiple log archive locations, your logs
might be archived twice. In the event of a local or remote catch-up, you might also have
to manually copy the archived logs from the old primary server to the active log location
of the new primary server.

We recommend configuring a common NFS share where logs are written from both
nodes. The NFS share has to be highly available.
You can use existing highly available NFS shares for transports or a profile directory. For
more information, see:

High availability for NFS on Azure VMs on SUSE Linux Enterprise Server.
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server
with Azure NetApp Files for SAP Applications.
Azure NetApp Files (to create NFS shares).
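
As a hedged sketch, pointing a database at a log archive location on such a shared
NFS mount might look like the following; the mount path is an assumption, and the
command must be run for both the primary and the standby database as user db2<sid>:

Bash

# Run as db2<sid> on both nodes; /mnt/sapdb2share is an assumed NFS mount path
db2 update db cfg for PTR using LOGARCHMETH1 DISK:/mnt/sapdb2share/log_archive/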

Test the cluster setup


This section describes how you can test your Db2 HADR setup. Every test assumes that
you're logged in as user root and the IBM Db2 primary is running on the azibmdb01
virtual machine.

The initial status for all test cases is shown here (check with crm_mon -r or crm status):

crm status is a snapshot of the Pacemaker status at execution time.
crm_mon -r shows continuous output of the Pacemaker status.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): Promoting azibmdb01
Slaves: [ azibmdb02 ]

The original status in an SAP system is documented in Transaction DBACOCKPIT >
Configuration > Overview.

Test takeover of IBM Db2

) Important

Before you start the test, make sure that:

Pacemaker doesn't have any failed actions (crm status).

There are no location constraints (leftovers of a migration test).

The IBM Db2 HADR synchronization is working. Check as user db2<sid>:

Bash

db2pd -hadr -db <DBSID>

Migrate the node that's running the primary Db2 database by executing the
following command:

Bash

crm resource migrate msl_Db2_db2ptr_PTR azibmdb02

After the migration is done, the crm status output looks like:

Bash

2 nodes configured
5 resources configured
Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Slaves: [ azibmdb01 ]

The original status in an SAP system is documented in Transaction DBACOCKPIT >
Configuration > Overview.

Resource migration with "crm resource migrate" creates location constraints, which
should be deleted. If they aren't deleted, the resource can't fail back, or you can
experience unwanted takeovers.

Migrate the resource back to azibmdb01 and clear the location constraints:

Bash

crm resource migrate msl_Db2_db2ptr_PTR azibmdb01


crm resource clear msl_Db2_db2ptr_PTR

crm resource migrate <res_name> <host>: Creates location constraints and can
cause issues with takeover
crm resource clear <res_name>: Clears location constraints
crm resource cleanup <res_name>: Clears all errors of the resource

Test SBD fencing


In this case, we test SBD fencing, which we recommend that you do when you use SUSE
Linux.

Bash

azibmdb01:~ # ps -ef|grep sbd


root 2374 1 0 Feb05 ? 00:00:17 sbd: inquisitor
root 2378 2374 0 Feb05 ? 00:00:40 sbd: watcher:
/dev/disk/by-id/scsi-36001405fbbaab35ee77412dacb77ae36 - slot: 0 - uuid:
27cad13a-0bce-4115-891f-43b22cfabe65
root 2379 2374 0 Feb05 ? 00:01:51 sbd: watcher: Pacemaker
root 2380 2374 0 Feb05 ? 00:00:18 sbd: watcher: Cluster

azibmdb01:~ # kill -9 2374

Cluster node azibmdb01 should be rebooted. The IBM Db2 primary HADR role moves
to azibmdb02. When azibmdb01 is back online, the Db2 instance assumes the role of
a secondary database instance.

If the Pacemaker service doesn't start automatically on the rebooted former primary, be
sure to start it manually with:

Bash

sudo service pacemaker start

Test a manual takeover


You can test a manual takeover by stopping the Pacemaker service on azibmdb01 node:

Bash

service pacemaker stop

The status on azibmdb02:

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started azibmdb02
Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

After the failover, you can start the service again on azibmdb01.

Bash

service pacemaker start

Kill the Db2 process on the node that runs the HADR
primary database
Bash

#Kill main db2 process - db2sysc


azibmdb01:~ # ps -ef|grep db2s
db2ptr 34598 34596 8 14:21 ? 00:00:07 db2sysc 0

azibmdb01:~ # kill -9 34598

The Db2 instance fails, and Pacemaker reports the following status:

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Slaves: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms

Pacemaker restarts the Db2 primary database instance on the same node, or it fails over
to the node that's running the secondary database instance and an error is reported.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms

Kill the Db2 process on the node that runs the secondary
database instance
Bash

azibmdb02:~ # ps -ef|grep db2s


db2ptr 65250 65248 0 Feb11 ? 00:09:27 db2sysc 0

azibmdb02:~ # kill -9 65250

The Db2 resource enters a failed state, and an error is reported:

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started azibmdb01
Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb02
Masters: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms

The Db2 instance is then restarted in the secondary role that it was previously assigned.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms

Stop the database via db2stop force on the node that runs the HADR primary
database instance
Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started azibmdb01
Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

As user db2<sid>, execute the command db2stop force:

Bash

azibmdb01:~ # su - db2ptr
azibmdb01:db2ptr> db2stop force

The failure is detected:

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb01


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb01
Slaves: [ azibmdb02 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=201,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms

The Db2 HADR secondary database instance is promoted into the primary role:

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:


stonith-sbd (stonith:external/sbd): Started azibmdb01
Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

Failed Actions:
* rsc_Db2_db2ptr_PTR_start_0 on azibmdb01 'unknown error' (1): call=205,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms

Crash the VM with restart on the node that runs the HADR primary database
instance
Bash

#Linux kernel panic - with OS restart


azibmdb01:~ # echo b > /proc/sysrq-trigger

Pacemaker promotes the secondary instance to the primary instance role. The old
primary instance moves into the secondary role after the VM and all services are
fully restored following the reboot.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

Crash the VM that runs the HADR primary database instance with "halt"

Bash

# Linux kernel panic - halts the OS ('o' powers the system off; 'b' would trigger a reboot instead)
azibmdb01:~ # echo o > /proc/sysrq-trigger

In such a case, Pacemaker detects that the node that's running the primary database
instance isn't responding.

Bash

2 nodes configured
5 resources configured

Node azibmdb01: UNCLEAN (online)


Online: [ azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
Slaves: [ azibmdb02 ]

The next step is to check for a split-brain situation. After the surviving node has
determined that the node that last ran the primary database instance is down, a
failover of resources is executed.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Stopped: [ azibmdb01 ]

When the node is halted, the failed node has to be restarted via Azure management
tools (the Azure portal, PowerShell, or the Azure CLI). After the failed node is back
online, it starts the Db2 instance in the secondary role.

Bash

2 nodes configured
5 resources configured

Online: [ azibmdb01 azibmdb02 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started azibmdb02


Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
Slaves: [ azibmdb01 ]

Next steps
High-availability architecture and scenarios for SAP NetWeaver
Set up Pacemaker on SUSE Linux Enterprise Server in Azure
High availability for SAP NetWeaver on
VMs on RHEL with NFS on Azure Files
Article • 02/05/2024

This article describes how to deploy and configure virtual machines (VMs), install the
cluster framework, and install a high-availability (HA) SAP NetWeaver system by using
NFS on Azure Files. The example configurations use VMs that run on Red Hat Enterprise
Linux (RHEL).

Prerequisites
Azure Files documentation
SAP Note 1928533 , which has:
A list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
7.x.
SAP Note 2772999 has recommended OS settings for Red Hat Enterprise Linux
8.x.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in Pacemaker cluster
General RHEL documentation:
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP NetWeaver with Standalone Resources in RHEL
7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

Overview
To deploy the SAP NetWeaver application layer, you need shared directories like
/sapmnt/SID and /usr/sap/trans in the environment. Additionally, when you deploy an

HA SAP system, you need to protect and make highly available file systems like
/sapmnt/SID and /usr/sap/SID/ASCS .

Now you can place these file systems on NFS on Azure Files. NFS on Azure Files is an HA
storage solution. This solution offers synchronous zone-redundant storage (ZRS) and is
suitable for SAP ASCS/ERS instances deployed across availability zones. You still need a
Pacemaker cluster to protect single point of failure components like SAP NetWeaver
central services (ASCS/SCS).

The example configurations and installation commands use the following instance
numbers:

Instance name                          Instance number
ABAP SAP central services (ASCS)       00
ERS                                    01
Primary application server (PAS)       02
Additional application server (AAS)    03
SAP system identifier                  NW1


Prepare the infrastructure
Azure Marketplace contains images qualified for SAP with the High Availability add-on,
which you can use to deploy new VMs by using various versions of Red Hat.

Deploy Linux VMs manually via the Azure


portal
This document assumes that you already deployed an Azure virtual network, subnet, and
resource group.

Deploy VMs for SAP ASCS, ERS and Application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.
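
If you prefer to script the VM deployment, a minimal Azure CLI sketch for one
cluster node might look like the following; the image URN, VM size, and names are
assumptions that you should verify against the SAP certification lists:

Bash

# One cluster node in availability zone 1; verify the image URN for your RHEL release
az vm create --resource-group <resource-group> --name sap-cl1 \
  --image RedHat:RHEL-SAP-HA:8_4:latest --size Standard_E4s_v3 \
  --vnet-name <vnet-name> --subnet <subnet-name> --zone 1 \
  --admin-username azureuser --ssh-key-values ~/.ssh/id_rsa.pub
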
Configure Azure load balancer
During VM configuration, you have the option to create or select an existing load
balancer in the networking section. Follow the steps below to configure a standard
load balancer for the high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points.

1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.

Frontend IP address: Select the frontend IP
Backend pool: Select the backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with these details (applies to both
ASCS and ERS)
  Protocol: TCP
  Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.> for ERS]
  Interval: 5
  Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It's currently not possible to set this property using the Azure portal, so use
either the Azure CLI or a PowerShell command.
) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Load Balancer limitations. If you need another
IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard instance of Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Load Balancer. Enabling
TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health

probes.

Deploy Azure Files storage account and NFS shares


NFS on Azure Files runs on top of Azure Files premium storage. Before you set up NFS
on Azure Files, see How to create an NFS share.

There are two options for redundancy within an Azure region:

Locally redundant storage (LRS), which offers local, in-zone synchronous data
replication.
Zone-redundant storage (ZRS), which replicates your data synchronously across
the three availability zones in the region.

Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region under Premium
Files Storage. If your scenario benefits from ZRS, verify that premium file shares with
ZRS are supported in your Azure region.
We recommend that you access your Azure Storage account through an Azure private
endpoint. Make sure to deploy the Azure Files storage account endpoint and the VMs,
where you need to mount the NFS shares, in the same Azure virtual network or peered
Azure virtual networks.

1. Deploy an Azure Files storage account named sapafsnfs . In this example, we use
ZRS. If you're not familiar with the process, see Create a storage account for the
Azure portal.

2. On the Basics tab, use these settings:


a. For Storage account name, enter sapafsnfs .
b. For Performance, select Premium.
c. For Premium account type, select FileStorage.
d. For Replication, select zone redundancy (ZRS).

3. Select Next.

4. On the Advanced tab, clear Require secure transfer for REST API Operations. If
you don't clear this option, you can't mount the NFS share to your VM. The mount
operation will time out.

5. Select Next.

6. In the Networking section, configure these settings:


a. Under Networking connectivity, for Connectivity method, select Private
endpoint.
b. Under Private endpoint, select Add private endpoint.

7. On the Create private endpoint pane, select your Subscription, Resource group,
and Location. For Name, enter sapafsnfs_pe . For Storage sub-resource, select file.
Under Networking, for Virtual network, select the virtual network and subnet to
use. Again, you can use the virtual network where your SAP VMs are or a peered
virtual network. Under Private DNS integration, accept the default option Yes for
Integrate with private DNS zone. Make sure to select your Private DNS Zone.
Select OK.

8. On the Networking tab again, select Next.

9. On the Data protection tab, keep all the default settings.

10. Select Review + create to validate your configuration.

11. Wait for the validation to finish. Fix any issues before you continue.

12. On the Review + create tab, select Create.


Next, deploy the NFS shares in the storage account you created. In this example, there
are two NFS shares, sapnw1 and saptrans .

1. Sign in to the Azure portal .


2. Select or search for Storage accounts.
3. On the Storage accounts page, select sapafsnfs.
4. On the resource menu for sapafsnfs, under Data storage, select File shares.
5. On the File shares page, select File share.
a. For Name, enter sapnw1 (repeat these steps afterward for saptrans ).
b. Select an appropriate share size. For example, 128 GB. Consider the size of the
data stored on the share and IOPS and throughput requirements. For more
information, see Azure file share targets.
c. Select NFS as the protocol.
d. Select No root Squash. Otherwise, when you mount the shares on your VMs,
you can't see the file owner or group.

) Important

The preceding share size is only an example. Make sure to size your shares
appropriately. Size is based not only on the size of the data stored on the share
but also on the requirements for IOPS and throughput. For more information,
see Azure file share targets.

The SAP file systems that don't need to be mounted via NFS can also be deployed on
Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.
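
If you'd rather script the storage deployment than click through the portal, a
minimal Azure CLI sketch might look like the following; names, region, and quota
are assumptions, and the private endpoint setup is omitted:

Bash

# Premium FileStorage account with ZRS; secure transfer must be disabled for NFS
az storage account create --name sapafsnfs --resource-group <resource-group> \
  --location <region> --sku Premium_ZRS --kind FileStorage --https-only false

# NFS share with root squash disabled, as required by the steps above
az storage share-rm create --storage-account sapafsnfs --name sapnw1 \
  --quota 128 --enabled-protocols NFS --root-squash NoRootSquash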

Important considerations for NFS on Azure Files shares


When you plan your deployment with NFS on Azure Files, consider the following
important points:

The minimum share size is 100 GiB. You only pay for the capacity of the
provisioned shares.
Size your NFS shares not only based on capacity requirements but also on IOPS
and throughput requirements. For more information, see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues with NFS
on Azure Files, see Troubleshoot Azure file share performance.
For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on NFS on
Azure Files.
If your SAP system has a heavy batch jobs load, you might have millions of job
logs. If the SAP batch job logs are stored in the file system, pay special attention to
the sizing of the sapmnt share. As of SAP_BASIS 7.52, the default behavior for the
batch job logs is to be stored in the database. For more information, see Job log in
the database .
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces, or saptrans .
Don't use the saptrans share for any other activity, such as interfaces, or sapmnt .
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also storage account performance scale targets. Be careful not
to exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than five SAP systems in a single
storage account. This guideline helps avoid exceeding the storage account limits
and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for nonproduction and production SAP
systems in the same storage account.
We recommend that you deploy on RHEL 8.4 or higher to benefit from NFS client
improvements.
Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions
automatically redirect to a healthy zone. You don't have to remount the NFS shares
on your VMs.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Files doesn't currently support automatic cross-region replication for
disaster recovery scenarios.

Set up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create a Pacemaker cluster


Follow the steps in Set up Pacemaker on Red Hat Enterprise Linux in Azure to create a
basic Pacemaker cluster for this (A)SCS server.

Prepare for an SAP NetWeaver installation


The following items are prefixed with:

[A]: Applicable to all nodes


[1]: Only applicable to node 1
[2]: Only applicable to node 2

1. [A] Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.

Bash

# IP address of cluster node 1


10.90.90.7 sap-cl1
# IP address of cluster node 2
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP
Netweaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP
Netweaver ERS
10.90.90.9 sapers

2. [A] Install the NFS client and other requirements.

Bash

sudo yum -y install nfs-utils resource-agents resource-agents-sap

3. [1] Create the SAP directories on the NFS share.


Mount the NFS share sapnw1 temporarily on one of the VMs, and create the SAP
directories that will be used as nested mount points.

Bash

# mount temporarily the volume


sudo mkdir -p /saptmp
sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1
/saptmp -o noresvport,vers=4,minorversion=1,sec=sys
# create the SAP directories
sudo cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1ascs
sudo mkdir -p usrsapNW1ers
sudo mkdir -p usrsapNW1sys
# unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

4. [A] Create the shared directories.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS01

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS01

5. [A] Check the version of resource-agents-sap .

Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7 .

Bash

sudo yum info resource-agents-sap

6. [A] Add mount entries.

Bash

vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs
noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1
nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/
/usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a

7. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make
sure that you do not set a value that is too big. You can check the
SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart

8. [A] Configure RHEL.

Configure RHEL as described in SAP Note 2002167 for RHEL 7.x, SAP Note
2772999 for RHEL 8.x, or SAP Note 3108316 for RHEL 9.x.

Install SAP NetWeaver ASCS/ERS


1. [1] Configure the cluster default properties.

Bash

# If using RHEL 7.x


pcs resource defaults resource-stickiness=1
pcs resource defaults migration-threshold=3
# If using RHEL 8.x or later
pcs resource defaults update resource-stickiness=1
pcs resource defaults update migration-threshold=3

2. [1] Create a virtual IP resource and health probe for the ASCS instance.

Bash

sudo pcs node standby sap-cl2

sudo pcs resource create fs_NW1_ASCS Filesystem \
  device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' \
  directory='/usr/sap/NW1/ASCS00' fstype='nfs' force_unmount=safe \
  options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW1_ASCS

sudo pcs resource create vip_NW1_ASCS IPaddr2 \
  ip=10.90.90.10 \
  --group g-NW1_ASCS

sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \
  --group g-NW1_ASCS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node sap-cl2: standby


# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1

3. [1] Install SAP NetWeaver ASCS.

Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, sapascs and 10.90.90.10, and the instance number that
you used for the probe of the load balancer, for example, 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .
Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin


SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting
the owner and group of the ASCS00 folder and retry.

Bash

sudo chown nw1adm /usr/sap/NW1/ASCS00
sudo chgrp sapsys /usr/sap/NW1/ASCS00

4. [1] Create a virtual IP resource and health probe for the ERS instance.

Bash

sudo pcs node unstandby sap-cl2
sudo pcs node standby sap-cl1

sudo pcs resource create fs_NW1_AERS Filesystem \
  device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' \
  directory='/usr/sap/NW1/ERS01' fstype='nfs' force_unmount=safe \
  options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW1_AERS

sudo pcs resource create vip_NW1_AERS IPaddr2 \
  ip=10.90.90.9 \
  --group g-NW1_AERS

sudo pcs resource create nc_NW1_AERS azure-lb port=62101 \
  --group g-NW1_AERS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node sap-cl1: standby


# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl2

5. [2] Install SAP NetWeaver ERS.

Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, sapers and 10.90.90.9, and the instance number that you
used for the probe of the load balancer, for example, 01.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin


SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/NW1/ERS01, try setting the
owner and group of the ERS01 folder and retry.

Bash

sudo chown nw1adm /usr/sap/NW1/ERS01
sudo chgrp sapsys /usr/sap/NW1/ERS01

6. [1] Adapt the ASCS/SCS and ERS instance profiles.

ASCS/SCS profile:
Bash

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs

# Change the restart command to a start command
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .

ERS profile:

Bash

sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers

# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Remove Autostart from the ERS profile
# Autostart = 1

7. [A] Configure Keep Alive.

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using
ENSA1. Change the Linux system keepalive settings on all SAP servers for both
ENSA1 and ENSA2. For more information, see SAP Note 1410736 .

Bash

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
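
Note that a sysctl value set at the command line doesn't survive a reboot. One way
to persist the setting, as a sketch with an arbitrary drop-in file name:

Bash

# Persist the keepalive setting across reboots; the file name is an arbitrary choice
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/98-sap-keepalive.conf
sudo sysctl --system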

8. [A] Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash

sudo vi /usr/sap/sapservices

# Depending on whether the SAP Startup framework is integrated with


systemd, you will observe one of the two entries on the ASCS node. You
should comment out the line(s).
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export
LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_sapascs -D -u nw1adm
# systemctl --no-ask-password start SAPNW1_00 # sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_sapascs

# Depending on whether the SAP Startup framework is integrated with


systemd, you will observe one of the two entries on the ERS node. You
should comment out the line(s).
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export
LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv
pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers -D -u nw1adm
# systemctl --no-ask-password start SAPNW1_00 # sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_ERS01_sapers

) Important

With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of a SAP kernel with integrated systemd based SAP Startup
Framework support will always result in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel which
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one which is systemd controlled.

When utilizing Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications will be necessary to ensure compatibility between the
SAPInstance resource agent and the new systemd-based SAP startup
framework. So once the SAP application server instances have been installed or
switched to a systemd enabled SAP Kernel as per SAP Note 3115048 , the
steps mentioned in Red Hat KBA 6884531 must be completed successfully
on all cluster nodes.

9. [1] Create the SAP cluster resources.

Depending on whether you're running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

If you use the enqueue server 2 architecture (ENSA2 ), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as
shown here:

ENSA1

Bash

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
  InstanceName=NW1_ASCS00_sapascs \
  START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW1_ASCS

sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000

sudo pcs resource create rsc_sap_NW1_ERS01 SAPInstance \
  InstanceName=NW1_ERS01_sapers \
  START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false

sudo pcs node unstandby sap-cl1
sudo pcs property set maintenance-mode=false

If you're upgrading from an older version and switching to enqueue server 2, see
SAP Note 2641322 .
7 Note

The timeouts in the preceding configuration are only examples and might
need to be adapted to the specific SAP setup.

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Online: [ sap-cl1 sap-cl2 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1

10. [1] Run the following step to configure priority-fencing-delay (applicable only as
of pacemaker-2.0.4-6.el8 or higher).

7 Note

If you have a two-node cluster, you have the option to configure the
priority-fencing-delay cluster property. This property introduces additional

delay in fencing a node that has higher total resource priority when a split-
brain scenario occurs. For more information, see Can Pacemaker fence the
cluster node with the fewest running resources? .
The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8
version or higher. If you set up priority-fencing-delay on an existing cluster,
make sure to clear the pcmk_delay_max setting in the fencing device.

Bash

sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10
sudo pcs property set priority-fencing-delay=15s

11. [A] Add firewall rules for ASCS and ERS on both nodes.

Bash

# Probe Port of ASCS


sudo firewall-cmd --zone=public --add-port=
{62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port=
{62000,3200,3600,3900,8100,50013,50014,50016}/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=
{62101,3201,3301,50113,50114,50116}/tcp --permanent
sudo firewall-cmd --zone=public --add-port=
{62101,3201,3301,50113,50114,50116}/tcp
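
To confirm that the rules are active on each node, you can list the open ports, for
example:

Bash

# Verify the runtime and the permanent rules
sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=public --list-ports --permanent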

SAP NetWeaver application server preparation


Some databases require that the database instance installation runs on an application
server. Prepare the application server VMs to be able to use them in these cases.

The following steps assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring
hostname resolution) aren't needed.

The following items are prefixed with:

[A]: Applicable to both PAS and AAS


[P]: Only applicable to PAS
[S]: Only applicable to AAS

1. [A] Set up hostname resolution. You can either use a DNS server or modify the
/etc/hosts file on all nodes. This example shows how to use the /etc/hosts file.

Replace the IP address and the hostname in the following commands:


Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.

Bash

10.90.90.7 sap-cl1
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP
Netweaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP
Netweaver ERS
10.90.90.9 sapers
10.90.90.12 sapa01
10.90.90.13 sapa02

2. [A] Create the sapmnt directory.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

3. [A] Install the NFS client and other requirements.

Bash

sudo yum -y install nfs-utils uuidd

4. [A] Add mount entries.

Bash

vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs
noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1
nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a

5. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make
sure that you do not set a value that is too big. You can check the
SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart

Install the database


In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux. For
a list of supported databases, see SAP Note 1928533 .

Install the SAP NetWeaver database instance as a root by using a virtual hostname that
maps to the IP address of the load balancer front-end configuration for the database.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a nonroot user
to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation

Follow these steps to install an SAP application server.

1. [A] Prepare the application server.

Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.

2. [A] Install the SAP NetWeaver application server.

Install a primary or additional SAP NetWeaver applications server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update the SAP HANA secure store.

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries as <sapsid>adm .

Bash

hdbuserstore List

All entries should be listed and look similar to:

Bash

DATA FILE       : /home/nw1adm/.hdb/sapa01/SSFS_HDB.DAT
KEY FILE        : /home/nw1adm/.hdb/sapa01/SSFS_HDB.KEY

KEY DEFAULT
  ENV : 10.90.90.5:30313
  USER: SAPABAP1
  DATABASE: NW1

In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual hostname of the load balancer.
Make sure to use the same port and database name. For example, use 30313 and
NW1 in the sample output.

Bash

su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP
schema>

Test cluster setup


Thoroughly test your Pacemaker cluster. For more information, see Execute the typical
failover tests.

Next steps
To deploy a cost-optimization scenario where the PAS and AAS instance is
deployed with SAP NetWeaver HA cluster on RHEL, see Install SAP dialog instance
with SAP ASCS/SCS high-availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High availability of SAP HANA on Azure Virtual Machines.
Azure Virtual Machines HA for SAP
NetWeaver on RHEL with Azure NetApp
Files for SAP applications
Article • 01/19/2024

This article describes how to deploy virtual machines (VMs), configure the VMs, install
the cluster framework, and install a highly available SAP NetWeaver 7.50 system by
using Azure NetApp Files. In the example configurations and installation commands, the
ASCS instance is number 00, the ERS instance is number 01, the Primary Application
instance (PAS) is 02, and the Application instance (AAS) is 03. The SAP System ID QAS is
used.

The database layer isn't covered in detail in this article.

Prerequisites
Read the following SAP Notes and papers first:

Azure NetApp Files documentation

SAP Note 1928533 , which has:


A list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.

SAP Note 2015553 lists prerequisites for SAP-supported SAP software


deployments in Azure.

SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.

SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.

SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.

SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.

SAP Community WIKI has all required SAP Notes for Linux.

Azure Virtual Machines planning and implementation for SAP on Linux

Azure Virtual Machines deployment for SAP on Linux

Azure Virtual Machines DBMS deployment for SAP on Linux

SAP NetWeaver in Pacemaker cluster

General Red Hat Enterprise Linux (RHEL) documentation:


High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP NetWeaver with standalone resources in RHEL
7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL

Azure-specific RHEL documentation:


Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

NetApp NFS Best Practices

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until
now, achieving HA on Red Hat Linux required building a separate highly available
GlusterFS cluster.

Now it's possible to achieve SAP NetWeaver HA by using shared storage deployed on
Azure NetApp Files. Using Azure NetApp Files for shared storage eliminates the need for
more GlusterFS clusters. Pacemaker is still needed for HA of the SAP NetWeaver central
services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Azure Load Balancer
Standard. The configuration here shows a load balancer with a:

Front-end IP address 192.168.14.9 for ASCS.


Front-end IP address 192.168.14.10 for ERS.
Probe port 62000 for ASCS.
Probe port 62101 for ERS.

Set up the Azure NetApp Files infrastructure


SAP NetWeaver requires shared storage for the transport and profile directory. Before
you proceed with the setup for Azure NetApp Files infrastructure, familiarize yourself
with the Azure NetApp Files documentation. Check if your selected Azure region offers
Azure NetApp Files. For the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files availability by Azure region .

Azure NetApp Files are available in several Azure regions .

Deploy Azure NetApp Files resources


The steps assume that you already deployed an Azure virtual network. The Azure
NetApp Files resources and the VMs where the Azure NetApp Files resources will be
mounted must be deployed in the same Azure virtual network or in peered Azure
virtual networks.

1. Create the Azure NetApp Files account in the selected Azure region by following
the instructions to create an Azure NetApp Files account.

2. Set up an Azure NetApp Files capacity pool by following the instructions on how to
set up an Azure NetApp Files capacity pool. The SAP NetWeaver architecture
presented in this article uses a single Azure NetApp Files capacity pool, Premium
SKU. We recommend the Azure NetApp Files Premium SKU for the SAP NetWeaver
application workload on Azure.

3. Delegate a subnet to Azure NetApp Files as described in the instructions on how to
delegate a subnet to Azure NetApp Files.

4. Deploy Azure NetApp Files volumes by following the instructions to create a
volume for Azure NetApp Files. Deploy the volumes in the designated Azure
NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned
automatically. The Azure NetApp Files resources and the Azure VMs must be in the
same Azure virtual network or in peered Azure virtual networks. In this example, we
use two Azure NetApp Files volumes: sapQAS and transSAP. The file paths that are
mounted to the corresponding mount points are /usrsapqas/sapmntQAS,
/usrsapqas/usrsapQASascs, and so on, as listed here:
a. Volume sapQAS (nfs://192.168.24.5/usrsapqas/sapmntQAS)
b. Volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASascs)
c. Volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASsys)
d. Volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASers)
e. Volume transSAP (nfs://192.168.24.4/transSAP)
f. Volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASpas)
g. Volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to
demonstrate how you can use Azure NetApp Files. The SAP file systems that don't need
to be mounted via NFS can also be deployed as Azure disk storage. In this example, a-e
must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02 and
/usr/sap/QAS/D03) could be deployed as Azure disk storage.

Important considerations
When you consider Azure NetApp Files for the SAP NetWeaver on RHEL HA architecture,
be aware of the following important considerations:

The minimum capacity pool size is 4 TiB. You can increase the capacity pool
size in 1-TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all VMs, where Azure NetApp Files volumes will be
mounted, must be in the same Azure virtual network or in peered virtual networks
in the same region. Azure NetApp Files access over virtual network peering in the
same region is supported now. Azure NetApp Files access over global peering isn't
supported yet.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level. For more information, see
Service levels for Azure NetApp Files. When you size the SAP Azure NetApp Files
volumes, make sure that the resulting throughput meets the application
requirements (see the worked example after this list).
Azure NetApp Files offers export policy. You can control the allowed clients and
the access type (like Read/Write and Read Only).
The Azure NetApp Files feature isn't zone aware yet. Currently, the Azure NetApp
Files feature isn't deployed in all availability zones in an Azure region. Be aware of
the potential latency implications in some Azure regions.
You can deploy Azure NetApp Files volumes as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).
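As a rough sizing illustration (assuming the published per-TiB limits at the time of
writing, where the Premium service level provides 64 MiB/s of throughput per 1 TiB of
volume quota): a volume with a 1-TiB quota would yield about 64 MiB/s, and doubling
the quota to 2 TiB would roughly double the available throughput to about 128 MiB/s.
Verify the current numbers in Service levels for Azure NetApp Files before sizing.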

Prepare the infrastructure


Azure Marketplace contains images qualified for SAP with the High Availability add-on,
which you can use to deploy new VMs by using various versions of Red Hat.
Deploy Linux VMs manually via the Azure
portal
This document assumes that you already deployed an Azure virtual network, subnet, and
resource group.

Deploy VMs for SAP ASCS, ERS and Application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.

Configure Azure load balancer


During VM configuration, you can create a new load balancer or select an existing one
in the networking section. Follow the steps below to configure a standard load balancer
for the high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create two front-end IP addresses, one for ASCS
and another for ERS. Select the same virtual network and subnet as your
ASCS/ERS virtual machines.
2. Backend Pool: Create a back-end pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load-balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load-balancing rules.

Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the following details (applies to
both ASCS and ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.> for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to 2.
It's currently not possible to set this property by using the Azure portal, so
use either the Azure CLI or PowerShell.
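As an illustration, a minimal Azure CLI sketch for setting the probe threshold; the
resource group, load balancer, and probe names are placeholders, and the
--probe-threshold parameter assumes a recent Azure CLI version that exposes it:

Bash

# Hypothetical resource names; adjust to your environment.
az network lb probe update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --probe-threshold 2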

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
more IP addresses for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) standard load balancer, there's no outbound internet
connectivity unless more configuration is performed to allow routing to public
endpoints. For more information on how to achieve outbound connectivity, see
Public endpoint connectivity for VMs by using Azure Standard Load Balancer in
SAP high-availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.
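A minimal sketch to apply this setting immediately and persist it across reboots (the
file name under /etc/sysctl.d is an arbitrary example):

Bash

# Disable TCP timestamps now
sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist the setting across reboots (example file name)
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-lb-health-probe.conf
sudo sysctl --system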

Disable ID mapping (if you use NFSv4.1)


The instructions in this section are only applicable if you're using Azure NetApp Files
volumes with the NFSv4.1 protocol. Perform the configuration on all VMs where Azure
NetApp Files NFSv4.1 volumes will be mounted.
1. Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com , and the
mapping is set to nobody.

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the
default domain configuration on Azure NetApp Files: defaultv4iddomain.com . If
there's a mismatch between the domain configuration on the NFS client (that is,
the VM) and the NFS server (that is, the Azure NetApp Files configuration), then
the permissions for files on Azure NetApp Files volumes that are mounted on the
VMs display as nobody .

Bash

sudo cat /etc/idmapd.conf

# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
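If you had to correct the Domain value in /etc/idmapd.conf, a sketch like the
following clears the cached ID mappings so the change takes effect (this assumes the
nfsidmap utility from the nfs-utils package is installed):

Bash

# Clear the kernel ID-mapping cache after editing /etc/idmapd.conf
sudo nfsidmap -c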

The following [A] prefix applies to all VMs where Azure NetApp Files NFSv4.1 volumes
will be mounted, including the PAS and AAS servers.

2. [A] Verify nfs4_disable_idmapping . It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, run the mount command. You
won't be able to manually create the directory under /sys/module because access
is reserved for the kernel and drivers.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 192.168.24.5:/sapQAS /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
Set up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create a Pacemaker cluster


Follow the steps in Set up Pacemaker on Red Hat Enterprise Linux in Azure to create a
basic Pacemaker cluster for this (A)SCS server.

Prepare for the SAP NetWeaver installation


The following items are prefixed with either:

[A]: Applicable to all nodes
[1]: Only applicable to node 1
[2]: Only applicable to node 2

1. [A] Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.

Bash

# IP address of cluster node 1
192.168.14.5 anftstsapcl1
# IP address of cluster node 2
192.168.14.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
192.168.14.10 anftstsapers
2. [1] Create SAP directories in the Azure NetApp Files volume. Mount the Azure
NetApp Files volume temporarily on one of the VMs and create the SAP directories
(file paths).

Bash

# Mount the volume temporarily
sudo mkdir -p /saptmp

# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 192.168.24.5:/sapQAS /saptmp

# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp

# Create the SAP directories
cd /saptmp
sudo mkdir -p sapmntQAS
sudo mkdir -p usrsapQASascs
sudo mkdir -p usrsapQASers
sudo mkdir -p usrsapQASsys
sudo mkdir -p usrsapQASpas
sudo mkdir -p usrsapQASaas

# Unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

3. [A] Create the shared directories.

Bash

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

4. [A] Install the NFS client and other requirements.


Bash

sudo yum -y install nfs-utils resource-agents resource-agents-sap

5. [A] Check the version of resource-agents-sap .

Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7 .

Bash

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache
fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface
with
# : Pacemaker to allow SAP instances to be managed in a
cluster
# : environment.

6. [A] Add mount entries.

If you use NFSv3:

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
If you use NFSv4.1:

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys

7 Note

Make sure to match the NFS protocol version of the Azure NetApp Files
volumes when you mount the volumes. If the Azure NetApp Files volumes are
created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the
Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the
instructions to disable ID mapping and make sure to use the corresponding
NFSv4.1 configuration. In this example, the Azure NetApp Files volumes were
created as NFSv3 volumes.

Mount the new shares.

Bash

sudo mount -a
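After mounting, you can optionally confirm that the shares use the intended NFS
protocol version; nfsstat -m (from the nfs-utils package) prints the negotiated mount
options, including vers=:

Bash

# Show mounted NFS file systems and their negotiated options (look for vers=3 or vers=4.1)
nfsstat -m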

7. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by VM size. Make sure that you do
# not set a value that is too big. You can check the SWAP space with the
# command swapon.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart
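Optionally, verify that the swap file is active after the agent restart:

Bash

# Confirm the swap file created by the agent is in use
swapon --show
free -h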

8. [A] Perform RHEL OS configuration.

Based on the RHEL version, perform the configuration mentioned in SAP Note
2002167 , 2772999 , or 3108316 .

Install SAP NetWeaver ASCS/ERS


1. [1] Configure cluster default properties.

Bash

pcs resource defaults resource-stickiness=1


pcs resource defaults migration-threshold=3

2. [1] Create a virtual IP resource and health probe for the ASCS instance.

Bash

sudo pcs node standby anftstsapcl2

# If using NFSv3
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-QAS_ASCS

# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=105 \
  --group g-QAS_ASCS

sudo pcs resource create vip_QAS_ASCS IPaddr2 \
  ip=192.168.14.9 \
  --group g-QAS_ASCS

sudo pcs resource create nc_QAS_ASCS azure-lb port=62000 \
  --group g-QAS_ASCS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node anftstsapcl2: standby


# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started
anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started
anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started
anftstsapcl1

3. [1] Install SAP NetWeaver ASCS.

Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, anftstsapvh, 192.168.14.9, and the instance number that
you used for the probe of the load balancer, for example, 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the
owner and group of the ASCS00 folder and retry.

Bash
sudo chown qasadm /usr/sap/QAS/ASCS00
sudo chgrp sapsys /usr/sap/QAS/ASCS00

4. [1] Create a virtual IP resource and health probe for the ERS instance.

Bash

sudo pcs node unstandby anftstsapcl2


sudo pcs node standby anftstsapcl1

# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-QAS_AERS

# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=105 \
  --group g-QAS_AERS

sudo pcs resource create vip_QAS_AERS IPaddr2 \
  ip=192.168.14.10 \
  --group g-QAS_AERS

sudo pcs resource create nc_QAS_AERS azure-lb port=62101 \
  --group g-QAS_AERS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node anftstsapcl1: standby


# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started
anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started
anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started
anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started
anftstsapcl2
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started
anftstsapcl2
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started
anftstsapcl2

5. [2] Install SAP NetWeaver ERS.

Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, anftstsapers, 192.168.14.10, and the instance number that
you used for the probe of the load balancer, for example, 01.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the
owner and group of the ERS01 folder and retry.

Bash

sudo chown qasadm /usr/sap/QAS/ERS01
sudo chgrp sapsys /usr/sap/QAS/ERS01

6. [1] Adapt the ASCS/SCS and ERS instance profiles.

ASCS/SCS profile

Bash

sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .

ERS profile

Bash

sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

7. [A] Configure Keep Alive.

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you use ENSA1,
and change the Linux system keepalive settings on all SAP servers for both
ENSA1/ENSA2. For more information, see SAP Note 1410736 .

Bash

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
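The sysctl command above changes the setting only until the next reboot. A minimal
sketch to persist it (the file name under /etc/sysctl.d is an arbitrary example):

Bash

# Persist the keepalive setting across reboots
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/96-sap-keepalive.conf
sudo sysctl --system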

8. [A] Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.

Bash

sudo vi /usr/sap/sapservices

# Depending on whether the SAP Startup framework is integrated with systemd,
# you will observe one of the two entries on the ASCS node. You should comment
# out the line(s).
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm
# systemctl --no-ask-password start SAPQAS_00 # sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh

# Depending on whether the SAP Startup framework is integrated with systemd,
# you will observe one of the two entries on the ERS node. You should comment
# out the line(s).
# LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
# systemctl --no-ask-password start SAPQAS_01 # sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers

) Important

With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of a SAP kernel with integrated systemd based SAP Startup
Framework support will always result in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel which
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one which is systemd controlled.

When utilizing Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications are necessary to ensure compatibility between the SAPInstance
resource agent and the new systemd-based SAP startup framework. So once the
SAP application server instances have been installed or switched to a systemd
enabled SAP Kernel as per SAP Note 3115048 , the steps mentioned in Red Hat
KBA 6884531 must be completed successfully on all cluster nodes.

9. [1] Create the SAP cluster resources.

Depending on whether you're running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

If you use the enqueue server 2 architecture (ENSA2 ), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as
shown here:

ENSA1

Bash

sudo pcs property set maintenance-mode=true

# If using NFSv3
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
  InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_ASCS

# If using NFSv4.1
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
  InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=105 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_ASCS

sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000

# If using NFSv3
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
  InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_AERS

# If using NFSv4.1
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
  InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_AERS

sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false

sudo pcs node unstandby anftstsapcl1
sudo pcs property set maintenance-mode=false

If you're upgrading from an older version and switching to enqueue server 2, see
SAP Note 2641322 .

7 Note

The higher timeouts suggested for NFSv4.1 are necessary because of a
protocol-specific pause related to NFSv4.1 lease renewals. For more
information, see NFS in NetApp best practice . The timeouts in the preceding
configuration are only examples and might need to be adapted to the specific
SAP setup.

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Online: [ anftstsapcl1 anftstsapcl2 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started
anftstsapcl2
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started
anftstsapcl2
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started
anftstsapcl2
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started
anftstsapcl2
# Resource Group: g-QAS_AERS
# fs_QAS_AERS (ocf::heartbeat:Filesystem): Started
anftstsapcl1
# nc_QAS_AERS (ocf::heartbeat:azure-lb): Started
anftstsapcl1
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started
anftstsapcl1
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started
anftstsapcl1

10. [1] Configure priority-fencing-delay (applicable only with
pacemaker-2.0.4-6.el8 or later).

7 Note

If you have a two-node cluster, you have the option to configure the
priority-fencing-delay cluster property. This property introduces more delay

in fencing a node that has higher total resource priority when a split-brain
scenario occurs. For more information, see Can Pacemaker fence the cluster
node with the fewest running resources? .

The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8 or
later. If you're setting up priority-fencing-delay on an existing cluster, make
sure to clear the pcmk_delay_max setting in the fencing device.

Bash

sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_QAS_ASCS00 meta priority=10

sudo pcs property set priority-fencing-delay=15s
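To confirm that the property is set, you can query it; note that on newer pcs
versions the subcommand is pcs property config rather than pcs property show:

Bash

# Verify the cluster property (use "pcs property config" on newer pcs versions)
sudo pcs property show priority-fencing-delay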

11. [A] Add firewall rules for ASCS and ERS on both nodes.

Bash

# Probe Port of ASCS
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp

# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp
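Optionally, list the open ports afterward to confirm that the rules are in place:

Bash

# Confirm the ports are open in the public zone
sudo firewall-cmd --zone=public --list-ports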

SAP NetWeaver application server preparation


Some databases require that the database instance installation runs on an application
server. Prepare the application server VMs to be able to use them in these cases.

The following steps assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring
hostname resolution) aren't needed.

The following items are prefixed with either:

[A]: Applicable to both PAS and AAS
[P]: Only applicable to PAS
[S]: Only applicable to AAS

1. [A] Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.

text

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
192.168.14.9 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
192.168.14.10 anftstsapers
192.168.14.7 anftstsapa01
192.168.14.8 anftstsapa02

2. [A] Create the sapmnt directory.

Bash

sudo mkdir -p /sapmnt/QAS


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS


sudo chattr +i /usr/sap/trans
3. [A] Install the NFS client and other requirements.

Bash

sudo yum -y install nfs-utils uuidd

4. [A] Add mount entries.

If you use NFSv3:

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3

If you use NFSv4.1:

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys

Mount the new shares.

Bash

sudo mount -a

5. [P] Create and mount the PAS directory.

If you use NFSv3:

Bash

sudo mkdir -p /usr/sap/QAS/D02
sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3

# Mount
sudo mount -a

If you use NFSv4.1:

Bash

sudo mkdir -p /usr/sap/QAS/D02
sudo chattr +i /usr/sap/QAS/D02

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys

# Mount
sudo mount -a

6. [S] Create and mount the AAS directory.

If you use NFSv3:

Bash

sudo mkdir -p /usr/sap/QAS/D03
sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3

# Mount
sudo mount -a

If you use NFSv4.1:

Bash

sudo mkdir -p /usr/sap/QAS/D03
sudo chattr +i /usr/sap/QAS/D03

sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys

# Mount
sudo mount -a

7. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by VM size. Make sure that you do
# not set a value that is too big. You can check the SWAP space with the
# command swapon.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart

Install the database


In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux. For
a list of supported databases, see SAP Note 1928533 .

Run the SAP database instance installation.

Install the SAP NetWeaver database instance as the root by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the database.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.

1. Prepare the application server.

Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.

2. Install the SAP NetWeaver application server.

Install a primary or additional SAP NetWeaver application server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update the SAP HANA secure store.

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries as <sapsid>adm.

Bash

hdbuserstore List

All entries should be listed and look similar to:

Bash

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the VM
and not to the load balancer's IP address. You need to change this entry to point to
the virtual hostname of the load balancer. Make sure to use the same port (30313
in the preceding output) and database name (QAS in the preceding output).

Bash

su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP
schema>
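You can then list the entries again to confirm that the default entry points to the
virtual hostname:

Bash

# Re-check as <sapsid>adm; ENV should now show the virtual hostname and port
hdbuserstore List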

Test the cluster setup


Thoroughly test your Pacemaker cluster. For more information, see Execute the typical
failover tests.
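For example, one such test (a minimal sketch that mirrors the manual ASCS migration
test shown later in this document for the GlusterFS-based setup, using this article's
resource names) is to move the ASCS group and clean up afterward:

Bash

# Run as root on a cluster node
pcs resource move rsc_sap_QAS_ASCS00
pcs resource clear rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
pcs resource cleanup rsc_sap_QAS_ERS01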

Next steps
To deploy a cost-optimization scenario where the PAS and AAS instances are
deployed with the SAP NetWeaver HA cluster on RHEL, see Install SAP dialog
instance with SAP ASCS/SCS high availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
Virtual Machines, see High availability of SAP HANA on Azure Virtual Machines.
Azure Virtual Machines high availability
for SAP NetWeaver on Red Hat
Enterprise Linux
Article • 01/19/2024

This article describes how to deploy virtual machines (VMs), configure the VMs, install
the cluster framework, and install a highly available SAP NetWeaver 7.50 system.

In the example configurations and installation commands, ASCS instance number 00,
ERS instance number 02, and SAP System ID NW1 are used. The names of the resources
(for example, VMs and virtual networks) in the example assume that you used the
ASCS/SCS template with Resource Prefix NW1 to create the resources.

Prerequisites
Read the following SAP Notes and papers first:

SAP Note 1928533 , which has:


A list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.

SAP Note 2015553 lists prerequisites for SAP-supported SAP software


deployments in Azure.

SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
(RHEL).

SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.

SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.

SAP Note 2243692 has information about SAP licensing on Linux in Azure.

SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.

Azure Virtual Machines planning and implementation for SAP on Linux

Azure Virtual Machines deployment for SAP on Linux

Azure Virtual Machines DBMS deployment for SAP on Linux

Product Documentation for Red Hat Gluster Storage

SAP NetWeaver in Pacemaker cluster

General RHEL documentation:


High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP NetWeaver with Standalone Resources in RHEL
7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL

Azure-specific RHEL documentation:


Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is
configured in a separate cluster and multiple SAP systems can use it.
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Standard Azure Load
Balancer. The configuration here shows a load balancer with:

Front-end IP address 10.0.0.7 for ASCS
Front-end IP address 10.0.0.8 for ERS
Probe port 62000 for ASCS
Probe port 62102 for ERS

Set up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. To see
how to set up GlusterFS for SAP NetWeaver, see GlusterFS on Azure VMs on Red Hat
Enterprise Linux for SAP NetWeaver.

Prepare the infrastructure


Azure Marketplace contains images qualified for SAP with the High Availability add-on,
which you can use to deploy new VMs by using various versions of Red Hat.

Deploy Linux VMs manually via the Azure portal


This document assumes that you already deployed an Azure virtual network, subnet, and
resource group.

Deploy VMs for SAP ASCS, ERS and Application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.

Configure Azure load balancer


During VM configuration, you can create a new load balancer or select an existing one
in the networking section. Follow the steps below to configure a standard load balancer
for the high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:

1. Frontend IP Configuration: Create two front-end IP addresses, one for ASCS
and another for ERS. Select the same virtual network and subnet as your
ASCS/ERS virtual machines.
2. Backend Pool: Create a back-end pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load-balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load-balancing rules.
Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the following details (applies to
both ASCS and ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.>
for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to 2.
It's currently not possible to set this property by using the Azure portal, so
use either the Azure CLI or PowerShell.

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
another IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there's no outbound
internet connectivity unless more configuration is performed to allow routing to
public endpoints. For more information on how to achieve outbound connectivity,
see Public endpoint connectivity for VMs using Azure Standard Load Balancer in
SAP high-availability scenarios.

) Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.

Set up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create a Pacemaker cluster


Follow the steps in Set up Pacemaker on Red Hat Enterprise Linux in Azure to create a
basic Pacemaker cluster for this (A)SCS server.

Prepare for the SAP NetWeaver installation


The following items are prefixed with:

[A]: Applicable to all nodes
[1]: Only applicable to node 1
[2]: Only applicable to node 2

1. [A] Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines to the /etc/hosts file. Change the IP address and
hostname to match your environment.

text

# IP addresses of the GlusterFS nodes
10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers

2. [A] Create the shared directories.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

3. [A] Install the GlusterFS client and other required packages.

Bash

sudo yum -y install glusterfs-fuse resource-agents resource-agents-sap

4. [A] Check the version of resource-agents-sap .

Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7.

Bash

sudo yum info resource-agents-sap

# Loaded plugins: langpacks, product-id, search-disabled-repos


# Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache
fast
# Installed Packages
# Name : resource-agents-sap
# Arch : x86_64
# Version : 3.9.5
# Release : 124.el7
# Size : 100 k
# Repo : installed
# From repo : rhel-sap-for-rhel-7-server-rpms
# Summary : SAP cluster resource agents and connector script
# URL : https://github.com/ClusterLabs/resource-agents
# License : GPLv2+
# Description : The SAP resource agents and connector script interface
with
# : Pacemaker to allow SAP instances to be managed in a
cluster
# : environment.

5. [A] Add mount entries.

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-sys /usr/sap/NW1/SYS glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares.

Bash

sudo mount -a
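Optionally, verify that the GlusterFS shares are mounted; GlusterFS mounts typically
appear with the fuse.glusterfs file system type:

Bash

# List mounted GlusterFS volumes
df -hT | grep -i gluster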

6. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by virtual machine size. Make
# sure that you do not set a value that is too big. You can check the SWAP
# space with the command swapon.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash
sudo service waagent restart

7. [A] Configure RHEL.

Based on the RHEL version, perform the configuration mentioned in SAP Note
2002167 , SAP Note 2772999 , or SAP Note 3108316 .

Install SAP NetWeaver ASCS/ERS


1. [1] Configure the cluster default properties.

Bash

pcs resource defaults resource-stickiness=1


pcs resource defaults migration-threshold=3

2. [1] Create a virtual IP resource and health probe for the ASCS instance.

Bash

sudo pcs node standby nw1-cl-1

sudo pcs resource create fs_NW1_ASCS Filesystem device='glust-0:/NW1-ascs' \
  directory='/usr/sap/NW1/ASCS00' fstype='glusterfs' \
  options='backup-volfile-servers=glust-1:glust-2' \
  --group g-NW1_ASCS

sudo pcs resource create vip_NW1_ASCS IPaddr2 \
  ip=10.0.0.7 \
  --group g-NW1_ASCS

sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \
  --group g-NW1_ASCS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node nw1-cl-1: standby


# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-
cl-0
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-
cl-0
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-
cl-0

3. [1] Install SAP NetWeaver ASCS.

Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, nw1-ascs and 10.0.0.7, and the instance number that
you used for the probe of the load balancer, for example, 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting
the owner and group of the ASCS00 folder and retry.

Bash

sudo chown nw1adm /usr/sap/NW1/ASCS00


sudo chgrp sapsys /usr/sap/NW1/ASCS00

4. [1] Create a virtual IP resource and health probe for the ERS instance.

Bash

sudo pcs node unstandby nw1-cl-1


sudo pcs node standby nw1-cl-0

sudo pcs resource create fs_NW1_AERS Filesystem device='glust-0:/NW1-aers' \
  directory='/usr/sap/NW1/ERS02' fstype='glusterfs' \
  options='backup-volfile-servers=glust-1:glust-2' \
  --group g-NW1_AERS

sudo pcs resource create vip_NW1_AERS IPaddr2 \
  ip=10.0.0.8 \
  --group g-NW1_AERS

sudo pcs resource create nc_NW1_AERS azure-lb port=62102 \
  --group g-NW1_AERS

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.

Bash

sudo pcs status

# Node nw1-cl-0: standby


# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-
cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-
cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-
cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-
cl-1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-
cl-1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-
cl-1

5. [2] Install SAP NetWeaver ERS.

Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, nw1-aers and 10.0.0.8, and the instance number that you
used for the probe of the load balancer, for example, 02.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst .

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the
owner and group of the ERS02 folder and retry.

Bash

sudo chown nw1adm /usr/sap/NW1/ERS02


sudo chgrp sapsys /usr/sap/NW1/ERS02

6. [1] Adapt the ASCS/SCS and ERS instance profiles.

ASCS/SCS profile:

Bash

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1


enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .

ERS profile:

Bash

sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile


# Autostart = 1

7. [A] Configure Keep Alive.

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using
ENSA1. Change the Linux system keepalive settings on all SAP servers for both
ENSA1 and ENSA2. For more information, see SAP Note 1410736 .

Bash

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300

8. [A] Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.

Bash

sudo vi /usr/sap/sapservices

# On the node where you installed the ASCS, comment out the following
line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export
LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm

# On the node where you installed the ERS, comment out the following
line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export
LD_LIBRARY_PATH; /usr/sap/NW1/ERS02/exe/sapstartsrv
pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm

9. [1] Create the SAP cluster resources.

Depending on whether you're running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

If you use the enqueue server 2 architecture (ENSA2 ), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as
shown here:

ENSA1
Bash

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
  InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW1_ASCS

sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000

sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \
  InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW1_AERS

sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false

sudo pcs node unstandby nw1-cl-0
sudo pcs property set maintenance-mode=false

7 Note

If you're upgrading from an older version and switching to enqueue server 2,
see SAP Note 2641322 .

7 Note

The timeouts in the preceding configuration are only examples and might
need to be adapted to the specific SAP setup.

Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash

sudo pcs status

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-
cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-
cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-
cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-
cl-1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-
cl-0
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-
cl-0
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-
cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-
cl-0

10. [A] Add firewall rules for ASCS and ERS on both nodes.

Bash

# Probe Port of ASCS
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp

# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port={62102,3202,3302,50213,50214,50216}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62102,3202,3302,50213,50214,50216}/tcp

SAP NetWeaver application server preparation


Some databases require that the database instance installation runs on an application
server. Prepare the application server VMs to be able to use them in these cases.
The following steps assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring
hostname resolution) aren't needed.

1. Set up hostname resolution.

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.

Bash

# IP addresses of the GlusterFS nodes
10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

2. Create the sapmnt directory.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

3. Install the GlusterFS client and other requirements.

Bash
sudo yum -y install glusterfs-fuse uuidd

4. Add mount entries.

Bash

sudo vi /etc/fstab

# Add the following lines to fstab, save and exit
glust-0:/NW1-sapmnt /sapmnt/NW1 glusterfs backup-volfile-servers=glust-1:glust-2 0 0
glust-0:/NW1-trans /usr/sap/trans glusterfs backup-volfile-servers=glust-1:glust-2 0 0

Mount the new shares.

Bash

sudo mount -a

5. Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of the resource disk varies by virtual machine size. Make
# sure that you do not set a value that is too big. You can check the SWAP
# space with the command swapon.
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart

Install the database


In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux. For
a list of supported databases, see SAP Note 1928533 .

1. Run the SAP database instance installation.

Install the SAP NetWeaver database instance as root by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the database, for example, nw1-db and 10.0.0.13.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.

1. Prepare the application server.

Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.

2. Install the SAP NetWeaver application server.

Install a primary or additional SAP NetWeaver application server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. Update the SAP HANA secure store.

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries as <sapsid>adm:


Bash

hdbuserstore List

All entries should be listed and look similar to:

text

DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT
KEY FILE  : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
  ENV : 10.0.0.14:30313
  USER: SAPABAP1
  DATABASE: NW1

The output shows that the IP address of the default entry points to the VM
and not to the load balancer's IP address. This entry needs to be changed to point
to the virtual hostname of the load balancer. Make sure to use the same port
(30313 in the preceding output) and database name (NW1 in the preceding
output).

Bash

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>
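
To verify the change, list the entries again as <sapsid>adm. The ENV field of the DEFAULT key should now show the virtual hostname; the output sketched below is illustrative:

Bash

hdbuserstore List

# KEY DEFAULT
#   ENV : nw1-db:30313
#   USER: SAPABAP1
#   DATABASE: NW1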

Test the cluster setup


1. Manually migrate the ASCS instance.

Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to migrate the ASCS instance.

Bash

[root@nw1-cl-0 ~]# pcs resource move rsc_sap_NW1_ASCS00
[root@nw1-cl-0 ~]# pcs resource clear rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

2. Simulate a node crash.

Resource state before starting the test:


text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ASCS instance is
running.

Bash

[root@nw1-cl-1 ~]# echo b > /proc/sysrq-trigger

The status after the node is started again should look like:

text

Online: [ nw1-cl-0 nw1-cl-1 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete, exitreason='',
    last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms

Use the following command to clean the failed resources.

Bash

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

3. Block network communication.

Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Create a firewall rule on one of the nodes to block the communication.

Bash

# Execute iptables rule on nw1-cl-0 (10.0.0.7) to block the incoming and outgoing traffic to nw1-cl-1 (10.0.0.8)
iptables -A INPUT -s 10.0.0.8 -j DROP; iptables -A OUTPUT -d 10.0.0.8 -j DROP

When cluster nodes can't communicate with each other, there's a risk of a split-
brain scenario. In such situations, cluster nodes try to simultaneously fence each
other, which results in a fence race. To avoid this situation, we recommend that you
set a priority-fencing-delay property in a cluster configuration (applicable only
for pacemaker-2.0.4-6.el8 or higher).

By enabling the priority-fencing-delay property, the cluster introduces a delay in
the fencing action, specifically on the node hosting the ASCS resource, allowing
that node to win the fence race.
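
As an illustration, the property could be configured as follows. This is a sketch: the resource name, priority values, and delay are examples and must be adapted to your cluster.

Bash

# Give the ASCS resource a higher priority than the other resources (example values)
sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10

# Delay fencing on the node that hosts the higher-priority resource
sudo pcs property set priority-fencing-delay=15s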

Run the following command to delete the firewall rule.

Bash

# If the iptables rules were reset by a reboot, they are already cleared out. If they have not been reset, remove them with the following command.
iptables -D INPUT -s 10.0.0.8 -j DROP; iptables -D OUTPUT -d 10.0.0.8 -j DROP

4. Kill the message server process.


Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root to identify the process of the message server
and kill it.

Bash

[root@nw1-cl-0 ~]# pgrep -f ms.sapNW1 | xargs kill -9

If you kill the message server only once, sapstart restarts it. If you kill it often
enough, Pacemaker eventually moves the ASCS instance to the other node. Run
the following commands as root to clean up the resource state of the ASCS and
ERS instance after the test.

Bash

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

5. Kill the enqueue server process.

Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS instance is
running to kill the enqueue server.

Bash

# If using ENSA1
[root@nw1-cl-1 ~]# pgrep -f en.sapNW1 | xargs kill -9

# If using ENSA2
[root@nw1-cl-1 ~]# pgrep -f enq.sapNW1 | xargs kill -9

In the case of ENSA1, the ASCS instance should immediately fail over to the other
node. The ERS instance should also fail over after the ASCS instance is started.
Run the following commands as root to clean up the resource state of the ASCS
and ERS instance after the test.

Bash

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ASCS00


[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

6. Kill the enqueue replication server process.

Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ERS instance is
running to kill the enqueue replication server process.

Bash

# If using ENSA1
[root@nw1-cl-1 ~]# pgrep -f er.sapNW1 | xargs kill -9

# If using ENSA2
[root@nw1-cl-1 ~]# pgrep -f enqr.sapNW1 | xargs kill -9

If you run the command only once, sapstart restarts the process. If you run it
often enough, sapstart won't restart the process, and the resource goes into a
stopped state. Run the following commands as root to clean up the resource state
of the ERS instance after the test.

Bash

[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

7. Kill the enqueue sapstartsrv process.

Resource state before starting the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root on the node where the ASCS is running.

Bash

[root@nw1-cl-0 ~]# pgrep -fl ASCS00.*sapstartsrv
# 59545 sapstartsrv

[root@nw1-cl-0 ~]# kill -9 59545
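
The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the monitoring. Optionally, confirm the automatic restart by listing the process again; the new PID shown below is illustrative:

Bash

# Run again after a short wait; sapstartsrv should be back with a new PID (example value)
[root@nw1-cl-0 ~]# pgrep -fl ASCS00.*sapstartsrv
# 59571 sapstartsrv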

Resource state after the test:

text

rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Next steps
To deploy a cost-optimized scenario where the PAS and AAS instances are
deployed with the SAP NetWeaver HA cluster on RHEL, see Install SAP dialog instance
with SAP ASCS/SCS high availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High availability of SAP HANA on Azure Virtual Machines.
GlusterFS on Azure VMs on Red Hat
Enterprise Linux for SAP NetWeaver
Article • 07/04/2023

This article describes how to deploy the virtual machines, configure the virtual machines,
and install a GlusterFS cluster that can be used to store the shared data of a highly
available SAP system. This guide describes how to set up GlusterFS that is used by two
SAP systems, NW1 and NW2. The names of the resources (for example virtual machines,
virtual networks) in the example assume that you have used the SAP file server
template with resource prefix glust.

Be aware that, as documented in Red Hat Gluster Storage Life Cycle, Red Hat Gluster
Storage reaches end of life at the end of 2024. The configuration is supported for
SAP on Azure until it reaches the end-of-life stage. Don't use GlusterFS for new
deployments. We recommend deploying the SAP shared directories on NFS on Azure
Files or Azure NetApp Files volumes, as documented in HA for SAP NW on RHEL with
NFS on Azure Files or HA for SAP NW on RHEL with Azure NetApp Files.

Read the following SAP Notes and papers first

SAP Note 1928533 , which has:


List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure

SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.

SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux

SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux

SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.

SAP Note 2243692 has information about SAP licensing on Linux in Azure.

SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.

Azure Virtual Machines planning and implementation for SAP on Linux

Azure Virtual Machines deployment for SAP on Linux

Azure Virtual Machines DBMS deployment for SAP on Linux

Product Documentation for Red Hat Gluster Storage

General RHEL documentation


High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Red Hat Gluster Storage Life Cycle

Azure specific RHEL documentation:


Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is
configured in a separate cluster and can be used by multiple SAP systems.
Set up GlusterFS
In this example, the resources were deployed manually via the Azure portal.

Deploy Linux manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy virtual machines for GlusterFS. Choose a suitable RHEL image that is supported
for Gluster storage. You can deploy the VMs in any of the availability options: scale set,
availability zone, or availability set.

Configure GlusterFS
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1, [2] - only applicable to node 2, [3] - only applicable to node 3.

1. [A] Set up hostname resolution

You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment.

text

# IP addresses of the Gluster nodes


10.0.0.40 glust-0
10.0.0.41 glust-1
10.0.0.42 glust-2

2. [A] Register the virtual machines

Register your virtual machines and attach them to a pool that contains repositories for
RHEL 7 and GlusterFS.

Bash

sudo subscription-manager register
sudo subscription-manager attach --pool=<pool id>

3. [A] Enable GlusterFS repos

In order to install the required packages, enable the following repositories.

Bash

sudo subscription-manager repos --disable "*"
sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms

4. [A] Install GlusterFS packages

Install these packages on all GlusterFS nodes

Bash
sudo yum -y install redhat-storage-server

Reboot the nodes after the installation.

5. [A] Modify Firewall

Add firewall rules to allow client traffic to the GlusterFS nodes.

Bash

# List the available zones
firewall-cmd --get-active-zones

sudo firewall-cmd --zone=public --add-service=glusterfs --permanent
sudo firewall-cmd --zone=public --add-service=glusterfs

6. [A] Enable and start GlusterFS service

Start the GlusterFS service on all nodes.

Bash

sudo systemctl start glusterd
sudo systemctl enable glusterd

7. [1] Create the GlusterFS cluster

Run the following commands to create the GlusterFS cluster

Bash

sudo gluster peer probe glust-1
sudo gluster peer probe glust-2

# Check gluster peer status
sudo gluster peer status

# Number of Peers: 2
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Accepted peer request (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Accepted peer request (Connected)
8. [2] Test peer status

Test the peer status on the second node

Bash

sudo gluster peer status


# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Peer in Cluster (Connected)

9. [3] Test peer status

Test the peer status on the third node

Bash

sudo gluster peer status


# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Peer in Cluster (Connected)

10. [A] Create LVM

In this example, GlusterFS is used for two SAP systems, NW1 and NW2. Use the
following commands to create LVM configurations for these SAP systems.

Use these commands for NW1

Bash

sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun0
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-NW1 /dev/disk/azure/scsi1/lun0
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-NW1/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-NW1/trans
sudo lvcreate -l 10%FREE -n rhgs-NW1/sys
sudo lvcreate -l 50%FREE -n rhgs-NW1/ascs
sudo lvcreate -l 100%FREE -n rhgs-NW1/aers
sudo lvscan

sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sapmnt
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW1/aers

sudo mkdir -p /rhs/NW1/sapmnt
sudo mkdir -p /rhs/NW1/trans
sudo mkdir -p /rhs/NW1/sys
sudo mkdir -p /rhs/NW1/ascs
sudo mkdir -p /rhs/NW1/aers

sudo chattr +i /rhs/NW1/sapmnt
sudo chattr +i /rhs/NW1/trans
sudo chattr +i /rhs/NW1/sys
sudo chattr +i /rhs/NW1/ascs
sudo chattr +i /rhs/NW1/aers

echo -e "/dev/rhgs-
NW1/sapmnt\t/rhs/NW1/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,no
uuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW1/trans\t/rhs/NW1/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouu
id 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW1/sys\t/rhs/NW1/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0
2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW1/ascs\t/rhs/NW1/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW1/aers\t/rhs/NW1/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab

sudo mount -a

Use these commands for NW2

Bash

sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun1
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-NW2 /dev/disk/azure/scsi1/lun1
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-NW2/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-NW2/trans
sudo lvcreate -l 10%FREE -n rhgs-NW2/sys
sudo lvcreate -l 50%FREE -n rhgs-NW2/ascs
sudo lvcreate -l 100%FREE -n rhgs-NW2/aers

sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sapmnt
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-NW2/aers

sudo mkdir -p /rhs/NW2/sapmnt
sudo mkdir -p /rhs/NW2/trans
sudo mkdir -p /rhs/NW2/sys
sudo mkdir -p /rhs/NW2/ascs
sudo mkdir -p /rhs/NW2/aers

sudo chattr +i /rhs/NW2/sapmnt
sudo chattr +i /rhs/NW2/trans
sudo chattr +i /rhs/NW2/sys
sudo chattr +i /rhs/NW2/ascs
sudo chattr +i /rhs/NW2/aers
sudo lvscan

echo -e "/dev/rhgs-
NW2/sapmnt\t/rhs/NW2/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,no
uuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/trans\t/rhs/NW2/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouu
id 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/sys\t/rhs/NW2/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0
2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/ascs\t/rhs/NW2/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/aers\t/rhs/NW2/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab

sudo mount -a

11. [1] Create the distributed volume

Use the following commands to create the GlusterFS volume for NW1 and start it.

Bash

sudo gluster vol create NW1-sapmnt replica 3 glust-0:/rhs/NW1/sapmnt glust-1:/rhs/NW1/sapmnt glust-2:/rhs/NW1/sapmnt force
sudo gluster vol create NW1-trans replica 3 glust-0:/rhs/NW1/trans glust-1:/rhs/NW1/trans glust-2:/rhs/NW1/trans force
sudo gluster vol create NW1-sys replica 3 glust-0:/rhs/NW1/sys glust-1:/rhs/NW1/sys glust-2:/rhs/NW1/sys force
sudo gluster vol create NW1-ascs replica 3 glust-0:/rhs/NW1/ascs glust-1:/rhs/NW1/ascs glust-2:/rhs/NW1/ascs force
sudo gluster vol create NW1-aers replica 3 glust-0:/rhs/NW1/aers glust-1:/rhs/NW1/aers glust-2:/rhs/NW1/aers force

sudo gluster volume start NW1-sapmnt
sudo gluster volume start NW1-trans
sudo gluster volume start NW1-sys
sudo gluster volume start NW1-ascs
sudo gluster volume start NW1-aers

Use the following commands to create the GlusterFS volume for NW2 and start it.

Bash

sudo gluster vol create NW2-sapmnt replica 3 glust-0:/rhs/NW2/sapmnt glust-1:/rhs/NW2/sapmnt glust-2:/rhs/NW2/sapmnt force
sudo gluster vol create NW2-trans replica 3 glust-0:/rhs/NW2/trans glust-1:/rhs/NW2/trans glust-2:/rhs/NW2/trans force
sudo gluster vol create NW2-sys replica 3 glust-0:/rhs/NW2/sys glust-1:/rhs/NW2/sys glust-2:/rhs/NW2/sys force
sudo gluster vol create NW2-ascs replica 3 glust-0:/rhs/NW2/ascs glust-1:/rhs/NW2/ascs glust-2:/rhs/NW2/ascs force
sudo gluster vol create NW2-aers replica 3 glust-0:/rhs/NW2/aers glust-1:/rhs/NW2/aers glust-2:/rhs/NW2/aers force

sudo gluster volume start NW2-sapmnt
sudo gluster volume start NW2-trans
sudo gluster volume start NW2-sys
sudo gluster volume start NW2-ascs
sudo gluster volume start NW2-aers
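
Optionally, verify that the volumes are started and that all bricks are online before you mount them from the SAP servers. The commands below are a sketch for one NW1 volume; repeat for the other volumes as needed.

Bash

# Show the volume configuration and replication type
sudo gluster volume info NW1-sapmnt

# All bricks and the self-heal daemons should be listed as Online
sudo gluster volume status NW1-sapmnt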

Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability
and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on Red Hat Enterprise Linux
for SAP applications multi-SID
Article • 01/18/2024

This article describes how to deploy multiple SAP NetWeaver highly available systems
(multi-SID) in a two node cluster on Azure VMs with Red Hat Enterprise Linux for SAP
applications.

In the example configurations, three SAP NetWeaver 7.50 systems are deployed in a
single, two node high availability cluster. The SAP systems SIDs are:

NW1: ASCS instance number 00 and virtual hostname msnw1ascs. ERS instance number 02 and virtual hostname msnw1ers.
NW2: ASCS instance number 10 and virtual hostname msnw2ascs. ERS instance number 12 and virtual hostname msnw2ers.
NW3: ASCS instance number 20 and virtual hostname msnw3ascs. ERS instance number 22 and virtual hostname msnw3ers.

The article doesn't cover the database layer and the deployment of the SAP NFS shares.

The examples in this article use the Azure NetApp Files volume sapMSID for the NFS
shares, assuming that the volume is already deployed. The examples assume that the
Azure NetApp Files volume is deployed with NFSv3 protocol. They use the following file
paths for the cluster resources for the ASCS and ERS instances of SAP systems NW1 , NW2 ,
and NW3 :

volume sapMSID (nfs://10.42.0.4/sapmntNW1)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW1sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW1ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW2)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW2sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW2ers)
volume sapMSID (nfs://10.42.0.4/sapmntNW3)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ascs)
volume sapMSID (nfs://10.42.0.4/usrsapNW3sys)
volume sapMSID (nfs://10.42.0.4/usrsapNW3ers)
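
For orientation, mount entries for the /sapmnt/<SID> and /usr/sap/<SID>/SYS volumes on the cluster nodes would look similar to the following sketch. The mount options shown are typical values for NFSv3 on Azure NetApp Files and are assumptions here; take the authoritative values from the deployment guides referenced later in this article.

Bash

# /etc/fstab sketch for NW1 (illustrative; the ASCS and ERS volumes are mounted by the cluster instead)
10.42.0.4:/sapMSID/sapmntNW1 /sapmnt/NW1 nfs rw,hard,rsize=65536,wsize=65536,vers=3,tcp 0 0
10.42.0.4:/sapMSID/usrsapNW1sys /usr/sap/NW1/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3,tcp 0 0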
Before you begin, refer to the following SAP Notes and papers:

SAP Note 1928533 , which has:


List of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software, and operating system (OS) and database
combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.
Azure NetApp Files documentation.
SAP Note 2015553 has prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux.
Azure Virtual Machines deployment for SAP on Linux.
Azure Virtual Machines DBMS deployment for SAP on Linux.
SAP Netweaver in pacemaker cluster .
General RHEL documentation:
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
The virtual machines that participate in the cluster must be sized to be able to run all
resources in case failover occurs. Each SAP SID can fail over independently from each
other in the multi-SID high availability cluster.

To achieve high availability, SAP NetWeaver requires highly available shares. This article
shows examples with the SAP shares deployed on Azure NetApp Files NFS volumes. You
could instead host the shares on a highly available GlusterFS cluster, which can be used
by multiple SAP systems.

) Important

The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest
operating system in Azure VMs is limited to five SAP SIDs on the same cluster. Each
new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and
Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID
clustering describes the installation of multiple SAP ASCS/ERS instances with
different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only
supported for ASCS/ERS.
 Tip

The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It is
more complex to implement, and it involves higher administrative effort when
executing maintenance activities, like OS patching. Before you start the actual
implementation, take time to carefully plan out the deployment and all the involved
components, like VMs, NFS mounts, VIPs, and load balancer configurations.

SAP NetWeaver ASCS, SAP NetWeaver SCS, and SAP NetWeaver ERS use virtual
hostnames and virtual IP addresses. On Azure, a load balancer is required to use a
virtual IP address. We recommend using Standard load balancer.

Frontend IP addresses for ASCS: 10.3.1.50 (NW1), 10.3.1.52 (NW2), and 10.3.1.54
(NW3)
Frontend IP addresses for ERS: 10.3.1.51 (NW1), 10.3.1.53 (NW2), and 10.3.1.55
(NW3)
Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS, and 62020 for NW3 ASCS
Probe port 62102 for NW1 ERS, 62112 for NW2 ERS, and 62122 for NW3 ERS

) Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there is no outbound internet
connectivity, unless additional configuration is performed to allow routing to public
end points. For details on how to achieve outbound connectivity see Public
endpoint connectivity for Virtual Machines using Azure Standard Load Balancer
in SAP high-availability scenarios.

) Important

Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.
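
A minimal way to apply this setting persistently is shown below; the configuration file name is an example:

Bash

# Disable TCP timestamps so that the load balancer health probes keep working
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-sap-lb.conf
sudo sysctl --system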

SAP shares
SAP NetWeaver requires shared storage for the transport, profile directory, and so on.
For a highly available SAP system, it's important to have highly available shares. You need
to decide on the architecture for your SAP shares. One option is to deploy the shares on
Azure NetApp Files NFS volumes. With Azure NetApp Files, you get built-in high
availability for the SAP NFS shares.

Another option is to build GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP
NetWeaver, which can be shared between multiple SAP systems.

Deploy the first SAP system in the cluster


After you decide on the architecture for the SAP shares, deploy the first SAP system in
the cluster, following the corresponding documentation.

If you use Azure NetApp Files NFS volumes, follow Azure VMs high availability for
SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP
applications.
If you use GlusterFS cluster, follow GlusterFS on Azure VMs on Red Hat Enterprise
Linux for SAP NetWeaver.

These articles guide you through the steps to prepare the necessary infrastructure, build
the cluster, prepare the OS for running the SAP application.

 Tip

Always test the failover functionality of the cluster after the first system is deployed,
before adding the additional SAP SIDs to the cluster. That way, you know that the
cluster functionality works, before adding the complexity of additional SAP systems
to the cluster.

Deploy more SAP systems in the cluster


This example assumes that system NW1 was already deployed in the cluster. This
example shows how to deploy SAP systems NW2 and NW3 in the cluster.
The following items are prefixed with:

[A] Applicable to all nodes


[1] Only applicable to node 1
[2] Only applicable to node 2

Prerequisites

) Important

Before following the instructions to deploy additional SAP systems in the cluster,
deploy the first SAP system in the cluster. There are steps which are only necessary
during the first system deployment.

This article assumes that:

The Pacemaker cluster is already configured and running.
At least one SAP system (ASCS / ERS instance) is already deployed and is running
in the cluster.
The cluster failover functionality has been tested.
The NFS shares for all SAP systems are deployed.

Prepare for SAP NetWeaver Installation


1. Add configuration for the newly deployed system (that is, NW2 and NW3 ) to the
existing Azure Load Balancer, following the instructions Deploy Azure Load
Balancer manually via Azure portal. Adjust the IP addresses, health probe ports,
and load-balancing rules for your configuration.

2. [A] Set up name resolution for the additional SAP systems. You can either use a DNS
server or modify /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Adapt the IP addresses and the host names to your environment.

Bash

sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.52 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.54 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.53 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.55 msnw3ers

3. [A] Create the shared directories for the NW2 and NW3 SAP systems to deploy to
the cluster.

Bash

sudo mkdir -p /sapmnt/NW2
sudo mkdir -p /usr/sap/NW2/SYS
sudo mkdir -p /usr/sap/NW2/ASCS10
sudo mkdir -p /usr/sap/NW2/ERS12
sudo mkdir -p /sapmnt/NW3
sudo mkdir -p /usr/sap/NW3/SYS
sudo mkdir -p /usr/sap/NW3/ASCS20
sudo mkdir -p /usr/sap/NW3/ERS22

sudo chattr +i /sapmnt/NW2
sudo chattr +i /usr/sap/NW2/SYS
sudo chattr +i /usr/sap/NW2/ASCS10
sudo chattr +i /usr/sap/NW2/ERS12
sudo chattr +i /sapmnt/NW3
sudo chattr +i /usr/sap/NW3/SYS
sudo chattr +i /usr/sap/NW3/ASCS20
sudo chattr +i /usr/sap/NW3/ERS22

4. [A] Add the mount entries for the /sapmnt/SID and /usr/sap/SID/SYS file systems
for the other SAP systems that you're deploying to the cluster. In this example, it's
NW2 and NW3 .

Update file /etc/fstab with the file systems for the other SAP systems that you're
deploying to the cluster.

If using Azure NetApp Files, follow the instructions in Azure VMs high
availability for SAP NW on RHEL with Azure NetApp Files.
If using GlusterFS cluster, follow the instructions in Azure VMs high
availability for SAP NW on RHEL.

Install ASCS / ERS


1. Create the virtual IP and health probe cluster resources for the ASCS instances of
the other SAP systems you're deploying to the cluster. This example uses NW2 and
NW3 ASCS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.

Bash

sudo pcs resource create fs_NW2_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ascs' \
  directory='/usr/sap/NW2/ASCS10' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW2_ASCS

sudo pcs resource create vip_NW2_ASCS IPaddr2 \
  ip=10.3.1.52 \
  --group g-NW2_ASCS

sudo pcs resource create nc_NW2_ASCS azure-lb port=62010 \
  --group g-NW2_ASCS

sudo pcs resource create fs_NW3_ASCS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW3ascs' \
  directory='/usr/sap/NW3/ASCS20' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW3_ASCS

sudo pcs resource create vip_NW3_ASCS IPaddr2 \
  ip=10.3.1.54 \
  --group g-NW3_ASCS

sudo pcs resource create nc_NW3_ASCS azure-lb port=62020 \
  --group g-NW3_ASCS

Make sure the cluster status is ok and that all resources are started. It's not
important on which node the resources are running.

2. [1] Install SAP NetWeaver ASCS.

Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS. For example, for
system NW2, the virtual hostname is msnw2ascs with IP address 10.3.1.52, and the
ASCS instance number used for the load balancer probe is 10. For system NW3, the
virtual hostname is msnw3ascs with IP address 10.3.1.54, and the instance number
is 20. Note down on which cluster node you installed ASCS for each SAP SID.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME
to install SAP, using the virtual hostname.

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/<SID>/ASCS<Instance#>,
try setting the owner to <sid>adm and the group to sapsys of the ASCS<Instance#>
folder, and retry.
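
For example, for the NW2 ASCS instance, the ownership could be corrected as follows. This is a sketch; adjust the SID, user, and instance number to your system.

Bash

# Illustrative ownership fix for the NW2 ASCS instance directory
sudo chown nw2adm:sapsys /usr/sap/NW2/ASCS10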

3. [1] Create virtual IP and health probe cluster resources for the ERS instances of
the other SAP systems you're deploying to the cluster. This example is for NW2 and
NW3 ERS, using NFS on Azure NetApp Files volumes with the NFSv3 protocol.

Bash

sudo pcs resource create fs_NW2_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW2ers' \
  directory='/usr/sap/NW2/ERS12' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW2_AERS

sudo pcs resource create vip_NW2_AERS IPaddr2 \
  ip=10.3.1.53 \
  --group g-NW2_AERS

sudo pcs resource create nc_NW2_AERS azure-lb port=62112 \
  --group g-NW2_AERS

sudo pcs resource create fs_NW3_AERS Filesystem device='10.42.0.4:/sapMSIDR/usrsapNW3ers' \
  directory='/usr/sap/NW3/ERS22' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40 \
  --group g-NW3_AERS

sudo pcs resource create vip_NW3_AERS IPaddr2 \
  ip=10.3.1.55 \
  --group g-NW3_AERS

sudo pcs resource create nc_NW3_AERS azure-lb port=62122 \
  --group g-NW3_AERS

Make sure the cluster status is ok and that all resources are started.

Next, make sure that the resources of the newly created ERS group are running on
the cluster node, opposite to the cluster node where the ASCS instance for the
same SAP system was installed. For example, if NW2 ASCS was installed on
rhelmsscl1 , then make sure the NW2 ERS group is running on rhelmsscl2 . You

can migrate the NW2 ERS group to rhelmsscl2 by running the following command
for one of the cluster resources in the group:

Bash

pcs resource move fs_NW2_AERS rhelmsscl2

4. [2] Install SAP NetWeaver ERS.

Install SAP NetWeaver ERS as root on the other node, using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS. For example, for system NW2, the virtual hostname is msnw2ers with IP address
10.3.1.53, and the ERS instance number used for the load balancer probe is 12. For
system NW3, the virtual hostname is msnw3ers with IP address 10.3.1.55, and the
instance number is 22.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
nonroot user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME
to install SAP, using the virtual hostname.

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

7 Note

Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions
correctly and the installation fails.

If the installation fails to create a subfolder in /usr/sap/<SID>/ERS<Instance#>,
try setting the owner to <sid>adm and the group to sapsys of the ERS<Instance#>
folder, and retry.
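
Analogous to the ASCS case, a sketch for the NW2 ERS instance (adjust the SID, user, and instance number):

Bash

# Illustrative ownership fix for the NW2 ERS instance directory
sudo chown nw2adm:sapsys /usr/sap/NW2/ERS12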

If it was necessary for you to migrate the ERS group of the newly deployed SAP
system to a different cluster node, don't forget to remove the location constraint
for the ERS group. You can remove the constraint by running the following
command. This example is given for SAP systems NW2 and NW3 . Make sure to
remove the temporary constraints for the same resource you used in the command
to move the ERS cluster group.

Bash

pcs resource clear fs_NW2_AERS
pcs resource clear fs_NW3_AERS

5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP
systems. The example shown below is for NW2 . You need to adapt the ASCS/SCS
and ERS profiles for all SAP instances added to the cluster.

ASCS/SCS profile

Bash

sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs

# Change the restart command to a start command
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 (see the sketch at the end of this step).

ERS profile

Bash

sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers

# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# remove Autostart from ERS profile
# Autostart = 1
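
As referenced above, the keepalive OS parameters can be set with sysctl. The value below is illustrative; take the authoritative values from SAP note 1410736.

Bash

# Example keepalive setting (illustrative value; see SAP Note 1410736)
sudo sysctl -w net.ipv4.tcp_keepalive_time=300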

6. [A] Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from /usr/sap/sapservices file.
The example shown below is for SAP systems NW2 and NW3 .

Bash

# Depending on whether the SAP Startup framework is integrated with systemd, you may observe the entries below on the node for ASCS instances. You should comment out the line(s).
# LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm
# LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm
# systemctl --no-ask-password start SAPNW2_10 # sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs
# systemctl --no-ask-password start SAPNW3_20 # sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs

# Depending on whether the SAP Startup framework is integrated with systemd, you may observe the entries below on the node for ERS instances. You should comment out the line(s).
# LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm
# LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm
# systemctl --no-ask-password start SAPNW2_12 # sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers
# systemctl --no-ask-password start SAPNW3_22 # sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers

) Important

With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of a SAP kernel with integrated systemd based SAP Startup
Framework support will always result in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel which
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one which is systemd controlled.

When utilizing Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications are necessary to ensure compatibility between the SAPInstance
resource agent and the new systemd-based SAP startup framework. So once the
SAP application server instances have been installed or switched to a systemd-enabled
SAP kernel as per SAP Note 3115048 , the steps mentioned in Red Hat
KBA 6884531 must be completed successfully on all cluster nodes.

7. [1] Create the SAP cluster resources for the newly installed SAP system.

Depending on whether you are running an ENSA1 or ENSA2 system, select the
respective tab to define the resources for SAP systems NW2 and NW3 as follows.
SAP introduced support for ENSA2 , including replication, in SAP NetWeaver 7.52.
Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2
support, see SAP Note 2630416 .

If you use the enqueue server 2 architecture (ENSA2 ), install resource agent
resource-agents-sap-4.1.1-12.el7.x86_64 or newer, and define the resources for SAP
systems NW2 and NW3 as follows:

ENSA1

Bash

sudo pcs property set maintenance-mode=true

sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
  InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW2_ASCS

sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000

sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
  InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW2_AERS

sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false

sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
  InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW3_ASCS

sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000

sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
  InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-NW3_AERS

sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false

sudo pcs property set maintenance-mode=false

If you're upgrading from an older version and switching to enqueue server 2, see
SAP note 2641019 .

7 Note

The timeouts in the above configuration are just examples and might need to
be adapted to the specific SAP setup.

Make sure that the cluster status is ok and that all resources are started. It's not
important on which node the resources are running. The following example shows
the cluster resources status, after SAP systems NW2 and NW3 were added to the
cluster.

Bash

sudo pcs status

# Online: [ rhelmsscl1 rhelmsscl2 ]

# Full list of resources:

# rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1

# Resource Group: g-NW1_ASCS
#     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
#     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
#     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
#     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
# Resource Group: g-NW1_AERS
#     fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
#     vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
#     nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
#     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
# Resource Group: g-NW2_ASCS
#     fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
#     vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
#     nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
#     rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
# Resource Group: g-NW2_AERS
#     fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
#     vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
#     nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
#     rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
# Resource Group: g-NW3_ASCS
#     fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
#     vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
#     nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
#     rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
# Resource Group: g-NW3_AERS
#     fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
#     vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
#     nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
#     rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1

8. [A] Add firewall rules for ASCS and ERS on both nodes. The example below shows
the firewall rules for both SAP systems NW2 and NW3 .

Bash

# NW2 - ASCS
sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp
# NW2 - ERS
sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp
# NW3 - ASCS
sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp
# NW3 - ERS
sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp

Proceed with the SAP installation


Complete your SAP installation by:

Preparing your SAP NetWeaver application servers.
Installing a DBMS instance.
Installing a primary SAP application server.
Installing one or more other SAP application instances.

Test the multi-SID cluster setup


The following tests are a subset of the test cases in the best practices guides of Red Hat.
They're included for your convenience. For the full list of cluster tests, reference the
following documentation:

If you use Azure NetApp Files NFS volumes, follow Azure VMs high availability for
SAP NetWeaver on RHEL with Azure NetApp Files for SAP applications
If you use highly available GlusterFS , follow Azure VMs high availability for SAP
NetWeaver on RHEL for SAP applications.

Always read the Red Hat best practices guides and perform all other tests that might
have been added. The tests that are presented are in a two-node, multi-SID cluster with
three SAP systems installed.

1. Manually migrate the ASCS instance. The example shows migrating the ASCS
instance for SAP system NW3.

Resource state before starting the test:

text

Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1

Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW1_AERS
    fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
    vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
    nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_ASCS
    fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
    vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
    nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
    rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW2_AERS
    fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
    vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
    nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
    rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
Resource Group: g-NW3_ASCS
    fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
    vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
    nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl2
    rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
Resource Group: g-NW3_AERS
    fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
    vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
    nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
    rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1

Run the following commands as root to migrate the NW3 ASCS instance.

Bash

pcs resource move rsc_sap_NW3_ASCS20


# Clear temporary migration constraints
pcs resource clear rsc_sap_NW3_ASCS20

# Remove failed actions for the ERS that occurred as part of the
migration
pcs resource cleanup rsc_sap_NW3_ERS22

Resource state after the test:

Bash

Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2

2. Simulate node crash.

Resource state before starting the test:

Bash


Online: [ rhelmsscl1 rhelmsscl2 ]

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2

Run the following command as root on a node where at least one ASCS instance is
running. This example runs the command on rhelmsscl1 , where the ASCS
instances for NW1 , NW2 , and NW3 are running.

Bash

echo c > /proc/sysrq-trigger

After the test, and after the crashed node has started again, the status should look like these results:

Bash

Full list of resources:

rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl2


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW1_AERS
fs_NW1_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW1_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW2_AERS
fs_NW2_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW2_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
rhelmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
rhelmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
rhelmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
rhelmsscl2
Resource Group: g-NW3_AERS
fs_NW3_AERS (ocf::heartbeat:Filesystem): Started
rhelmsscl1
vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started
rhelmsscl1
nc_NW3_AERS (ocf::heartbeat:azure-lb): Started
rhelmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
rhelmsscl1

If there are messages for failed resources, clean the status of the failed resources.
For example:

Bash

pcs resource cleanup rsc_sap_NW1_ERS02

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP

To learn how to establish high availability and plan for disaster recovery of SAP HANA
on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines (VMs).
High-availability SAP NetWeaver with
simple mount and NFS on SLES for SAP
Applications VMs
Article • 05/06/2024

This article describes how to deploy and configure Azure virtual machines (VMs), install
the cluster framework, and install a high-availability (HA) SAP NetWeaver system with a
simple mount structure. You can implement the presented architecture by using one of
the following Azure native Network File System (NFS) services:

NFS on Azure Files.


Azure NetApp Files.

The simple mount configuration is expected to be the default for new implementations
on SLES for SAP Applications 15.

Prerequisites
The following guides contain all the required information to set up a NetWeaver HA
system:

SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster With Simple
Mount
Use of Filesystem resource for ABAP SAP Central Services (ASCS)/ERS HA setup not
possible
SAP Note 1928533 , which has:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, operating systems (OSs), and combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 , which lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 , which has recommended OS settings for SUSE Linux
Enterprise Server (SLES) for SAP Applications
SAP Note 2178632 , which has detailed information about all monitoring metrics
reported for SAP in Azure
SAP Note 2191498 , which has the required SAP Host Agent version for Linux in
Azure
SAP Note 2243692 , which has information about SAP licensing on Linux in Azure
SAP Note 2578899 , which has general information about SUSE Linux Enterprise
Server 15
SAP Note 1275776 , which has information about preparing SUSE Linux
Enterprise Server for SAP environments
SAP Note 1999351 , which has additional troubleshooting information for the
Azure Enhanced Monitoring Extension for SAP
SAP community wiki , which has all required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA best practice guides
SUSE High Availability Extension release notes
Azure Files documentation
NetApp NFS best practices

Overview
This article describes a high-availability configuration for ASCS with a simple mount
structure. To deploy the SAP application layer, you need highly available shared
directories like /sapmnt/SID , /usr/sap/SID , and /usr/sap/trans . You can
deploy these file systems on NFS on Azure Files or Azure NetApp Files.

You still need a Pacemaker cluster to help protect single-point-of-failure components


like SAP Central Services (SCS) and ASCS.

Compared to the classic Pacemaker cluster configuration, with the simple mount
deployment, the cluster doesn't manage the file systems. This configuration is
supported only on SLES for SAP Applications 15 and later. This article doesn't cover the
database layer in detail.

The example configurations and installation commands use the following instance
numbers.

ノ Expand table

Instance name Instance number

ASCS 00

Enqueue Replication Server (ERS) 01

Primary Application Server (PAS) 02


Instance name Instance number

Additional Application Server (AAS) 03

SAP system identifier NW1

) Important

The configuration with simple mount structure is supported only on SLES for SAP
Applications 15 and later releases.

Prepare the infrastructure


The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.

Deploy Linux VMs manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy virtual machines by using a SLES for SAP Applications image. Choose a version of
the SLES image that's supported for your SAP system. You can deploy the VMs in any of the
availability options: virtual machine scale set, availability zone, or availability set.

Configure Azure load balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to configure a standard load balancer for the
high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load balancer,
consider the following points.

1. Frontend IP Configuration: Create two front-end IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load-balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load-balancing rules.

Frontend IP address: Select frontend IP


Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create health probe with below details (applies for both
ASCS or ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.>
for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note
The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to
2. It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or a PowerShell command.
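
For example, the following Azure CLI sketch creates the ASCS health probe with probeThreshold set to 2. The resource group and load balancer names are placeholders, the port assumes ASCS instance number 00, and a recent Azure CLI version that supports --probe-threshold is assumed.

Bash

# Create the ASCS health probe with probeThreshold=2 (names and port are placeholders).
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name ascs-health-probe \
  --protocol tcp \
  --port 62000 \
  --interval 5 \
  --probe-threshold 2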

) Important

A floating IP address isn't supported on a network interface card (NIC) secondary IP


configuration in load-balancing scenarios. For details, see Azure Load Balancer
limitations. If you need another IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there will be no
outbound internet connectivity unless you perform additional configuration to
allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer

health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more information, see saptune 3.1.1 – Do I Need to Update? .
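
For example, a minimal sketch that disables TCP timestamps immediately and persists the setting across reboots (the drop-in file name is only an example):

Bash

# Disable TCP timestamps at runtime.
sudo sysctl net.ipv4.tcp_timestamps=0
# Persist the setting across reboots.
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/90-sap-tcp-timestamps.conf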

Deploy NFS
There are two options for deploying Azure native NFS to host the SAP shared
directories. You can either deploy an NFS file share on Azure Files or deploy an NFS
volume on Azure NetApp Files. NFS on Azure Files supports the NFSv4.1 protocol. NFS
on Azure NetApp Files supports both NFSv4.1 and NFSv3.

The next sections describe the steps to deploy NFS. Select only one of the options.

Deploy an Azure Files storage account and NFS shares


NFS on Azure Files runs on top of Azure Files premium storage. Before you set up NFS
on Azure Files, see How to create an NFS share.

There are two options for redundancy within an Azure region:

Locally redundant storage (LRS) offers local, in-zone synchronous data replication.
Zone-redundant storage (ZRS) replicates your data synchronously across the three
availability zones in the region.

Check if your selected Azure region offers NFSv4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region for Premium Files
Storage. If your scenario benefits from ZRS, verify that premium file shares with ZRS are
supported in your Azure region.

We recommend that you access your Azure storage account through an Azure private
endpoint. Be sure to deploy the Azure Files storage account endpoint, and the VMs
where you need to mount the NFS shares, in the same Azure virtual network or in
peered Azure virtual networks.

1. Deploy an Azure Files storage account named sapnfsafs. This example uses ZRS. If
you're not familiar with the process, see Create a storage account for the Azure
portal.
2. On the Basics tab, use these settings:
a. For Storage account name, enter sapnfsafs.
b. For Performance, select Premium.
c. For Premium account type, select FileStorage.
d. For Replication, select Zone redundancy (ZRS).
3. Select Next.
4. On the Advanced tab, clear Require secure transfer for REST API. If you don't clear
this option, you can't mount the NFS share to your VM. The mount operation will
time out.
5. Select Next.
6. In the Networking section, configure these settings:
a. Under Networking connectivity, for Connectivity method, select Private
endpoint.
b. Under Private endpoint, select Add private endpoint.
7. On the Create private endpoint pane, select your subscription, resource group,
and location. Then make the following selections:
a. For Name, enter sapnfsafs_pe.
b. For Storage sub-resource, select file.
c. Under Networking, for Virtual network, select the virtual network and subnet to
use. Again, you can use either the virtual network where your SAP VMs are or a
peered virtual network.
d. Under Private DNS integration, accept the default option of Yes for Integrate
with private DNS zone. Be sure to select your private DNS zone.
e. Select OK.
8. On the Networking tab again, select Next.
9. On the Data protection tab, keep all the default settings.
10. Select Review + create to validate your configuration.
11. Wait for the validation to finish. Fix any issues before continuing.
12. On the Review + create tab, select Create.
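
If you prefer scripting over the portal, the following Azure CLI sketch creates an equivalent storage account. The resource group and region are placeholders, secure transfer is disabled because NFS shares require it, and the private endpoint configuration is omitted here:

Bash

# Create a premium FileStorage account with ZRS; secure transfer must be off for NFS.
az storage account create \
  --name sapnfsafs \
  --resource-group MyResourceGroup \
  --location westeurope \
  --kind FileStorage \
  --sku Premium_ZRS \
  --https-only false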

Next, deploy the NFS shares in the storage account that you created. In this example,
there are two NFS shares, sapnw1 and saptrans .

1. Sign in to the Azure portal .


2. Select or search for Storage accounts.
3. On the Storage accounts page, select sapnfsafs.
4. On the resource menu for sapnfsafs, select File shares under Data storage.
5. On the File shares page, select File share, and then:
a. For Name, enter sapnw1, saptrans.
b. Select an appropriate share size. Consider the size of the data stored on the
share, I/O per second (IOPS), and throughput requirements. For more
information, see Azure file share targets.
c. Select NFS as the protocol.
d. Select No root Squash. Otherwise, when you mount the shares on your VMs,
you can't see the file owner or group.
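
As an alternative to the portal steps, you can create the shares with the Azure CLI. The following sketch uses a 128-GiB quota as an example value only; size your shares for your own capacity, IOPS, and throughput needs:

Bash

# Create the two NFS shares with root squash disabled (quota in GiB is an example).
az storage share-rm create --resource-group MyResourceGroup --storage-account sapnfsafs \
  --name sapnw1 --enabled-protocols NFS --root-squash NoRootSquash --quota 128
az storage share-rm create --resource-group MyResourceGroup --storage-account sapnfsafs \
  --name saptrans --enabled-protocols NFS --root-squash NoRootSquash --quota 128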

The SAP file systems that don't need to be mounted via NFS can also be deployed on
Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.

Important considerations for NFS on Azure Files shares


When you plan your deployment with NFS on Azure Files, consider the following
important points:
The minimum share size is 100 gibibytes (GiB). You pay for only the capacity of the
provisioned shares.
Size your NFS shares not only based on capacity requirements, but also on IOPS
and throughput requirements. For details, see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues with NFS
on Azure Files, consult Troubleshoot Azure file share performance.
For SAP J2EE systems, placing /usr/sap/<SID>/J<nr> on NFS on Azure Files isn't
supported.
If your SAP system has a heavy load of batch jobs, you might have millions of job
logs. If the SAP batch job logs are stored in the file system, pay special attention to
the sizing of the sapmnt share. As of SAP_BASIS 7.52, the default behavior for the
batch job logs is to be stored in the database. For details, see Job log in the
database .
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces.
Don't use the saptrans share for any other activity, such as interfaces.
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also scalability and performance targets for storage accounts.
Be careful to not exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than five SAP systems in a single
storage account. This guideline helps you avoid exceeding the storage account
limits and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for nonproduction and production SAP
systems in the same storage account.
We recommend that you deploy on SLES 15 SP2 or later to benefit from NFS client
improvements.
Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions
automatically redirect to a healthy zone. You don't have to remount the NFS shares
on your VMs.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that supports ZRS.
Azure Files doesn't currently support automatic cross-region replication for
disaster recovery scenarios.

Deploy Azure NetApp Files resources


1. Check that the Azure NetApp Files service is available in your Azure region of
choice .
2. Create the NetApp account in the selected Azure region. Follow these instructions.

3. Set up the Azure NetApp Files capacity pool. Follow these instructions.

The SAP NetWeaver architecture presented in this article uses a single Azure
NetApp Files capacity pool, Premium SKU. We recommend Azure NetApp Files
Premium SKU for SAP NetWeaver application workloads on Azure.

4. Delegate a subnet to Azure NetApp Files, as described in these instructions.

5. Deploy Azure NetApp Files volumes by following these instructions. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.

Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. This example
uses two Azure NetApp Files volumes: sapnw1 and trans . The file paths that are
mounted to the corresponding mount points are:

Volume sapnw1 ( nfs://10.27.1.5/sapnw1/sapmntNW1 )


Volume sapnw1 ( nfs://10.27.1.5/sapnw1/usrsapNW1 )
Volume trans ( nfs://10.27.1.5/trans )
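
If you script the deployment instead of using the portal, a hedged Azure CLI sketch for one of the volumes follows. The account, pool, and network names, as well as the 1-TiB quota, are placeholders; adjust them to your sizing:

Bash

# Create the sapnw1 volume as NFSv4.1 in the delegated subnet (names and sizes are placeholders).
az netappfiles volume create \
  --resource-group MyResourceGroup \
  --account-name MyNetAppAccount \
  --pool-name MyCapacityPool \
  --name sapnw1 \
  --location westeurope \
  --service-level Premium \
  --usage-threshold 1024 \
  --file-path sapnw1 \
  --vnet MyVNet \
  --subnet anf-subnet \
  --protocol-types NFSv4.1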

The SAP file systems that don't need to be shared can also be deployed on Azure disk
storage. For example, /usr/sap/NW1/D02 and /usr/sap/NW1/D03 could be deployed as
Azure disk storage.

Important considerations for NFS on Azure NetApp Files

When you're considering Azure NetApp Files for the SAP NetWeaver high-availability
architecture, be aware of the following important considerations:

The minimum capacity pool is 4 tebibytes (TiB). You can increase the size of the
capacity pool in 1-TiB increments.
The minimum volume is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes are
mounted must be in the same Azure virtual network or in peered virtual networks
in the same region. Azure NetApp Files access over virtual network peering in the
same region is supported. Azure NetApp Files access over global peering isn't yet
supported.
The selected virtual network must have a subnet that's delegated to Azure NetApp
Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service levels
for Azure NetApp Files. For example, at the Premium service level, a volume gets
64 MiB/s of throughput per TiB of provisioned quota. When you're sizing the Azure
NetApp Files volumes for SAP, make sure that the resulting throughput meets the
application's requirements.
Azure NetApp Files offers an export policy. You can control the allowed clients and
the access type (for example, read/write or read-only).
Azure NetApp Files isn't zone aware yet. Currently, Azure NetApp Files isn't
deployed in all availability zones in an Azure region. Be aware of the potential
latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).

Set up ASCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create a Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for SAP ASCS.

Prepare for installation


The following items are prefixed with:

[A]: Applicable to all nodes.


[1]: Applicable to only node 1.
[2]: Applicable to only node 2.

1. [A] Install the latest version of the SUSE connector.

Bash

sudo zypper install sap-suse-cluster-connector

2. [A] Install the sapstartsrv resource agent.

Bash
sudo zypper install sapstartsrv-resource-agents

3. [A] Update SAP resource agents.

To use the configuration that this article describes, you need a patch for the
resource-agents package. To check if the patch is already installed, use the
following command.

Bash

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to the following example.

Bash

<parameter name="IS_ERS" unique="0" required="0">

If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page .

) Important

You need to install at least sapstartsrv-resource-agents version 0.91 and


resource-agents 4.x from November 2021.

4. [A] Set up host name resolution.

You can either use a DNS server or modify /etc/hosts on all nodes. This example
shows how to use the /etc/hosts file.

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and host name to
match your environment.

Bash
# IP address of cluster node 1
10.27.0.6 sap-cl1
# IP address of cluster node 2
10.27.0.7 sap-cl2
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ASCS
10.27.0.9 sapascs
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ERS
10.27.0.10 sapers

5. [A] Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Check if the ResourceDisk.Format property is already set to y, and if


not, set it.
ResourceDisk.Format=y

# Set the ResourceDisk.EnableSwap property to y.


# Create and use the SWAP file on the resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with the ResourceDisk.SwapSizeMB


property.
# The free space of resource disk varies by virtual machine size. Don't
set a value that's too big. You can check the SWAP space by using the
swapon command.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart
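
Optionally, verify that the swap file is active after the restart:

Bash

# The resource-disk swap file should appear in the output.
sudo swapon --show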

Prepare SAP directories if you're using NFS on Azure Files


1. [1] Create the SAP directories on the NFS share.

Temporarily mount the NFS share sapnw1 to one of the VMs and create the SAP
directories that will be used as nested mount points.

Bash
# Temporarily mount the volume.
sudo mkdir -p /saptmp
sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1
/saptmp -o noresvport,vers=4,minorversion=1,sec=sys
# Create the SAP directories.
sudo cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
# Unmount the volume and delete the temporary directory.
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

2. [A] Create the shared directories.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/NW1
sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/NW1
sudo chattr +i /usr/sap/trans

3. [A] Mount the file systems.

With the simple mount configuration, the Pacemaker cluster doesn't control the
file systems.

Bash

echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1
/sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/
/usr/sap/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans
/usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
# Mount the file systems.
mount -a
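
Optionally, confirm that all three file systems are mounted:

Bash

df -h /sapmnt/NW1 /usr/sap/NW1 /usr/sap/trans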

Prepare SAP directories if you're using NFS on Azure


NetApp Files
The instructions in this section are applicable only if you're using Azure NetApp Files
volumes with the NFSv4.1 protocol. Perform the configuration on all VMs where Azure
NetApp Files NFSv4.1 volumes will be mounted.

1. [A] Disable ID mapping.

a. Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, defaultv4iddomain.com . Also verify that the
mapping is set to nobody .

Bash

sudo cat /etc/idmapd.conf


# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

b. Verify nfs4_disable_idmapping . It should be set to Y .

To create the directory structure where nfs4_disable_idmapping is located, run
the mount command. You won't be able to manually create the directory under
/sys/module , because access is reserved for the kernel and drivers.

Bash

# Check nfs4_disable_idmapping.
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y:
mkdir /mnt/tmp
mount 10.27.1.5:/sapnw1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent.
echo "options nfs nfs4_disable_idmapping=Y" >>
/etc/modprobe.d/nfs.conf

2. [1] Temporarily mount the Azure NetApp Files volume on one of the VMs and
create the SAP directories (file paths).

Bash
# Temporarily mount the volume.
sudo mkdir -p /saptmp
# If you're using NFSv3:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp
10.27.1.5:/sapnw1 /saptmp
# If you're using NFSv4.1:
sudo mount -t nfs -o
rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp
10.27.1.5:/sapnw1 /saptmp
# Create the SAP directories.
sudo cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
# Unmount the volume and delete the temporary directory.
sudo cd ..
sudo umount /saptmp
sudo rmdir /saptmp

3. [A] Create the shared directories.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/NW1
sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/NW1
sudo chattr +i /usr/sap/trans

4. [A] Mount the file systems.

With the simple mount configuration, the Pacemaker cluster doesn't control the
file systems.

Bash

# If you're using NFSv3:


echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=3,hard 0 0"
>> /etc/fstab
echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs nfsvers=3,hard 0 0"
>> /etc/fstab
echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=3,hard 0 0" >>
/etc/fstab
# If you're using NFSv4.1:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs
nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs
nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=4.1,sec=sys,hard 0
0" >> /etc/fstab
# Mount the file systems.
mount -a

Install SAP NetWeaver ASCS and ERS


1. [1] Create a virtual IP resource and health probe for the ASCS instance.

) Important

We recommend using the azure-lb resource agent, which is part of the


resource-agents package with a minimum version of resource-agents-
4.3.0184.6ee15eb2-4.13.1 .

Bash

sudo crm node standby sap-cl2


sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
params ip=10.27.0.9 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000 \


op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \


meta resource-stickiness=3000

Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.

Bash

sudo crm_mon -r
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl1
# Resource Group: g-NW1_ASCS
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1

2. [1] Install SAP NetWeaver ASCS as root on the first node.


Use a virtual host name that maps to the IP address of the load balancer's front-
end configuration for ASCS (for example, sapascs , 10.27.0.9 ) and the instance
number that you used for the probe of the load balancer (for example, 00 ).

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-


root user to connect to sapinst . You can use the SAPINST_USE_HOSTNAME parameter
to install SAP by using a virtual host name.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin


SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00 , set the owner


and group of the ASCS00 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ASCS00


chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health probe for the ERS instance.

Bash

sudo crm node online sap-cl2


sudo crm node standby sap-cl1

sudo crm configure primitive vip_NW1_ERS IPaddr2 \


params ip=10.27.0.10 \
op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ERS azure-lb port=62101 \


op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ERS nc_NW1_ERS vip_NW1_ERS

Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.

Bash

sudo crm_mon -r

# Node sap-cl1: standby


# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
# Resource Group: g-NW1_ASCS
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# Resource Group: g-NW1_ERS
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2

4. [2] Install SAP NetWeaver ERS as root on the second node.

Use a virtual host name that maps to the IP address of the load balancer's front-
end configuration for ERS (for example, sapers , 10.27.0.10 ) and the instance
number that you used for the probe of the load balancer (for example, 01 ).

You can use the SAPINST_REMOTE_ACCESS_USER parameter to allow a non-root user


to connect to sapinst . You can use the SAPINST_USE_HOSTNAME parameter to install
SAP by using a virtual host name.

Bash

<swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
SAPINST_USE_HOSTNAME=<virtual_hostname>

7 Note

Use SWPM SP 20 PL 05 or later. Earlier versions don't set the permissions


correctly, and they cause the installation to fail.

If the installation fails to create a subfolder in /usr/sap/NW1/ERS01 , set the owner


and group of the ERS01 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ERS01


chgrp sapsys /usr/sap/NW1/ERS01

5. [1] Adapt the ASCS instance profile.

Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs

# Change the restart command to a start command.


# Restart_Program_01 = local $(_EN) pf=$(_PF).
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines.


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keepalive parameter, if you're using ENSA1.


enque/encni/set_so_keepalive = true

For Standalone Enqueue Server 1 and 2 (ENSA1 and ENSA2), make sure that the
keepalive OS parameters are set as described in SAP Note 1410736 .

Now adapt the ERS instance profile.

Bash

sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers

# Change the restart command to a start command.


# Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID).
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines.


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Remove Autostart from the ERS profile.


# Autostart = 1

6. [A] Configure keepalive .

Communication between the SAP NetWeaver application server and ASCS is


routed through a software load balancer. The load balancer disconnects inactive
connections after a configurable timeout.

To prevent this disconnection, you need to set a parameter in the SAP NetWeaver
ASCS profile, if you're using ENSA1. Change the Linux system keepalive settings
on all SAP servers for both ENSA1 and ENSA2. For more information, read SAP
Note 1410736 .

Bash

# Change the Linux system configuration.


sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Configure the SAP users after the installation.

Bash

# Add sidadm to the haclient group.


sudo usermod -aG haclient nw1adm

8. [1] Add the ASCS and ERS SAP services to the sapservice file.

Add the ASCS service entry to the second node, and copy the ERS service entry to
the first node.

Bash

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh sap-cl2 "cat


>>/usr/sap/sapservices"
sudo ssh sap-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a
/usr/sap/sapservices

9. [A] Enable sapping and sappong . The sapping agent runs before sapinit to hide
the /usr/sap/sapservices file. The sappong agent runs after sapinit to unhide the
sapservices file during VM boot. SAPStartSrv isn't started automatically for an

SAP instance at boot time, because the Pacemaker cluster manages it.

Bash

sudo systemctl enable sapping


sudo systemctl enable sappong

10. [1] Create SAPStartSrv resource for ASCS and ERS by creating a file and then load
the file.

Bash

vi crm_sapstartsrv.txt

Enter the following primitives in the crm_sapstartsrv.txt file, and then save it.

Bash

primitive rsc_sapstartsrv_NW1_ASCS00 ocf:suse:SAPStartSrv \


params InstanceName=NW1_ASCS00_sapascs
primitive rsc_sapstartsrv_NW1_ERS01 ocf:suse:SAPStartSrv \
params InstanceName=NW1_ERS01_sapers

Load the file by using the following command.

Bash

sudo crm configure load update crm_sapstartsrv.txt

7 Note

If you've set up a SAPStartSrv resource by using the "crm configure primitive…"
command on crmsh version 4.4.0+20220708.6ed6b56f-150400.3.3.1 or later,
it's important to review the configuration of the SAPStartSrv resource
primitives. If a monitor operation is present, it should be removed. SUSE
also suggests removing the start and stop operations, but these aren't as
crucial as the monitor operation. For more information, see recent changes to
crmsh package can result in unsupported configuration of SAPStartSrv
resource agent in a SAP NetWeaver HA cluster .

11. [1] Create the SAP cluster resources.

Depending on whether you're running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

ENSA1

Bash

sudo crm configure property maintenance-mode="true"

# If you're using NFS on Azure Files or NFSv3 on Azure NetApp


Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ASCS00_sapascs
START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \
AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \
meta resource-stickiness=5000 failure-timeout=60 migration-
threshold=1 priority=10

# If you're using NFS on Azure Files or NFSv3 on Azure NetApp


Files:
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=NW1_ERS01_sapers
START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true \
meta priority=1000

# If you're using NFSv4.1 on Azure NetApp Files:


sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
op monitor interval=11 timeout=105 on-fail=restart \
params InstanceName=NW1_ASCS00_sapascs
START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \
AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \
meta resource-stickiness=5000 failure-timeout=60 migration-
threshold=1 priority=10

# If you're using NFSv4.1 on Azure NetApp Files:


sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
op monitor interval=11 timeout=105 on-fail=restart \
params InstanceName=NW1_ERS01_sapers
START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true \
meta priority=1000

sudo crm configure modgroup g-NW1_ASCS add


rsc_sapstartsrv_NW1_ASCS00
sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sapstartsrv_NW1_ERS01
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS


g-NW1_ASCS
sudo crm configure location loc_sap_NW1_failover_to_ers
rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
sudo crm configure order ord_sap_NW1_first_start_ascs Optional:
rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false

sudo crm_attribute --delete --name priority-fencing-delay

sudo crm node online sap-cl1


sudo crm configure property maintenance-mode="false"

If you're upgrading from an older version and switching to ENSA2, see SAP Note
2641019 .

Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.

Bash
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
# Resource Group: g-NW1_ASCS
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
# rsc_sapstartsrv_NW1_ASCS00 (ocf::suse:SAPStartSrv): Started
sap-cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
# Resource Group: g-NW1_ERS
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
# rsc_sapstartsrv_NW1_ERS01 (ocf::suse:SAPStartSrv): Started
sap-cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl2

Prepare the SAP application server


Some databases require you to execute the database installation on an application
server. Prepare the application server VMs to be able to execute the database
installation.

The following common steps assume that you install the application server on a server
that's different from the ASCS and HANA servers:

1. Set up host name resolution.

You can either use a DNS server or modify /etc/hosts on all nodes. This example
shows how to use the /etc/hosts file.

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts . Change the IP address and host name to
match your environment.

Bash

10.27.0.6 sap-cl1
10.27.0.7 sap-cl2
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ASCS
10.27.0.9 sapascs
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ERS
10.27.0.10 sapers
10.27.0.8 sapa01
10.27.0.12 sapa02

2. Configure the SWAP file.

Bash

sudo vi /etc/waagent.conf

# Set the ResourceDisk.EnableSwap property to y.


# Create and use the SWAP file on the resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file by using the ResourceDisk.SwapSizeMB


property.
# The free space of the resource disk varies by virtual machine size.
Don't set a value that's too big. You can check the SWAP space by using
the swapon command.
ResourceDisk.SwapSizeMB=2000

Restart the agent to activate the change.

Bash

sudo service waagent restart

Prepare SAP directories


If you're using NFS on Azure Files, use the following instructions to prepare the SAP
directories on the SAP application server VMs:

1. Create the mount points.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

2. Mount the file systems.

Bash
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1
/sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans
/usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
# Mount the file systems.
mount -a

If you're using NFS on Azure NetApp Files, use the following instructions to prepare the
SAP directories on the SAP application server VMs:

1. Create the mount points.

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans

2. Mount the file systems.

Bash

# If you're using NFSv3:


echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs nfsvers=3,hard 0 0"
>> /etc/fstab
echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=3, hard 0 0" >>
/etc/fstab
# If you're using NFSv4.1:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs
nfsvers=4.1,sec=sys,hard 0 0" >> /etc/fstab
echo "10.27.1.5:/saptrans /usr/sap/trans nfs nfsvers=4.1,sec=sys,hard 0
0" >> /etc/fstab
# Mount the file systems.
mount -a

Install the database


In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High availability of SAP HANA on Azure virtual machines. For a list of
supported databases, see SAP Note 1928533 .
Install the SAP NetWeaver database instance as root by using a virtual host name that
maps to the IP address of the load balancer's front-end configuration for the database.
You can use the SAPINST_REMOTE_ACCESS_USER parameter to allow a non-root user to
connect to sapinst .

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

Install the SAP NetWeaver application server


Follow these steps to install an SAP application server:

1. [A] Prepare the application server.

Follow the steps in SAP NetWeaver application server preparation.

2. [A] Install a primary or additional SAP NetWeaver application server.

You can use the SAPINST_REMOTE_ACCESS_USER parameter to allow a non-root user


to connect to sapinst .

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update the SAP HANA secure store to point to the virtual name of the SAP
HANA system replication setup.

Run the following command to list the entries.

Bash

hdbuserstore List

The command should list all entries and should look similar to this example.

Bash

DATA FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.DAT


KEY FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.27.0.4:30313
USER: SAPABAP1
DATABASE: NW1

In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual host name of the load balancer.
Be sure to use the same port and database name. For example, use 30313 and NW1
in the sample output.

Bash

su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP
schema>

Test your cluster setup


Thoroughly test your Pacemaker cluster. Run the typical failover tests, such as migrating the ASCS resource group and simulating a node crash.
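
For example, a minimal resource-migration test, assuming the resource names used earlier in this article and a recent crmsh version:

Bash

# Migrate the ASCS group away from its current node.
sudo crm resource move rsc_sap_NW1_ASCS00 force
# After the migration finishes, remove the temporary location constraint.
sudo crm resource clear rsc_sap_NW1_ASCS00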

Next steps
HA for SAP NetWeaver on Azure VMs on SLES for SAP applications multi-SID guide
SAP workload configurations with Azure availability zones
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High Availability of SAP HANA on Azure VMs
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Article • 02/05/2024

This article describes how to deploy and configure VMs, install the cluster framework,
and install an HA SAP NetWeaver system, using NFS on Azure Files. The example
configurations use VMs that run on SUSE Linux Enterprise Server (SLES).

For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in the simple mount configuration. The classic
Pacemaker configuration described in this article, which is based on cluster-controlled
file systems for the SAP central services directories, is still supported .

Prerequisites
Azure Files documentation.
SAP Note 1928533 , which has:
List of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software, and operating system (OS) and database
combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 2578899 has general information about SUSE Linux Enterprise Server
15
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux.
Azure Virtual Machines deployment for SAP on Linux.
Azure Virtual Machines DBMS deployment for SAP on Linux.
SUSE SAP HA Best Practice Guides . The guides contain all required information
to set up Netweaver HA and SAP HANA System Replication on-premises. Use these
guides as a general baseline. They provide much more detailed information.
SUSE High Availability Extension Release Notes .

Overview
To deploy the SAP NetWeaver application layer, you need shared directories like
/sapmnt/SID and /usr/sap/trans in the environment. Additionally, when deploying an

HA SAP system, you need to protect and make highly available file systems like
/sapmnt/SID and /usr/sap/SID/ASCS .

Now you can place these file systems on NFS on Azure Files. NFS on Azure Files is an HA
storage solution that offers synchronous zone-redundant storage (ZRS) and is
suitable for SAP ASCS/ERS instances deployed across availability zones. You still need a
Pacemaker cluster to protect single-point-of-failure components like SAP NetWeaver
Central Services (ASCS/SCS).

The example configurations and installation commands use the following instance
numbers:

ノ Expand table

Instance name Instance number

ABAP SAP Central Services (ASCS) 00

ERS 01

Primary Application Server (PAS) 02

Additional Application Server (AAS) 03

SAP system identifier NW1


Prepare infrastructure
The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.

Deploy Linux VMs manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy virtual machines by using a SLES for SAP Applications image. Choose a version of
the SLES image that's supported for your SAP system. You can deploy the VMs in any of the
availability options: virtual machine scale set, availability zone, or availability set.
Configure Azure load balancer
During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to configure a standard load balancer for the
high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load balancer,
consider the following points.

1. Frontend IP Configuration: Create two front-end IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load-balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load-balancing rules.

Frontend IP address: Select frontend IP


Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create health probe with below details (applies for both
ASCS or ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.>
for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to
2. It's currently not possible to set this property by using the Azure portal, so use
either the Azure CLI or a PowerShell command.
) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity unless additional configuration is performed to allow routing
to public endpoints. For details on how to achieve outbound connectivity, see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer

health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .

Deploy Azure Files storage account and NFS shares


NFS on Azure Files runs on top of Azure Files premium storage. Before setting up NFS
on Azure Files, see How to create an NFS share.

There are two options for redundancy within an Azure region:

Locally redundant storage (LRS), which offers local, in-zone synchronous data
replication.
Zone redundant storage (ZRS), which replicates your data synchronously across the
three availability zones in the region.

Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region under Premium
Files Storage. If your scenario benefits from ZRS, verify that Premium File shares with
ZRS are supported in your Azure region.

It's recommended to access your Azure Storage account through an Azure Private
Endpoint. Make sure to deploy the Azure Files storage account endpoint and the VMs,
where you need to mount the NFS shares, in the same Azure VNet or peered Azure
VNets.

1. Deploy a File Storage account named sapafsnfs . In this example, we use ZRS. If
you're not familiar with the process, see Create a storage account for the Azure
portal.
2. In the Basics tab, use these settings:
a. For Storage account name, enter sapafsnfs .
b. For Performance, select Premium.
c. For Premium account type, select FileStorage.
d. For Replication, select zone redundancy (ZRS).
3. Select Next.
4. In the Advanced tab, deselect Require secure transfer for REST API Operations. If
you don't deselect this option, you can't mount the NFS share to your VM. The
mount operation will time out.
5. Select Next.
6. In the Networking section, configure these settings:
a. Under Networking connectivity, for Connectivity method, select Private
endpoint.
b. Under Private endpoint, select Add private endpoint.
7. In the Create private endpoint pane, select your Subscription, Resource group,
and Location. For Name, enter sapafsnfs_pe . For Storage sub-resource, select file.
Under Networking, for Virtual network, select the VNet and subnet to use. Again,
you can use the VNet where your SAP VMs are, or a peered VNet. Under Private
DNS integration, accept the default option Yes for Integrate with private DNS
zone. Make sure to select your Private DNS Zone. Select OK.
8. On the Networking tab again, select Next.
9. On the Data protection tab, keep all the default settings.
10. Select Review + create to validate your configuration.
11. Wait for the validation to finish. Fix any issues before continuing.
12. On the Review + create tab, select Create.

Next, deploy the NFS shares in the storage account you created. In this example, there
are two NFS shares, sapnw1 and saptrans .

1. Sign in to the Azure portal .


2. Select or search for Storage accounts.

3. On the Storage accounts page, select sapafsnfs.

4. On the resource menu for sapafsnfs, select File shares under Data storage.

5. On the File shares page, select File share.


a. For Name, enter sapnw1 , saptrans .
b. Select an appropriate share size, for example, 128 GiB. Consider the size of the
data stored on the share, and the IOPS and throughput requirements. For more
information, see Azure file share targets.
c. Select NFS as the protocol.
d. Select No root Squash. Otherwise, when you mount the shares on your VMs,
you can't see the file owner or group.

) Important

The share size above is just an example. Make sure to size your shares
appropriately, not only based on the amount of data stored on the share, but
also based on the requirements for IOPS and throughput. For details, see
Azure file share targets.

The SAP file systems that don't need to be mounted via NFS can also be deployed
on Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.
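If you prefer to script the deployment instead of using the Azure portal, the
following Azure CLI sketch creates a comparable storage account and NFS shares. It's
an illustration only: the resource group SAPResourceGroup, the region westeurope, and
the share sizes are assumptions to adapt to your environment, and the private
endpoint configuration described in the portal steps above still applies.

Bash

# Premium FileStorage account with ZRS; secure transfer must be disabled for NFS
az storage account create --name sapafsnfs --resource-group SAPResourceGroup \
    --location westeurope --sku Premium_ZRS --kind FileStorage --https-only false

# NFS shares for the SAP system and the transport directory
az storage share-rm create --storage-account sapafsnfs --resource-group SAPResourceGroup \
    --name sapnw1 --quota 128 --enabled-protocols NFS --root-squash NoRootSquash
az storage share-rm create --storage-account sapafsnfs --resource-group SAPResourceGroup \
    --name saptrans --quota 128 --enabled-protocols NFS --root-squash NoRootSquash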

Important considerations for NFS on Azure Files shares


When you plan your deployment with NFS on Azure Files, consider the following
important points:

The minimum share size is 100 GiB. You only pay for the capacity of the
provisioned shares.
Size your NFS shares not only based on capacity requirements, but also on IOPS
and throughput requirements. For details, see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues on Azure
Files, consult Troubleshoot Azure file shares performance.
For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on NFS on
Azure Files.
If your SAP system has a heavy batch job load, you may have millions of job logs.
If the SAP batch job logs are stored in the file system, pay special attention to the
sizing of the sapmnt share. As of SAP_BASIS 7.52, the default behavior is to store
the batch job logs in the database. For details, see Job log in the database .
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces, or saptrans .
Don't use the saptrans share for any other activity, such as interfaces, or sapmnt .
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also Storage account performance scale targets. Be careful to
not exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than 5 SAP systems in a single
storage account. This guideline helps avoid exceeding the storage account limits
and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for non-production and production
SAP systems in the same storage account.
We recommend deploying on SLES 15 SP2 or higher to benefit from NFS client
improvements.
Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions
automatically redirect to a healthy zone. You don't have to remount the NFS shares
on your VMs.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Files doesn't currently support automatic cross-region replication for
disaster recovery scenarios.

Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for SAP (A)SCS.

Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.

1. [A] Install the latest version of SUSE Connector


Bash

sudo zypper install sap-suse-cluster-connector

7 Note

The known issue with using a dash in host names is fixed with version 3.1.1 of
the sap-suse-cluster-connector package. Make sure that you're using at least
version 3.1.1 of the package if your cluster nodes have a dash in the host name.
Otherwise your cluster won't work.

Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector and the new one is called sap-
suse-cluster-connector.
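To verify the installed connector version, you can query the package:

Bash

sudo zypper info sap-suse-cluster-connector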

2. [A] Update SAP resource agents

A patch for the resource-agents package is required to use the new configuration
that is described in this article. You can check if the patch is already installed with
the following command:

Bash

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

text

<parameter name="IS_ERS" unique="0" required="0">

If the grep command does not find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page.
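The patch names depend on the SLES release. The following examples, taken from the
analogous SLES 12 setup later in this guide, illustrate the command pattern:

Bash

# example for patch for SLES 12 SP1
sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1

# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1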

3. [A] Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands

Bash
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment

text

# IP address of cluster node 1
10.90.90.7 sap-cl1
# IP address of cluster node 2
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.90.90.9 sapers

4. [1] Create the SAP directories on the NFS share.


Temporarily mount the NFS share sapnw1 on one of the VMs and create the SAP
directories that will be used as nested mount points.

Bash

# mount temporarily the volume
sudo mkdir -p /saptmp
sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys

# create the SAP directories
cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1ascs
sudo mkdir -p usrsapNW1ers
sudo mkdir -p usrsapNW1sys

# unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

Bash

sudo mkdir -p /sapmnt/NW1
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS01

sudo chattr +i /sapmnt/NW1
sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS01

2. [A] Mount the file systems that will not be controlled by the Pacemaker cluster.

Bash

vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0

# Mount the file systems
mount -a
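Optionally, confirm that the shares are mounted with the expected NFS version, for
example:

Bash

# List the mounted NFS file systems and their options
findmnt -t nfs4
df -h /sapmnt/NW1 /usr/sap/trans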

3. [A] Configure SWAP file

Bash

sudo vi /etc/waagent.conf

# Check if property ResourceDisk.Format is already set to y and if not, set it
ResourceDisk.Format=y

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

Bash

sudo service waagent restart
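Optionally, verify that the swap file on the resource disk is active after the agent
restart:

Bash

# Confirm that the swap file is in use
sudo swapon --show
free -m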

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance

) Important

We recommend using the azure-lb resource agent, which is part of package
resource-agents, with the following package version requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15 and above, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Bash

sudo crm node standby sap-cl2

sudo crm configure primitive fs_NW1_ASCS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' directory='/usr/sap/NW1/ASCS00' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
  params ip=10.90.90.10 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
  meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It is not
important on which node the resources are running.

Bash
sudo crm_mon -r
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1

2. [1] Install SAP NetWeaver ASCS

Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the ASCS,
for example sapascs, 10.90.90.10 and the instance number that you used for the
probe of the load balancer, for example 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting
the owner and group of the ASCS00 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ASCS00
chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

Bash

sudo crm node online sap-cl2
sudo crm node standby sap-cl1

sudo crm configure primitive fs_NW1_ERS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' directory='/usr/sap/NW1/ERS01' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ERS IPaddr2 \
  params ip=10.90.90.9 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ERS azure-lb port=62101 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS

Make sure that the cluster status is ok and that all resources are started. It is not
important on which node the resources are running.

Bash

sudo crm_mon -r

# Node sap-cl1: standby
# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
#  Resource Group: g-NW1_ASCS
#      fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl2
#      nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl2
#      vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl2
#  Resource Group: g-NW1_ERS
#      fs_NW1_ERS (ocf::heartbeat:Filesystem): Started sap-cl2
#      nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
#      vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2

4. [2] Install SAP NetWeaver ERS

Install SAP NetWeaver ERS as root on the second node using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, for example sapers, 10.90.90.9 and the instance number that you used for the
probe of the load balancer, for example 01.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

Bash

<swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

7 Note

Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions
correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/NW1/ERS01, try setting the
owner and group of the ERS01 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ERS01
chgrp sapsys /usr/sap/NW1/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles

ASCS/SCS profile

Bash

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs

# Change the restart command to a start command
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set
as described in SAP note 1410736 .

ERS profile

Bash

sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers

# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile
# Autostart = 1

6. [A] Configure Keep Alive

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this, set
a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1, and change
the Linux system keepalive settings on all SAP servers for both ENSA1/ENSA2.
Read SAP Note 1410736 for more information.

Bash

# Change the Linux system configuration
sudo sysctl net.ipv4.tcp_keepalive_time=300
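The sysctl command above only changes the running system. To keep the setting
across reboots, you can persist it as well; the file name below is only an example:

Bash

# Persist the keepalive setting across reboots
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/90-sap-keepalive.conf
sudo sysctl --system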

7. [A] Configure the SAP users after the installation

Bash

# Add sidadm to the haclient group
sudo usermod -aG haclient nw1adm

8. [1] Add the ASCS and ERS SAP services to the sapservice file

Add the ASCS service entry to the second node and copy the ERS service entry to
the first node.

Bash

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh sap-cl2 "cat >>/usr/sap/sapservices"
sudo ssh sap-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources


Depending on whether you are running an ENSA1 or ENSA2 system, select
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

ENSA1

Bash

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
  operations \$id=rsc_sap_NW1_ASCS00-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
  operations \$id=rsc_sap_NW1_ERS01-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false

sudo crm_attribute --delete --name priority-fencing-delay

sudo crm node online sap-cl1
sudo crm configure property maintenance-mode="false"

If you are upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .

Make sure that the cluster status is ok and that all resources are started. It is not
important on which node the resources are running.
Bash

sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started sap-cl2
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an
application server. Prepare the application server virtual machines to be able to use
them in these cases.

The steps below assume that you install the application server on a server different from
the ASCS/SCS and HANA servers. Otherwise some of the steps below (like configuring
host name resolution) are not needed.

The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] -
only applicable to PAS or [S] - only applicable to AAS.

1. [A] Configure operating system

Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .

Bash

sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. [A] Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment

Bash

10.90.90.7 sap-cl1
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP
Netweaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP
Netweaver ERS
10.90.90.9 sapers
10.90.90.12 sapa01
10.90.90.13 sapa02

3. [A] Create the sapmnt directory

Bash

sudo mkdir -p /sapmnt/NW1
sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1
sudo chattr +i /usr/sap/trans

4. [A] Mount the file systems

Bash

vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0

# Mount the file systems
mount -a

5. [A] Configure SWAP file


Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

Bash

sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .

Install the SAP NetWeaver database instance as root using a virtual hostname that maps
to the IP address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root
user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.

1. [A] Prepare the application server. Follow the steps in the chapter SAP NetWeaver
application server preparation above to prepare the application server.
2. [A] Install SAP NetWeaver application server.
Install a primary or additional SAP NetWeaver application server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update SAP HANA secure store

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries

Bash

hdbuserstore List

The command should list all entries and should look similar to

text

DATA FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.DAT
KEY FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.KEY

KEY DEFAULT
  ENV : 10.90.90.5:30313
  USER: SAPABAP1
  DATABASE: NW1

In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual hostname of the load balancer.
Make sure to use the same port and database name. For example, 30313 and NW1
in the sample output.

Bash

su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP schema>
Test cluster setup
Thoroughly test your Pacemaker cluster. Execute the typical failover tests.
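As a minimal example of such a test, the following sketch, using the resource and node
names from this article's configuration, migrates the ASCS resource to the other node
and verifies the result:

Bash

# Migrate the ASCS resource to the other node
sudo crm resource migrate rsc_sap_NW1_ASCS00 force

# Watch the failover complete
sudo crm_mon -r

# Remove the migration constraint afterwards
sudo crm resource unmigrate rsc_sap_NW1_ASCS00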

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP
applications
Article • 01/18/2024

This article explains how to configure high availability for SAP NetWeaver application
with Azure NetApp Files.

For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in simple mount configuration. The classic Pacemaker
configuration, based on cluster-controlled file systems for the SAP central services
directories, described in this article is still supported .

In the example configurations, installation commands, and so on, the ASCS instance is
number 00, the ERS instance number 01, the Primary Application Server instance (PAS)
is 02, and the Additional Application Server instance (AAS) is 03. SAP System ID QAS is
used. The database layer isn't covered in detail in this article.

Read the following SAP Notes and papers first:

Azure NetApp Files documentation


SAP Note 1928533 , which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server
for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides . The guides contain all required information to
set up NetWeaver HA and SAP HANA System Replication on-premises. Use these
guides as a general baseline; they provide much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files
NetApp NFS Best Practices

Overview
High availability (HA) for SAP NetWeaver central services requires shared storage.
Previously, achieving that on SUSE Linux required building a separate highly available
NFS cluster.

Now it's possible to achieve SAP NetWeaver HA by using shared storage deployed on
Azure NetApp Files. Using Azure NetApp Files for the shared storage eliminates the
need for an additional NFS cluster. Pacemaker is still needed for HA of the SAP
NetWeaver central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Standard load balancer. The
presented configuration shows a load balancer with:

Frontend IP address 10.1.1.20 for ASCS


Frontend IP address 10.1.1.21 for ERS
Probe port 62000 for ASCS
Probe port 62101 for ERS
Setting up the Azure NetApp Files
infrastructure
SAP NetWeaver requires shared storage for the transport and profile directory. Before
proceeding with the setup of the Azure NetApp Files infrastructure, familiarize yourself
with the Azure NetApp Files documentation. Check if your selected Azure region offers
Azure NetApp Files. The following link shows the availability of Azure NetApp Files by
Azure region: Azure NetApp Files Availability by Azure Region .

Azure NetApp Files is available in several Azure regions .

Deploy Azure NetApp Files resources


The steps assume that you have already deployed Azure Virtual Network. The Azure
NetApp Files resources and the VMs, where the Azure NetApp Files resources will be
mounted must be deployed in the same Azure Virtual Network or in peered Azure
Virtual Networks.

1. Create the NetApp account in the selected Azure region, following the instructions
to create NetApp Account.
2. Set up Azure NetApp Files capacity pool, following the instructions on how to set
up Azure NetApp Files capacity pool.
The SAP NetWeaver architecture presented in this article uses a single Azure NetApp
Files capacity pool with the Premium SKU. We recommend the Azure NetApp Files
Premium SKU for SAP NetWeaver application workloads on Azure.
3. Delegate a subnet to Azure NetApp files as described in the instructions Delegate
a subnet to Azure NetApp Files.
4. Deploy Azure NetApp Files volumes, following the instructions to create a volume
for Azure NetApp Files. Deploy the volumes in the designated Azure NetApp Files
subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure Virtual Network or in peered Azure Virtual Networks. In this
example we use two Azure NetApp Files volumes: sapQAS and trans. The file paths
that are mounted to the corresponding mount points are /usrsapqas/sapmntQAS,
/usrsapqas/usrsapQASsys, etc.
a. volume sapQAS (nfs://10.1.0.4/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASers)
e. volume trans (nfs://10.1.0.4/trans)
f. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASaas)

In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to
demonstrate how Azure NetApp Files can be used. The SAP file systems that don't need
to be mounted via NFS can also be deployed as Azure disk storage . In this example, a-e
must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02, /usr/sap/QAS/D03)
could be deployed as Azure disk storage.

Important considerations
When considering Azure NetApp Files for the SAP NetWeaver on SUSE High Availability
architecture, be aware of the following important considerations:

The minimum capacity pool size is 4 TiB. The capacity pool size can be increased in
1-TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will
be mounted must be in the same Azure Virtual Network or in peered virtual
networks in the same region. Azure NetApp Files access over VNET peering in the
same region is supported now. Azure NetApp Files access over global peering isn't
yet supported.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service level
for Azure NetApp Files. While sizing the SAP Azure NetApp Files volumes, make sure
that the resulting throughput meets the application requirements.
Azure NetApp Files offers export policy: you can control the allowed clients and the
access type (Read & Write, Read Only, and so on).
The Azure NetApp Files feature isn't zone aware yet. Currently, Azure NetApp Files
isn't deployed in all availability zones in an Azure region. Be aware of the
potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).

Prepare infrastructure
The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.
Deploy Linux VMs manually via Azure portal
This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy virtual machines with SLES for SAP Applications image. Choose a suitable version
of SLES image that is supported for SAP system. You can deploy VM in any one of the
availability options - virtual machine scale set, availability zone, or availability set.

Configure Azure load balancer


During VM configuration, you have an option to create or select an existing load
balancer in the networking section. Follow the steps below to configure a standard
load balancer for the high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow create load balancer guide to set up a standard load balancer for a high
availability SAP system using the Azure portal. During the setup of load balancer,
consider following points.

1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.

Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create health probe with below details (applies for both
ASCS or ERS)
  Protocol: TCP
  Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.> for ERS]
  Interval: 5
  Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"
7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in Portal, isn't respected. So to control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It is currently not possible to set this property using Azure portal, so use
either the Azure CLI or PowerShell command.
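For example, with the Azure CLI you can set the probe threshold when creating or
updating the health probe; the resource names below are placeholders:

Bash

# Set probeThreshold on an existing health probe
az network lb probe update --resource-group MyResourceGroup --lb-name MyLoadBalancer \
    --name MyHealthProbe --probe-threshold 2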

) Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details see Azure Load balancer Limitations. If you need an additional
IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune to version 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .

Disable ID mapping (if using NFSv4.1)


The instructions in this section are only applicable, if using Azure NetApp Files volumes
with NFSv4.1 protocol. Perform the configuration on all VMs, where Azure NetApp Files
NFSv4.1 volumes will be mounted.

1. Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com , and the
mapping is set to nobody.

) Important

Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match
the default domain configuration on Azure NetApp Files:
defaultv4iddomain.com . If there's a mismatch between the domain
configuration on the NFS client (that is, the VM) and the NFS server (that is,
the Azure NetApp Files configuration), then the permissions for files on Azure
NetApp Files volumes that are mounted on the VMs will be displayed as nobody .

Bash

sudo cat /etc/idmapd.conf

# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

2. [A] Verify nfs4_disable_idmapping . It should be set to Y. To create the directory
structure where nfs4_disable_idmapping is located, execute the mount command.
You won't be able to manually create the directory under /sys/modules, because
access is reserved for the kernel / drivers.

Bash

# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.1.0.4:/sapmnt/qas /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping

# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf

Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for this (A)SCS server.

Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.

1. [A] Install SUSE Connector

Bash

sudo zypper install sap-suse-cluster-connector

7 Note

The known issue with using a dash in host names is fixed with version 3.1.1 of
package sap-suse-cluster-connector. Make sure that you are using at least
version 3.1.1 of package sap-suse-cluster-connector, if using cluster nodes
with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector and the new one is called sap-
suse-cluster-connector.

Bash

sudo zypper info sap-suse-cluster-connector

# Information for package sap-suse-cluster-connector:
# ---------------------------------------------------
# Repository : SLE-12-SP3-SAP-Updates
# Name : sap-suse-cluster-connector
# Version : 3.1.0-8.1
# Arch : noarch
# Vendor : SUSE LLC <https://www.suse.com/>
# Support Level : Level 3
# Installed Size : 45.6 KiB
# Installed : Yes
# Status : up-to-date
# Source package : sap-suse-cluster-connector-3.1.0-8.1.src
# Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents

A patch for the resource-agents package is required to use the new configuration
that is described in this article. You can check if the patch is already installed with
the following command:

Bash

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

text

<parameter name="IS_ERS" unique="0" required="0">

If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page .

Bash

# example for patch for SLES 12 SP1
sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1

# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands

Bash
sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment

text

# IP address of cluster node 1
10.1.1.18 anftstsapcl1
# IP address of cluster node 2
10.1.1.6 anftstsapcl2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers

4. [1] Create SAP directories in the Azure NetApp Files volume.

Temporarily mount the Azure NetApp Files volume on one of the VMs and create
the SAP directories (file paths).

Bash

# mount temporarily the volume
sudo mkdir -p /saptmp

# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 10.1.0.4:/sapQAS /saptmp

# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 10.1.0.4:/sapQAS /saptmp

# create the SAP directories
cd /saptmp
sudo mkdir -p sapmntQAS
sudo mkdir -p usrsapQASascs
sudo mkdir -p usrsapQASers
sudo mkdir -p usrsapQASsys
sudo mkdir -p usrsapQASpas
sudo mkdir -p usrsapQASaas

# unmount the volume and delete the temporary directory
cd ..
sudo umount /saptmp
sudo rmdir /saptmp
Prepare for SAP NetWeaver installation
1. [A] Create the shared directories

Bash

sudo mkdir -p /sapmnt/QAS
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/QAS/SYS
sudo mkdir -p /usr/sap/QAS/ASCS00
sudo mkdir -p /usr/sap/QAS/ERS01

sudo chattr +i /sapmnt/QAS
sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/QAS/SYS
sudo chattr +i /usr/sap/QAS/ASCS00
sudo chattr +i /usr/sap/QAS/ERS01

2. [A] Configure autofs

Bash

sudo vi /etc/auto.master

# Add the following line to the file, save and exit
/- /etc/auto.direct

If using NFSv3, create a file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASsys

If using NFSv4.1, create a file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/SYS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASsys

7 Note

Make sure to match the NFS protocol version of the Azure NetApp Files
volumes, when mounting the volumes. If the Azure NetApp Files volumes are
created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the
Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the
instructions to disable ID mapping and make sure to use the corresponding
NFSv4.1 configuration. In this example the Azure NetApp Files volumes were
created as NFSv3 volumes.

Restart autofs to mount the new shares

Bash

sudo systemctl enable autofs
sudo service autofs restart
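Because autofs mounts the shares on first access, you can optionally trigger and
verify the mounts, for example:

Bash

# Accessing the paths triggers the automount
ls /sapmnt/QAS /usr/sap/trans /usr/sap/QAS/SYS

# Verify the NFS mounts and their options
df -h -t nfs -t nfs4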

3. [A] Configure SWAP file

Bash

sudo vi /etc/waagent.conf

# Check if property ResourceDisk.Format is already set to y and if not, set it
ResourceDisk.Format=y

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

Bash
sudo service waagent restart

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance

) Important

Recent testing revealed situations where netcat stops responding to requests
due to backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load balancer requests and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat
with socat. Currently we recommend using the azure-lb resource agent, which is
part of package resource-agents, with the following package version
requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Note that the change will require brief downtime.

For existing Pacemaker clusters, if the configuration was already changed to
use socat as described in Azure Load-Balancer Detection Hardening , there
is no requirement to switch immediately to the azure-lb resource agent.

Bash

sudo crm node standby anftstsapcl2

# If using NFSv3
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs' directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs' directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,nfsvers=4.1' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=105s

sudo crm configure primitive vip_QAS_ASCS IPaddr2 \
  params ip=10.1.1.20 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ASCS azure-lb port=62000 \
  op monitor timeout=20s interval=10

sudo crm configure group g-QAS_ASCS fs_QAS_ASCS nc_QAS_ASCS vip_QAS_ASCS \
  meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Node anftstsapcl2: standby
# Online: [ anftstsapcl1 ]
#
# Full list of resources:
#
#  Resource Group: g-QAS_ASCS
#      fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
#      nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
#      vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2

2. [1] Install SAP NetWeaver ASCS

Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the ASCS,
for example anftstsapvh, 10.1.1.20 and the instance number that you used for the
probe of the load balancer, for example 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the
owner and group of the ASCS00 folder and retry.

Bash

chown qasadm /usr/sap/QAS/ASCS00
chgrp sapsys /usr/sap/QAS/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance.

Bash

sudo crm node online anftstsapcl2
sudo crm node standby anftstsapcl1

# If using NFSv3
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers' directory='/usr/sap/QAS/ERS01' fstype='nfs' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASers' directory='/usr/sap/QAS/ERS01' fstype='nfs' options='sec=sys,nfsvers=4.1' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=105s

sudo crm configure primitive vip_QAS_ERS IPaddr2 \
  params ip=10.1.1.21 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_QAS_ERS azure-lb port=62101 \
  op monitor timeout=20s interval=10

sudo crm configure group g-QAS_ERS fs_QAS_ERS nc_QAS_ERS vip_QAS_ERS

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Node anftstsapcl1: standby
# Online: [ anftstsapcl2 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
#  Resource Group: g-QAS_ASCS
#      fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
#      nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
#      vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
#  Resource Group: g-QAS_ERS
#      fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
#      nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
#      vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2

4. [2] Install SAP NetWeaver ERS

Install SAP NetWeaver ERS as root on the second node using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, for example anftstsapers, 10.1.1.21 and the instance number that you used for
the probe of the load balancer, for example 01.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>

7 Note

Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions
correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the
owner and group of the ERS01 folder and retry.

Bash

chown qasadm /usr/sap/QAS/ERS01
chgrp sapsys /usr/sap/QAS/ERS01

5. [1] Adapt the ASCS/SCS and ERS instance profiles

ASCS/SCS profile

Bash

sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh

# Change the restart command to a start command
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 .

ERS profile

Bash

sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers

# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile
# Autostart = 1

6. [A] Configure Keep Alive

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this you
need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1,
and change the Linux system keepalive settings on all SAP servers for both
ENSA1/ENSA2. Read SAP Note 1410736 for more information.

Bash

# Change the Linux system configuration
sudo sysctl net.ipv4.tcp_keepalive_time=300

7. [A] Configure the SAP users after the installation

Bash

# Add sidadm to the haclient group
sudo usermod -aG haclient qasadm

8. [1] Add the ASCS and ERS SAP services to the sapservice file

Add the ASCS service entry to the second node and copy the ERS service entry to
the first node.

Bash

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh anftstsapcl2 "cat >>/usr/sap/sapservices"
sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices

9. [1] Create the SAP cluster resources.

Depending on whether you are running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .

ENSA1

Bash

sudo crm configure property maintenance-mode="true"

# If using NFSv3
sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
  operations \$id=rsc_sap_QAS_ASCS00-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

# If using NFSv4.1
sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
  operations \$id=rsc_sap_QAS_ASCS00-operations \
  op monitor interval=11 timeout=105 on-fail=restart \
  params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=105 migration-threshold=1 priority=10

# If using NFSv3
sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
  operations \$id=rsc_sap_QAS_ERS01-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

# If using NFSv4.1
sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
  operations \$id=rsc_sap_QAS_ERS01-operations \
  op monitor interval=11 timeout=105 on-fail=restart \
  params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01

sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq 1
sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false

sudo crm_attribute --delete --name priority-fencing-delay

sudo crm node online anftstsapcl1
sudo crm configure property maintenance-mode="false"

If you're upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .
7 Note

The higher timeouts suggested when using NFSv4.1 are necessary due to a
protocol-specific pause related to NFSv4.1 lease renewals. For more information,
see NFS in NetApp Best practice .

The timeouts in the above configuration may need to be adapted to the specific
SAP setup.

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Full list of resources:


#
# stonith-sbd (stonith:external/sbd): Started anftstsapcl2
# Resource Group: g-QAS_ASCS
# fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started
anftstsapcl1
# nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started
anftstsapcl1
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started
anftstsapcl1
# rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started
anftstsapcl1
# Resource Group: g-QAS_ERS
# fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
# nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
# vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started
anftstsapcl2
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started
anftstsapcl2

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an
application server. Prepare the application server virtual machines to be able to use
them in these cases.

The steps below assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise some of the steps below (like
configuring host name resolution) aren't needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] -
only applicable to PAS or [S] - only applicable to AAS.

1. [A] Configure operating system

Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .

Bash

sudo vi /etc/sysctl.conf

# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. [A] Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment

text

# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.1.1.20 anftstsapvh
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.1.1.21 anftstsapers
# IP address of all application servers
10.1.1.15 anftstsapa01
10.1.1.16 anftstsapa02

3. [A] Create the sapmnt directory

Bash

sudo mkdir -p /sapmnt/QAS
sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/QAS
sudo chattr +i /usr/sap/trans

4. [P] Create the PAS directory

Bash

sudo mkdir -p /usr/sap/QAS/D02
sudo chattr +i /usr/sap/QAS/D02

5. [S] Create the AAS directory

Bash

sudo mkdir -p /usr/sap/QAS/D03
sudo chattr +i /usr/sap/QAS/D03

6. [P] Configure autofs on PAS

Bash

sudo vi /etc/auto.master

# Add the following line to the file, save and exit
/- /etc/auto.direct

If using NFSv3, create a new file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASpas

If using NFSv4.1, create a new file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas

Restart autofs to mount the new shares

Bash

sudo systemctl enable autofs
sudo service autofs restart

7. [S] Configure autofs on AAS

Bash

sudo vi /etc/auto.master

# Add the following line to the file, save and exit
/- /etc/auto.direct

If using NFSv3, create a new file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=3,nobind 10.1.0.4:/usrsapqas/usrsapQASaas

If using NFSv4.1, create a new file with:

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D03 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASaas

Restart autofs to mount the new shares

Bash

sudo systemctl enable autofs
sudo service autofs restart

8. [A] Configure SWAP file

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
# The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

Bash

sudo service waagent restart
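
After the agent restart, you can verify that the swap file on the resource disk is active (standard Linux commands):

Bash

sudo swapon -s
free -m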

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .

Run the SAP database instance installation

Install the SAP NetWeaver database instance as root using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the
database.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.

1. [A] Prepare application server

Follow the steps in the chapter SAP NetWeaver application server preparation
above to prepare the application server.

2. [A] Install SAP NetWeaver application server

Install a primary or additional SAP NetWeaver application server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin

3. [A] Update SAP HANA secure store

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries

Bash

hdbuserstore List

This should list all entries and should look similar to

text

DATA FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT


KEY FILE : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS

The output shows that the IP address of the default entry is pointing to the virtual
machine and not to the load balancer's IP address. This entry needs to be changed
to point to the virtual hostname of the load balancer. Make sure to use the same
port (30313 in the output above) and database name (QAS in the output above)!

Bash

su - qasadm

hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
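
You can run hdbuserstore List again to confirm that the ENV field of the DEFAULT key now points to the virtual hostname instead of the virtual machine's IP address.

Bash

hdbuserstore List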

Test the cluster setup


Thoroughly test your Pacemaker cluster. Execute the typical failover tests.

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server for SAP applications
Article • 01/18/2024

This article describes how to deploy the virtual machines, configure the virtual machines,
install the cluster framework, and install a highly available SAP NetWeaver or SAP ABAP
platform-based system. In the example configurations, ASCS instance number 00, ERS
instance number 02, and SAP System ID NW1 are used.

For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in a simple mount configuration. The classic Pacemaker
configuration, based on cluster-controlled file systems for the SAP central services
directories, described in this article, is still supported.

Read the following SAP Notes and papers first

SAP Note 1928533 , which has:


List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server
for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides. The guides contain all required information to
set up NetWeaver HA and SAP HANA System Replication on-premises. Use these
guides as a general baseline; they provide much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes

Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is
configured in a separate cluster and can be used by multiple SAP systems.

The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and
the SAP HANA database use virtual hostnames and virtual IP addresses. On Azure, a load
balancer is required to use a virtual IP address. We recommend using a standard load
balancer. The presented configuration shows a load balancer with:
Frontend IP address 10.0.0.7 for ASCS
Frontend IP address 10.0.0.8 for ERS
Probe port 62000 for ASCS
Probe port 62102 for ERS

Setting up a highly available NFS server

7 Note

We recommend deploying one of the Azure first-party NFS services: NFS on Azure
Files or NFS ANF volumes for storing shared data in a highly available SAP system.
Be aware that we're de-emphasizing SAP reference architectures that utilize NFS
clusters.
The SAP configuration guides for a highly available SAP NW system with native
NFS services are:

High availability for SAP NW on Azure VMs with simple mount and NFS on SLES
for SAP Applications
High availability for SAP NW on Azure VMs with NFS on Azure Files on SLES
for SAP Applications
High availability for SAP NW on Azure VMs with NFS on Azure NetApp Files
on SLES for SAP Applications

SAP NetWeaver requires shared storage for the transport and profile directory. Read
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server on how to set
up an NFS server for SAP NetWeaver.

Prepare infrastructure
The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.
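
For example, you can list the available SLES for SAP Applications images with the Azure CLI. This is a hedged sketch: the exact publisher, offer, and SKU names change over time, so check the current Marketplace catalog.

Bash

az vm image list --publisher SUSE --all --output table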

Deploy Linux VMs manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.
Deploy virtual machines with the SLES for SAP Applications image. Choose a suitable version
of the SLES image that is supported for your SAP system. You can deploy the VMs in any of
the availability options: virtual machine scale set, availability zone, or availability set.

Configure Azure load balancer


During VM configuration, you can create or select an existing load balancer in the
networking section. Follow the steps below to configure a standard load balancer for the
high-availability setup of SAP ASCS and SAP ERS.

Azure portal

Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points.

1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.

Frontend IP address: Select frontend IP


Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the details below (applies to both
ASCS and ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.>
for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

7 Note

Health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in Portal, isn't respected. So to control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It is currently not possible to set this property using the Azure portal, so use
either the Azure CLI or PowerShell command.
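
For example, a minimal Azure CLI sketch (the resource group, load balancer, and probe names are placeholders, and the --probe-threshold argument requires a recent Azure CLI version):

Bash

az network lb probe update --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer --name ascs-health-probe \
  --protocol tcp --port 62000 --interval 5 --probe-threshold 2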

) Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load balancer Limitations. If you need an additional
IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

) Important

Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer

health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .
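
For example, to set the parameter immediately and persist it across reboots, a minimal sketch using standard sysctl handling:

Bash

sudo sysctl net.ipv4.tcp_timestamps=0
sudo sh -c 'echo "net.ipv4.tcp_timestamps = 0" >> /etc/sysctl.conf'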

Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for this (A)SCS server.

Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1, or [2] - only applicable to node 2.

1. [A] Install SUSE Connector

Bash

sudo zypper install sap-suse-cluster-connector

7 Note

The known issue with using a dash in host names is fixed with version 3.1.1 of
package sap-suse-cluster-connector. Make sure that you are using at least
version 3.1.1 of package sap-suse-cluster-connector, if using cluster nodes
with dash in the host name. Otherwise your cluster will not work.

Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector and the new one is called sap-
suse-cluster-connector.

Bash

sudo zypper info sap-suse-cluster-connector

Information for package sap-suse-cluster-connector:
---------------------------------------------------
Repository : SLE-12-SP3-SAP-Updates
Name : sap-suse-cluster-connector
Version : 3.0.0-2.2
Arch : noarch
Vendor : SUSE LLC <https://www.suse.com/>
Support Level : Level 3
Installed Size : 41.6 KiB
Installed : Yes
Status : up-to-date
Source package : sap-suse-cluster-connector-3.0.0-2.2.src
Summary : SUSE High Availability Setup for SAP Products

2. [A] Update SAP resource agents


A patch for the resource-agents package is required to use the new configuration
that is described in this article. You can check whether the patch is already installed
with the following command.

Bash

sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

The output should be similar to

text

<parameter name="IS_ERS" unique="0" required="0">

If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page .

Bash

# example for patch for SLES 12 SP1


sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1

3. [A] Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.

Bash

sudo vi /etc/hosts

# Insert the following lines to /etc/hosts. Change the IP address and
# hostname to match your environment
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db

Prepare for SAP NetWeaver installation


1. [A] Create the shared directories

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS02

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
sudo chattr +i /usr/sap/NW1/SYS
sudo chattr +i /usr/sap/NW1/ASCS00
sudo chattr +i /usr/sap/NW1/ERS02

2. [A] Configure autofs

Bash

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a file with

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans
/usr/sap/NW1/SYS -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sidsys

Restart autofs to mount the new shares

Bash
sudo systemctl enable autofs
sudo service autofs restart

3. [A] Configure SWAP file

Create a swap file as defined in Create a SWAP file for an Azure Linux VM

Bash

#!/bin/sh

# Percent of space on the ephemeral disk to dedicate to swap. Here 30%
# is being used. Modify as appropriate.
PCT=0.3

# Location of swap file. Modify as appropriate based on location of
# ephemeral disk.
LOCATION=/mnt

if [ ! -f ${LOCATION}/swapfile ]
then
    # Get size of the ephemeral disk and multiply it by the percent of
    # space to allocate
    size=$(/bin/df -m --output=target,avail | /usr/bin/awk -v percent="$PCT" -v pattern=${LOCATION} '$0 ~ pattern {SIZE=int($2*percent);print SIZE}')
    echo "$size MB of space allocated to swap file"

    # Create an empty file first and set correct permissions
    /bin/dd if=/dev/zero of=${LOCATION}/swapfile bs=1M count=$size
    /bin/chmod 0600 ${LOCATION}/swapfile

    # Make the file available to use as swap
    /sbin/mkswap ${LOCATION}/swapfile
fi

# Enable swap
/sbin/swapon ${LOCATION}/swapfile
/sbin/swapon -a

# Display current swap status
/sbin/swapon -s

Make the file executable.

Bash

chmod +x /var/lib/cloud/scripts/per-boot/swap.sh

Stop and start the VM. Stopping and starting the VM is only necessary the first
time after you create the SWAP file.

Installing SAP NetWeaver ASCS/ERS


1. [1] Create a virtual IP resource and health-probe for the ASCS instance

) Important

Recent testing revealed situations where netcat stops responding to requests
due to backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load balancer requests and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat
with socat. Currently we recommend using the azure-lb resource agent, which is
part of package resource-agents, with the following package version
requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Note that the change will require brief downtime.

For existing Pacemaker clusters, if the configuration was already changed to
use socat as described in Azure Load-Balancer Detection Hardening , there
is no requirement to switch immediately to the azure-lb resource agent.

Bash

sudo crm node standby nw1-cl-1

sudo crm configure primitive fs_NW1_ASCS Filesystem device='nw1-nfs:/NW1/ASCS' directory='/usr/sap/NW1/ASCS00' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
  params ip=10.0.0.7 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
  meta resource-stickiness=3000

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Node nw1-cl-1: standby
# Online: [ nw1-cl-0 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-0
# Resource Group: g-NW1_ASCS
#      fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
#      nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
#      vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0

2. [1] Install SAP NetWeaver ASCS

Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the ASCS,
for example nw1-ascs, 10.0.0.7 and the instance number that you used for the
probe of the load balancer, for example 00.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/NW1/ASCS00, try setting
the owner and group of the ASCS00 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ASCS00


chgrp sapsys /usr/sap/NW1/ASCS00

3. [1] Create a virtual IP resource and health-probe for the ERS instance

Bash

sudo crm node online nw1-cl-1
sudo crm node standby nw1-cl-0

sudo crm configure primitive fs_NW1_ERS Filesystem device='nw1-nfs:/NW1/ASCSERS' directory='/usr/sap/NW1/ERS02' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW1_ERS IPaddr2 \
  params ip=10.0.0.8 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_ERS azure-lb port=62102 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Node nw1-cl-0: standby
# Online: [ nw1-cl-1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
#      fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
#      nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
#      vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# Resource Group: g-NW1_ERS
#      fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
#      nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
#      vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
4. [2] Install SAP NetWeaver ERS

Install SAP NetWeaver ERS as root on the second node using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, for example nw1-aers, 10.0.0.8 and the instance number that you used for the
probe of the load balancer, for example 02.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

7 Note

Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions
correctly and the installation will fail.

If the installation fails to create a subfolder in /usr/sap/NW1/ERS02, try setting the
owner and group of the ERS02 folder and retry.

Bash

chown nw1adm /usr/sap/NW1/ERS02


chgrp sapsys /usr/sap/NW1/ERS02

5. [1] Adapt the ASCS/SCS and ERS instance profiles

ASCS/SCS profile

Bash

sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector =
/usr/bin/sap_suse_cluster_connector
# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 .

ERS profile

Bash

sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector =
/usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure Keep Alive

The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this,
you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1,
and change the Linux system keepalive settings on all SAP servers for both
ENSA1/ENSA2. Read SAP Note 1410736 for more information.

Bash

# Change the Linux system configuration


sudo sysctl net.ipv4.tcp_keepalive_time=300
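
The sysctl command above changes the value only for the running system. To persist it across reboots, a minimal sketch (confirm the value against SAP Note 1410736):

Bash

sudo sh -c 'echo "net.ipv4.tcp_keepalive_time = 300" >> /etc/sysctl.conf'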

7. [A] Configure the SAP users after the installation

Bash

# Add sidadm to the haclient group


sudo usermod -aG haclient nw1adm

8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to
the first node.

Bash

cat /usr/sap/sapservices | grep ASCS00 | sudo ssh nw1-cl-1 "cat >>/usr/sap/sapservices"
sudo ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices
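
To verify that both nodes now contain both service entries, you can check the file on each node (a quick sanity check using the example hostnames):

Bash

grep -E 'ASCS00|ERS02' /usr/sap/sapservices
sudo ssh nw1-cl-1 "grep -E 'ASCS00|ERS02' /usr/sap/sapservices"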

9. [1] Create the SAP cluster resources

Depending on whether you are running an ENSA1 or ENSA2 system, select the
respective tab to define the resources. SAP introduced support for ENSA2 ,
including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809,
ENSA2 is installed by default. For ENSA2 support, see SAP Note 2630416 .
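
If you're unsure which enqueue architecture your system runs, one hedged way to check is to look for the enqueue server process on the ASCS node: ENSA1 uses an en.sap<SID> process and ENSA2 uses enq.sap<SID>, as also used in the failover tests later in this article.

Bash

pgrep -fl 'en\.sapNW1|enq\.sapNW1'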

ENSA1

Bash

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
  operations \$id=rsc_sap_NW1_ASCS00-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
  operations \$id=rsc_sap_NW1_ERS02-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02

sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false
sudo crm_attribute --delete --name priority-fencing-delay

sudo crm node online nw1-cl-0
sudo crm configure property maintenance-mode="false"

If you're upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .

Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo crm_mon -r

# Online: [ nw1-cl-0 nw1-cl-1 ]


#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

SAP NetWeaver application server preparation


Some databases require that the database instance installation is executed on an
application server. Prepare the application server virtual machines to be able to use
them in these cases.

The steps below assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise some of the steps below (like
configuring host name resolution) aren't needed.

1. Configure operating system

Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .
Bash

sudo vi /etc/sysctl.conf

# Change/set the following settings


vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

2. Set up host name resolution

You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands

Bash

sudo vi /etc/hosts

Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment

text

# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nw1-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.7 nw1-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.8 nw1-aers
# IP address of the load balancer frontend configuration for database
10.0.0.13 nw1-db
# IP addresses of all application servers
10.0.0.20 nw1-di-0
10.0.0.21 nw1-di-1

3. Create the sapmnt directory

Bash

sudo mkdir -p /sapmnt/NW1


sudo mkdir -p /usr/sap/trans

sudo chattr +i /sapmnt/NW1


sudo chattr +i /usr/sap/trans
4. Configure autofs

Bash

sudo vi /etc/auto.master

# Add the following line to the file, save and exit


+auto.master
/- /etc/auto.direct

Create a new file with

Bash

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit


/sapmnt/NW1 -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/sapmntsid
/usr/sap/trans -nfsvers=4,nosymlink,sync nw1-nfs:/NW1/trans

Restart autofs to mount the new shares

Bash

sudo systemctl enable autofs


sudo service autofs restart

5. Configure SWAP file

Bash

sudo vi /etc/waagent.conf

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB

# The free space of the resource disk varies by virtual machine size. Make
# sure that you do not set a value that is too big. You can check the
# SWAP space with command swapon
# Size of the swapfile.
ResourceDisk.SwapSizeMB=2000

Restart the Agent to activate the change

Bash
sudo service waagent restart

Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .

1. Run the SAP database instance installation

Install the SAP NetWeaver database instance as root using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the
database, for example, nw1-db and 10.0.0.13.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

SAP NetWeaver application server installation


Follow these steps to install an SAP application server.

1. Prepare application server

Follow the steps in the chapter SAP NetWeaver application server preparation
above to prepare the application server.

2. Install SAP NetWeaver application server

Install a primary or additional SAP NetWeaver applications server.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst.

Bash

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

3. Update SAP HANA secure store

Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.

Run the following command to list the entries

Bash

hdbuserstore List

This should list all entries and should look similar to

text

DATA FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.DAT


KEY FILE : /home/nw1adm/.hdb/nw1-di-0/SSFS_HDB.KEY

KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1

The output shows that the IP address of the default entry is pointing to the virtual
machine and not to the load balancer's IP address. This entry needs to be changed
to point to the virtual hostname of the load balancer. Make sure to use the same
port (30313 in the output above) and database name (HN1 in the output above)!

Bash

su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>

Test the cluster setup


The following tests are a copy of the test cases in the best practices guides of SUSE.
They're copied here for your convenience. Always read the best practices guides as well,
and perform any additional tests that might have been added.

1. Test HAGetFailoverConfig, HACheckConfig and HACheckFailoverConfig


Run the following commands as <sapsid>adm on the node where the ASCS
instance is currently running. If the commands fail with FAIL: Insufficient memory, it
might be caused by dashes in your hostname. This is a known issue and will be
fixed by SUSE in the sap-suse-cluster-connector package.

Bash

nw1-cl-0:nw1adm 54> sapcontrol -nr 00 -function HAGetFailoverConfig

# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector
3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1

nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -function HACheckConfig

# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with active ABAP SPOOL service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with active ABAP BATCH service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with active ABAP DIALOG service on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with active ABAP UPDATE service on multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-ascs_NW1_00), SAPInstance includes is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00), Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00), Enqueue replication active

nw1-cl-0:nw1adm 56> sapcontrol -nr 00 -function HACheckFailoverConfig

# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-0

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Run the following commands as root to migrate the ASCS instance.

Bash

nw1-cl-0:~ # crm resource migrate rsc_sap_NW1_ASCS00 force


# INFO: Move constraint created for rsc_sap_NW1_ASCS00

nw1-cl-0:~ # crm resource unmigrate rsc_sap_NW1_ASCS00


# INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-0

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

3. Test HAFailoverToNode

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-0

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as <sapsid>adm to migrate the ASCS instance.

Bash

nw1-cl-0:nw1adm 55> sapcontrol -nr 00 -host nw1-ascs -user nw1adm <password> -function HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00

Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-0

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

4. Simulate node crash

Resource state before starting the test:


Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-0

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following command as root on the node where the ASCS instance is
running

Bash

nw1-cl-0:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker shouldn't automatically start on the killed node. The
status after the node is started again should look like this.

Bash

Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]

Full list of resources:

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7):
call=219, status=complete, exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD
messages, and clean the failed resources.

Bash

# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

nw1-cl-0:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message nw1-cl-0 clear

nw1-cl-0:~ # systemctl start pacemaker


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

5. Blocking network communication

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Execute firewall rule to block the communication on one of the nodes.

Bash

# Execute iptables rule on nw1-cl-0 (10.0.0.5) to block the incoming and
# outgoing traffic to nw1-cl-1 (10.0.0.6)
iptables -A INPUT -s 10.0.0.6 -j DROP; iptables -A OUTPUT -d 10.0.0.6 -j DROP

When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, cluster nodes try to simultaneously fence each
other, resulting in a fence race.

When configuring a fencing device, it's recommended to configure the
pcmk_delay_max property. In the event of a split-brain scenario, the cluster then
introduces a random delay of up to the pcmk_delay_max value to the fencing action
on each node. The node with the shortest delay is selected for fencing.

Additionally, in an ENSA2 configuration, to prioritize the node hosting the ASCS
resource over the other node during a split-brain scenario, it's recommended to
configure the priority-fencing-delay property in the cluster. Enabling the priority-
fencing-delay property allows the cluster to introduce an additional delay in the
fencing action specifically on the node hosting the ASCS resource, allowing the
ASCS node to win the fence race.
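
As an illustration, here's a hedged crmsh sketch for these two settings. The delay values are examples only, and the stonith-sbd resource name is taken from this article's configuration; tune both for your environment.

Bash

# Random fencing delay of up to 15 seconds on the SBD fencing resource
sudo crm resource param stonith-sbd set pcmk_delay_max 15

# Prioritize the node that hosts the ASCS resource in a fence race (ENSA2)
sudo crm configure property priority-fencing-delay=30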

Execute the command below to delete the firewall rule.

Bash

# If the iptables rule set on the server gets reset after a reboot, the
# rules will be cleared out. In case they have not been reset, please
# proceed to remove the iptables rule using the following command.
iptables -D INPUT -s 10.0.0.6 -j DROP; iptables -D OUTPUT -d 10.0.0.6 -j DROP

6. Test manual restart of ASCS instance

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Create an enqueue lock by, for example, editing a user in transaction su01. Run the
following commands as <sapsid>adm on the node where the ASCS instance is
running. The commands stop the ASCS instance and start it again. If using the
enqueue server 1 architecture, the enqueue lock is expected to be lost in this test.
If using the enqueue server 2 architecture, the enqueue lock will be retained.

Bash
nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2

The ASCS instance should now be disabled in Pacemaker

Bash

rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)

Start the ASCS instance again on the same node.

Bash

nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StartWait 600 2

The enqueue lock of transaction su01 should be lost and the back-end should have
been reset. Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

7. Kill message server process

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root to identify the process of the message server
and kill it.

Bash

nw1-cl-1:~ # pgrep -f ms.sapNW1 | xargs kill -9

If you only kill the message server once, it will be restarted by sapstart. If you kill it
often enough, Pacemaker will eventually move the ASCS instance to the other
node, in the case of ENSA1. Run the following commands as root to clean up the
resource state of the ASCS and ERS instance after the test.

Bash

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

8. Kill enqueue server process

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1

Run the following commands as root on the node where the ASCS instance is
running to kill the enqueue server.

Bash

nw1-cl-0:~ #
#If using ENSA1
pgrep -f en.sapNW1 | xargs kill -9
#If using ENSA2
pgrep -f enq.sapNW1 | xargs kill -9

The ASCS instance should immediately fail over to the other node, in the case of
ENSA1. The ERS instance should also fail over after the ASCS instance is started.
Run the following commands as root to clean up the resource state of the ASCS
and ERS instance after the test.

Bash

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00


nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

9. Kill enqueue replication server process

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following command as root on the node where the ERS instance is
running to kill the enqueue replication server process.

Bash
nw1-cl-0:~ # pgrep -f er.sapNW1 | xargs kill -9

If you only run the command once, sapstart will restart the process. If you run it
often enough, sapstart will not restart the process, and the resource will be in a
stopped state. Run the following commands as root to clean up the resource state
of the ERS instance after the test.

Bash

nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02

Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

10. Kill enqueue sapstartsrv process

Resource state before starting the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Run the following commands as root on the node where the ASCS is running.

Bash

nw1-cl-1:~ # pgrep -fl ASCS00.*sapstartsrv


# 59545 sapstartsrv

nw1-cl-1:~ # kill -9 59545

The sapstartsrv process should always be restarted by the Pacemaker resource
agent. Resource state after the test:

Bash

stonith-sbd (stonith:external/sbd): Started nw1-cl-1

Resource Group: g-NW1_ASCS
     fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
     nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
     vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
     rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
Resource Group: g-NW1_ERS
     fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
     nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
     vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
     rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0

Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for NFS on Azure VMs
on SUSE Linux Enterprise Server
Article • 01/18/2024

7 Note

We recommend deploying one of the Azure first-party NFS services: NFS on Azure
Files or NFS ANF volumes for storing shared data in a highly available SAP system.
Be aware that we're de-emphasizing SAP reference architectures that utilize NFS
clusters.

This article describes how to deploy the virtual machines, configure the virtual machines,
install the cluster framework, and install a highly available NFS server that can be used
to store the shared data of a highly available SAP system. This guide describes how to
set up a highly available NFS server that is used by two SAP systems, NW1 and NW2.
The names of the resources (for example virtual machines, virtual networks) in the
example assume that you have used the SAP file server template with resource prefix
prod.

7 Note

This article contains references to terms that Microsoft no longer uses. When the
terms are removed from the software, we'll remove them from this article.

Read the following SAP Notes and papers first

SAP Note 1928533 , which has:


List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure

SAP Note 2015553 lists prerequisites for SAP-supported SAP software


deployments in Azure.

SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications

SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server
for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.

SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.

SAP Note 2243692 has information about SAP licensing on Linux in Azure.

SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.

SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.

SAP Community WIKI has all required SAP Notes for Linux.

Azure Virtual Machines planning and implementation for SAP on Linux

Azure Virtual Machines deployment for SAP on Linux

Azure Virtual Machines DBMS deployment for SAP on Linux

SUSE Linux Enterprise High Availability Extension 12 SP3 best practices guides
Highly Available NFS Storage with DRBD and Pacemaker

SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides

SUSE High Availability Extension 12 SP3 Release Notes

Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is
configured in a separate cluster and can be used by multiple SAP systems.
The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP
system that uses this NFS server. On Azure, a load balancer is required to use a virtual IP
address. The presented configuration shows a load balancer with:

Frontend IP address 10.0.0.4 for NW1


Frontend IP address 10.0.0.5 for NW2
Probe port 61000 for NW1
Probe port 61001 for NW2

Set up a highly available NFS server

Deploy Linux manually via Azure portal


This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.

Deploy two virtual machines for the NFS servers. Choose a suitable SLES image that is
supported with your SAP system. You can deploy the VMs in any of the availability
options: scale set, availability zone, or availability set.
Configure Azure load balancer
Follow the create load balancer guide to configure a standard load balancer for NFS
server high availability. During the configuration of the load balancer, consider the
following points.

1. Frontend IP Configuration: Create two frontend IPs. Select the same virtual network
and subnet as your NFS servers.
2. Backend Pool: Create a backend pool and add the NFS server VMs.
3. Inbound rules: Create two load balancing rules, one for NW1 and another for NW2.
Follow the same steps for both load balancing rules.

Frontend IP address: Select frontend IP


Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health Probe: Create a health probe with the details below (applies to both NW1
and NW2)
Protocol: TCP
Port: [for example: 61000 for NW1, 61001 for NW2]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"

Note

The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to 2. It's
currently not possible to set this property using the Azure portal, so use either the
Azure CLI or a PowerShell command.
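
For example, with the Azure CLI the probe for NW1 could be created like this. The resource group and load balancer names are placeholders, and the --probe-threshold parameter requires a recent Azure CLI version:

Bash

# Placeholder resource group and load balancer names
az network lb probe create \
  --resource-group nfs-rg \
  --lb-name nfs-lb \
  --name nw1-hp \
  --protocol tcp \
  --port 61000 \
  --interval 5 \
  --probe-threshold 2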

Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.

Note

When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) Standard Azure load balancer, there's no outbound
internet connectivity unless additional configuration is performed to allow routing
to public endpoints. For details on how to achieve outbound connectivity, see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0. For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1, update saptune to version 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update?
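
For example, a minimal sketch to apply and persist the setting (the drop-in file name is arbitrary):

Bash

# Disable TCP timestamps at runtime
sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist across reboots; the drop-in file name is arbitrary
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/99-azure-lb.conf
sudo sysctl --system

# Verify
sysctl net.ipv4.tcp_timestamps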

Create Pacemaker cluster


Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to
create a basic Pacemaker cluster for this NFS server.

Configure NFS server


The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.

1. [A] Set up host name resolution

You can either use a DNS server or modify /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP addresses and the
hostnames in the following commands.

Bash

sudo vi /etc/hosts

Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to
match your environment.
Bash

# IP address of the load balancer frontend configuration for NFS

10.0.0.4 nw1-nfs
10.0.0.5 nw2-nfs

2. [A] Enable NFS server

Create the root NFS export entry

Bash

sudo sh -c 'echo /srv/nfs/ *\(rw,no_root_squash,fsid=0\)>/etc/exports'

sudo mkdir /srv/nfs/

3. [A] Install drbd components

Bash

sudo zypper install drbd drbd-kmp-default drbd-utils

4. [A] Create a partition for the drbd devices

List all available data disks

Bash

sudo ls /dev/disk/azure/scsi1/

# Example output
# lun0 lun1

Create partitions for every data disk

Bash

sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun0'
sudo sh -c 'echo -e "n\n\n\n\n\nw\n" | fdisk /dev/disk/azure/scsi1/lun1'

5. [A] Create LVM configurations

List all available partitions


Bash

ls /dev/disk/azure/scsi1/lun*-part*

# Example output
# /dev/disk/azure/scsi1/lun0-part1 /dev/disk/azure/scsi1/lun1-part1

Create LVM volumes for every partition

Bash

sudo pvcreate /dev/disk/azure/scsi1/lun0-part1
sudo vgcreate vg-NW1-NFS /dev/disk/azure/scsi1/lun0-part1
sudo lvcreate -l 100%FREE -n NW1 vg-NW1-NFS

sudo pvcreate /dev/disk/azure/scsi1/lun1-part1
sudo vgcreate vg-NW2-NFS /dev/disk/azure/scsi1/lun1-part1
sudo lvcreate -l 100%FREE -n NW2 vg-NW2-NFS

6. [A] Configure drbd

Bash

sudo vi /etc/drbd.conf

Make sure that the drbd.conf file contains the following two lines

text

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

Change the global drbd configuration

Bash

sudo vi /etc/drbd.d/global_common.conf

Add the following entries to the handlers and net sections.

text

global {
usage-count no;
}
common {
handlers {
fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
}
startup {
wfc-timeout 0;
}
options {
}
disk {
md-flushes yes;
disk-flushes yes;
c-plan-ahead 1;
c-min-rate 100M;
c-fill-target 20M;
c-max-rate 4G;
}
net {
after-sb-0pri discard-younger-primary;
after-sb-1pri discard-secondary;
after-sb-2pri call-pri-lost-after-sb;
protocol C;
tcp-cork yes;
max-buffers 20000;
max-epoch-size 20000;
sndbuf-size 0;
rcvbuf-size 0;
}
}

7. [A] Create the NFS drbd devices

Bash

sudo vi /etc/drbd.d/NW1-nfs.res

Insert the configuration for the new drbd device and exit

text

resource NW1-nfs {
protocol C;
disk {
on-io-error detach;
}
net {
fencing resource-and-stonith;
}
on prod-nfs-0 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
}

Bash

sudo vi /etc/drbd.d/NW2-nfs.res

Insert the configuration for the new drbd device and exit

text

resource NW2-nfs {
protocol C;
disk {
on-io-error detach;
}
net {
fencing resource-and-stonith;
}
on prod-nfs-0 {
address 10.0.0.6:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
}

Create the drbd device and start it

Bash

sudo drbdadm create-md NW1-nfs
sudo drbdadm create-md NW2-nfs
sudo drbdadm up NW1-nfs
sudo drbdadm up NW2-nfs

8. [1] Skip initial synchronization

Bash

sudo drbdadm new-current-uuid --clear-bitmap NW1-nfs
sudo drbdadm new-current-uuid --clear-bitmap NW2-nfs

9. [1] Set the primary node

Bash

sudo drbdadm primary --force NW1-nfs
sudo drbdadm primary --force NW2-nfs

10. [1] Wait until the new drbd devices are synchronized

Bash

sudo drbdsetup wait-sync-resource NW1-nfs
sudo drbdsetup wait-sync-resource NW2-nfs

11. [1] Create file systems on the drbd devices

Bash

sudo mkfs.xfs /dev/drbd0
sudo mkdir /srv/nfs/NW1
sudo chattr +i /srv/nfs/NW1
sudo mount -t xfs /dev/drbd0 /srv/nfs/NW1
sudo mkdir /srv/nfs/NW1/sidsys
sudo mkdir /srv/nfs/NW1/sapmntsid
sudo mkdir /srv/nfs/NW1/trans
sudo mkdir /srv/nfs/NW1/ASCS
sudo mkdir /srv/nfs/NW1/ASCSERS
sudo mkdir /srv/nfs/NW1/SCS
sudo mkdir /srv/nfs/NW1/SCSERS
sudo umount /srv/nfs/NW1

sudo mkfs.xfs /dev/drbd1
sudo mkdir /srv/nfs/NW2
sudo chattr +i /srv/nfs/NW2
sudo mount -t xfs /dev/drbd1 /srv/nfs/NW2
sudo mkdir /srv/nfs/NW2/sidsys
sudo mkdir /srv/nfs/NW2/sapmntsid
sudo mkdir /srv/nfs/NW2/trans
sudo mkdir /srv/nfs/NW2/ASCS
sudo mkdir /srv/nfs/NW2/ASCSERS
sudo mkdir /srv/nfs/NW2/SCS
sudo mkdir /srv/nfs/NW2/SCSERS
sudo umount /srv/nfs/NW2

12. [A] Set up drbd split-brain detection

When using drbd to synchronize data from one host to another, a so-called split
brain can occur. A split brain is a scenario where both cluster nodes promote the
drbd device to primary and go out of sync. It's a rare situation, but you still want
to handle and resolve a split brain as fast as possible, so it's important to be
notified when a split brain happens.

Read the official drbd documentation on how to set up a split-brain notification.

It's also possible to automatically recover from a split-brain scenario. For more
information, read Automatic split brain recovery policies.
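
As a quick check, you can inspect the replication state of a resource; a minimal sketch (drbd 9 syntax):

Bash

# In a healthy state, the peer shows as Connected/UpToDate
sudo drbdadm status NW1-nfs

# After a split brain, nodes typically report StandAlone or Connecting,
# and the kernel log contains a split-brain message
sudo dmesg | grep -i "split-brain"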

Configure Cluster Framework

1. [1] Add the NFS drbd devices for SAP system NW1 to the cluster configuration

Important

Recent testing revealed situations where netcat stops responding to requests
due to backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load Balancer requests and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat
with socat. Currently, we recommend using the azure-lb resource agent, which is
part of the package resource-agents, with the following package version
requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Note that the change requires brief downtime.

For existing Pacemaker clusters, if the configuration was already changed to
use socat as described in Azure Load-Balancer Detection Hardening, there's
no requirement to switch immediately to the azure-lb resource agent.
Bash

sudo crm configure rsc_defaults resource-stickiness="200"

# Enable maintenance mode
sudo crm configure property maintenance-mode=true

sudo crm configure primitive drbd_NW1_nfs \
  ocf:linbit:drbd \
  params drbd_resource="NW1-nfs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

sudo crm configure ms ms-drbd_NW1_nfs drbd_NW1_nfs \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true" interleave="true"

sudo crm configure primitive fs_NW1_sapmnt \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd0 \
  directory=/srv/nfs/NW1 \
  fstype=xfs \
  op monitor interval="10s"

sudo crm configure primitive nfsserver systemd:nfs-server \
  op monitor interval="30s"
sudo crm configure clone cl-nfsserver nfsserver

sudo crm configure primitive exportfs_NW1 \
  ocf:heartbeat:exportfs \
  params directory="/srv/nfs/NW1" \
  options="rw,no_root_squash,crossmnt" clientspec="*" fsid=1 \
  wait_for_leasetime_on_stop=true op monitor interval="30s"

sudo crm configure primitive vip_NW1_nfs IPaddr2 \
  params ip=10.0.0.4 op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW1_nfs azure-lb port=61000 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW1_nfs \
  fs_NW1_sapmnt exportfs_NW1 nc_NW1_nfs vip_NW1_nfs

sudo crm configure order o-NW1_drbd_before_nfs inf: \
  ms-drbd_NW1_nfs:promote g-NW1_nfs:start

sudo crm configure colocation col-NW1_nfs_on_drbd inf: \
  g-NW1_nfs ms-drbd_NW1_nfs:Master

2. [1] Add the NFS drbd devices for SAP system NW2 to the cluster configuration

Bash

# Enable maintenance mode
sudo crm configure property maintenance-mode=true

sudo crm configure primitive drbd_NW2_nfs \
  ocf:linbit:drbd \
  params drbd_resource="NW2-nfs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"

sudo crm configure ms ms-drbd_NW2_nfs drbd_NW2_nfs \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true" interleave="true"

sudo crm configure primitive fs_NW2_sapmnt \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd1 \
  directory=/srv/nfs/NW2 \
  fstype=xfs \
  op monitor interval="10s"

sudo crm configure primitive exportfs_NW2 \
  ocf:heartbeat:exportfs \
  params directory="/srv/nfs/NW2" \
  options="rw,no_root_squash,crossmnt" clientspec="*" fsid=2 \
  wait_for_leasetime_on_stop=true op monitor interval="30s"

sudo crm configure primitive vip_NW2_nfs IPaddr2 \
  params ip=10.0.0.5 op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_nfs azure-lb port=61001 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW2_nfs \
  fs_NW2_sapmnt exportfs_NW2 nc_NW2_nfs vip_NW2_nfs

sudo crm configure order o-NW2_drbd_before_nfs inf: \
  ms-drbd_NW2_nfs:promote g-NW2_nfs:start

sudo crm configure colocation col-NW2_nfs_on_drbd inf: \
  g-NW2_nfs ms-drbd_NW2_nfs:Master

The crossmnt option in the exportfs cluster resources is present in our
documentation for backward compatibility with older SLES versions.

3. [1] Disable maintenance mode

Bash

sudo crm configure property maintenance-mode=false
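
After maintenance mode is disabled, verify that the cluster started all resources, for example:

Bash

# One-shot view of all resources (including inactive ones) and their nodes
sudo crm_mon -r -1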


Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server for SAP applications multi-SID
guide
Article • 01/18/2024

This article describes how to deploy multiple SAP NetWeaver or S/4HANA highly
available systems (that is, multi-SID) in a two-node cluster on Azure VMs with SUSE Linux
Enterprise Server for SAP applications.

In the example configurations, installation commands, and so on, three SAP NetWeaver 7.50
systems are deployed in a single, two-node high availability cluster. The SAP system
SIDs are:

NW1: ASCS instance number 00 and virtual host name msnw1ascs; ERS instance
number 02 and virtual host name msnw1ers.
NW2: ASCS instance number 10 and virtual hostname msnw2ascs; ERS instance
number 12 and virtual host name msnw2ers.
NW3: ASCS instance number 20 and virtual hostname msnw3ascs; ERS instance
number 22 and virtual host name msnw3ers.

The article doesn't cover the database layer and the deployment of the SAP NFS shares.
In the examples in this article, we're using the virtual names nw2-nfs for the NW2 NFS
shares and nw3-nfs for the NW3 NFS shares, assuming that the NFS cluster was deployed.

Before you begin, refer to the following SAP Notes and papers first:

SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server
for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA Best Practice Guides - The guides contain all required information
to set up Netweaver HA and SAP HANA System Replication on-premises. Use these
guides as a general baseline. They provide much more detailed information.
SUSE High Availability Extension 12 SP3 Release Notes
SUSE multi-SID cluster guide for SLES 12 and SLES 15
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
The virtual machines that participate in the cluster must be sized to be able to run all
resources in case a failover occurs. Each SAP SID can fail over independently of the
others in the multi-SID high availability cluster. If you use SBD fencing, the SBD devices
can be shared between multiple clusters.

To achieve high availability, SAP NetWeaver requires highly available NFS shares. In this
example, we assume the SAP NFS shares are either hosted on a highly available NFS file
server, which can be used by multiple SAP systems, or deployed on Azure
NetApp Files NFS volumes.
Important

The support for multi-SID clustering of SAP ASCS/ERS with SUSE Linux as guest
operating system in Azure VMs is limited to five SAP SIDs on the same cluster. Each
new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and
Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID
clustering describes the installation of multiple SAP ASCS/ERS instances with
different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only
supported for ASCS/ERS.

 Tip

The multi-SID clustering of SAP ASCS/ERS is a solution with higher complexity. It's
more complex to implement and involves higher administrative effort when
executing maintenance activities (like OS patching). Before you start the actual
implementation, take time to carefully plan out the deployment and all involved
components like VMs, NFS mounts, VIPs, load balancer configurations, and so on.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and
the SAP HANA database use virtual hostnames and virtual IP addresses. On Azure, a load
balancer is required to use a virtual IP address. We recommend using a Standard load
balancer.

The presented configuration for this multi-SID cluster example with three SAP systems
shows a load balancer with:

Frontend IP addresses for ASCS: 10.3.1.14 (NW1), 10.3.1.16 (NW2) and 10.3.1.13
(NW3)
Frontend IP addresses for ERS: 10.3.1.15 (NW1), 10.3.1.17 (NW2) and 10.3.1.19
(NW3)
Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS and 62020 for NW3 ASCS
Probe port 62102 for NW1 ERS, 62112 for NW2 ERS and 62122 for NW3 ERS

Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.

Note

When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) Standard Azure load balancer, there's no outbound
internet connectivity unless additional configuration is performed to allow routing
to public endpoints. For details on how to achieve outbound connectivity, see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.

Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0. For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1, update saptune to version 3.1.1 or higher.
For more information, see saptune 3.1.1 – Do I Need to Update?
SAP NFS shares
SAP NetWeaver requires shared storage for the transport directory, profile directory, and so on.
For a highly available SAP system, it's important to have highly available NFS shares. You
need to decide on the architecture for your SAP NFS shares. One option is to
build a highly available NFS cluster on Azure VMs on SUSE Linux Enterprise Server, which
can be shared between multiple SAP systems.

Another option is to deploy the shares on Azure NetApp Files NFS volumes. With Azure
NetApp Files, you would get built-in high availability for the SAP NFS shares.
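
Whichever option you choose, it can be useful to verify from a cluster node that the NFS exports are reachable before you install. A minimal sketch using the virtual NFS hostnames from this article (note that showmount relies on the MOUNT protocol and may return nothing on NFSv4-only servers):

Bash

# List the exports of the NFS server for NW2 (requires NFS client tools)
showmount -e nw2-nfs

# Optionally test-mount an export on a temporary mount point
sudo mount -t nfs4 nw2-nfs:/NW2 /mnt
sudo umount /mnt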

Deploy the first SAP system in the cluster


Based on the architecture for the SAP NFS shares, deploy the first SAP system in the
cluster, following the corresponding documentation.

If using highly available NFS server, follow High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files
for SAP applications

The documents listed above guide you through the steps to prepare the
necessary infrastructure, build the cluster, and prepare the OS for running the SAP
application.

 Tip

Always test the failover functionality of the cluster after the first system is
deployed, before adding the additional SAP SIDs to the cluster. That way you know
that the cluster functionality works before adding the complexity of
additional SAP systems to the cluster.

Deploy additional SAP systems in the cluster


In this example, we assume that system NW1 was already deployed in the cluster. We
show how to deploy SAP systems NW2 and NW3 in the cluster.

The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.
Prerequisites

Important

Before following the instructions to deploy additional SAP systems in the cluster,
follow the instructions to deploy the first SAP system in the cluster, as there are
steps which are only necessary during the first system deployment.

This documentation assumes that:

The Pacemaker cluster is already configured and running.
At least one SAP system (ASCS / ERS instance) is already deployed and is running
in the cluster.
The cluster failover functionality is tested.
The NFS shares for all SAP systems are deployed.

Prepare for SAP NetWeaver Installation


1. Add configuration for the newly deployed systems (that is, NW2 and NW3) to the
existing Azure Load Balancer, following the instructions in Configure Azure Load
Balancer manually via Azure portal. Adjust the IP addresses, health probe ports, and
load-balancing rules for your configuration.

2. [A] Set up name resolution for the additional SAP systems. You can either use DNS
server or modify /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Adapt the IP addresses and the host names to your environment.

Bash

sudo vi /etc/hosts

# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.16 msnw2ascs
10.3.1.16 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.13 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.17 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.19 msnw3ers
# IP address for virtual host name for the NFS server for NW2
10.3.1.31 nw2-nfs
# IP address for virtual host name for the NFS server for NW3
10.3.1.32 nw3-nfs
3. [A] Create the shared directories for the additional NW2 and NW3 SAP systems
that you're deploying to the cluster.

Bash

sudo mkdir -p /sapmnt/NW2
sudo mkdir -p /usr/sap/NW2/SYS
sudo mkdir -p /usr/sap/NW2/ASCS10
sudo mkdir -p /usr/sap/NW2/ERS12
sudo mkdir -p /sapmnt/NW3
sudo mkdir -p /usr/sap/NW3/SYS
sudo mkdir -p /usr/sap/NW3/ASCS20
sudo mkdir -p /usr/sap/NW3/ERS22

sudo chattr +i /sapmnt/NW2
sudo chattr +i /usr/sap/NW2/SYS
sudo chattr +i /usr/sap/NW2/ASCS10
sudo chattr +i /usr/sap/NW2/ERS12
sudo chattr +i /sapmnt/NW3
sudo chattr +i /usr/sap/NW3/SYS
sudo chattr +i /usr/sap/NW3/ASCS20
sudo chattr +i /usr/sap/NW3/ERS22

4. [A] Configure autofs to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems
for the additional SAP systems that you're deploying to the cluster. In this example
NW2 and NW3.

Update file /etc/auto.direct with the file systems for the additional SAP systems
that you're deploying to the cluster.

If using NFS file server, follow the instructions on the Azure VMs high
availability for SAP NetWeaver on SLES page
If using Azure NetApp Files, follow the instructions on the Azure VMs high
availability for SAP NW on SLES with Azure NetApp Files page

You need to restart the autofs service to mount the newly added shares.
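
For example:

Bash

sudo systemctl restart autofs

# Accessing the paths triggers the automount; verify with df
df -h /sapmnt/NW2 /usr/sap/NW2/SYS /sapmnt/NW3 /usr/sap/NW3/SYS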

Install ASCS / ERS


1. Create the virtual IP and health probe cluster resources for the ASCS instance of
the additional SAP system you're deploying to the cluster. The example shown
here is for NW2 and NW3 ASCS, using highly available NFS server.

Important

Recent testing revealed situations where netcat stops responding to requests
due to backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load Balancer requests and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat
with socat. Currently, we recommend using the azure-lb resource agent, which is
part of the package resource-agents, with the following package version
requirements:

For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.

Note that the change requires brief downtime.

For existing Pacemaker clusters, if the configuration was already changed to
use socat as described in Azure Load-Balancer Detection Hardening, there's
no requirement to switch immediately to the azure-lb resource agent.

Bash

sudo crm configure primitive fs_NW2_ASCS Filesystem device='nw2-nfs:/NW2/ASCS' \
  directory='/usr/sap/NW2/ASCS10' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW2_ASCS IPaddr2 \
  params ip=10.3.1.16 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW2_ASCS fs_NW2_ASCS nc_NW2_ASCS vip_NW2_ASCS \
  meta resource-stickiness=3000

sudo crm configure primitive fs_NW3_ASCS Filesystem device='nw3-nfs:/NW3/ASCS' \
  directory='/usr/sap/NW3/ASCS20' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW3_ASCS IPaddr2 \
  params ip=10.3.1.13 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW3_ASCS azure-lb port=62020 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW3_ASCS fs_NW3_ASCS nc_NW3_ASCS vip_NW3_ASCS \
  meta resource-stickiness=3000

As you create the resources, they might be assigned to different cluster nodes.
When you group them, they migrate to one of the cluster nodes. Make sure the
cluster status is OK and that all resources are started. It isn't important on which
node the resources are running.
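
For example, a quick one-shot check:

Bash

# Verify that the new ASCS groups are started; node placement doesn't matter yet
sudo crm status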

2. [1] Install SAP NetWeaver ASCS

Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS. For example, for
system NW2, the virtual hostname is msnw2ascs with IP address 10.3.1.16, and the
instance number used for the probe of the load balancer is 10. For
system NW3, the virtual hostname is msnw3ascs with IP address 10.3.1.13, and the
instance number used for the probe of the load balancer is 20.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual host name.

Bash

sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

If the installation fails to create a subfolder in /usr/sap/SID/ASCSInstance#, try
setting the owner of the ASCSInstance# folder to sidadm and the group to sapsys, and retry.

3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of
the additional SAP system you're deploying to the cluster. The example shown
here is for NW2 and NW3 ERS, using highly available NFS server.

Bash

sudo crm configure primitive fs_NW2_ERS Filesystem device='nw2-nfs:/NW2/ASCSERS' \
  directory='/usr/sap/NW2/ERS12' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW2_ERS IPaddr2 \
  params ip=10.3.1.17 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW2_ERS azure-lb port=62112 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW2_ERS fs_NW2_ERS nc_NW2_ERS vip_NW2_ERS

sudo crm configure primitive fs_NW3_ERS Filesystem device='nw3-nfs:/NW3/ASCSERS' \
  directory='/usr/sap/NW3/ERS22' fstype='nfs4' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

sudo crm configure primitive vip_NW3_ERS IPaddr2 \
  params ip=10.3.1.19 \
  op monitor interval=10 timeout=20

sudo crm configure primitive nc_NW3_ERS azure-lb port=62122 \
  op monitor timeout=20s interval=10

sudo crm configure group g-NW3_ERS fs_NW3_ERS nc_NW3_ERS vip_NW3_ERS

As you create the resources, they might be assigned to different cluster nodes.
When you group them, they migrate to one of the cluster nodes. Make sure the
cluster status is OK and that all resources are started.

Next, make sure that the resources of the newly created ERS group are running on
the cluster node opposite to the node where the ASCS instance for the
same SAP system was installed. For example, if NW2 ASCS was installed on
slesmsscl1, make sure the NW2 ERS group is running on slesmsscl2. You
can migrate the NW2 ERS group to slesmsscl2 by running the following
command:

Bash

crm resource migrate g-NW2_ERS slesmsscl2 force

4. [2] Install SAP NetWeaver ERS

Install SAP NetWeaver ERS as root on the other node, using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS. For example, for system NW2, the virtual host name is msnw2ers with IP address
10.3.1.17, and the instance number used for the probe of the load balancer is 12.
For system NW3, the virtual host name is msnw3ers with IP address 10.3.1.19, and the
instance number used for the probe of the load balancer is 22.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use the parameter
SAPINST_USE_HOSTNAME to install SAP using the virtual host name.

Bash

sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname

Note

Use SWPM SP 20 PL 05 or higher. Lower versions don't set the permissions
correctly and the installation fails.

If the installation fails to create a subfolder in /usr/sap/NW2/ERSInstance#, try
setting the owner of the ERSInstance# folder to sidadm and the group to sapsys,
and retry.

If it was necessary for you to migrate the ERS group of the newly deployed SAP
system to a different cluster node, don't forget to remove the location constraint
for the ERS group. You can remove the constraint by running the following
command (the example is given for SAP systems NW2 and NW3).

Bash

crm resource unmigrate g-NW2_ERS
crm resource unmigrate g-NW3_ERS

5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP
system(s). The example shown below is for NW2. You'll need to adapt the
ASCS/SCS and ERS profiles for all SAP instances added to the cluster.

ASCS/SCS profile

Bash

sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs

# Change the restart command to a start command


#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
# Add the keep alive parameter, if using ENSA1
enque/encni/set_so_keepalive = true

For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set
as described in SAP Note 1410736.
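
For illustration only, the keepalive parameters are set with sysctl as shown below. The values here are placeholders; take the recommended values from SAP Note 1410736.

Bash

# Placeholder values; consult SAP Note 1410736 for the recommended settings
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=75
sudo sysctl -w net.ipv4.tcp_keepalive_probes=9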

ERS profile

Bash

sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers

# Change the restart command to a start command


#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add the following lines


service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# remove Autostart from ERS profile


# Autostart = 1

6. [A] Configure the SAP users for the newly deployed SAP system, in this example
NW2 and NW3.

Bash

# Add sidadm to the haclient group
sudo usermod -aG haclient nw2adm
sudo usermod -aG haclient nw3adm

7. Add the ASCS and ERS SAP services for the newly installed SAP systems to the
/usr/sap/sapservices file. The example shown below is for SAP systems NW2 and NW3.

Add the ASCS service entry to the second node and copy the ERS service entry to
the first node. Execute the commands for each SAP system on the node, where the
ASCS instance for the SAP system was installed.

Bash

# Execute the following commands on slesmsscl1, assuming the NW2 ASCS instance was installed on slesmsscl1
cat /usr/sap/sapservices | grep ASCS10 | sudo ssh slesmsscl2 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl2 "cat /usr/sap/sapservices" | grep ERS12 | sudo tee -a /usr/sap/sapservices

# Execute the following commands on slesmsscl2, assuming the NW3 ASCS instance was installed on slesmsscl2
cat /usr/sap/sapservices | grep ASCS20 | sudo ssh slesmsscl1 "cat >>/usr/sap/sapservices"
sudo ssh slesmsscl1 "cat /usr/sap/sapservices" | grep ERS22 | sudo tee -a /usr/sap/sapservices
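
To confirm the copy worked, check that both nodes now list the ASCS and ERS entries for every SID, for example:

Bash

# Run on both nodes; each SID's ASCS and ERS entries should be present
grep -E "ASCS10|ERS12|ASCS20|ERS22" /usr/sap/sapservices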

8. [1] Create the SAP cluster resources for the newly installed SAP system.

Depending on whether you're running an ENSA1 or ENSA2 system, select the
respective tab to define the resources for the NW2 and NW3 systems. SAP introduced
support for ENSA2, including replication, in SAP NetWeaver 7.52. Starting with
ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP
Note 2630416.

ENSA1

Bash

sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sap_NW2_ASCS10 SAPInstance \
  operations \$id=rsc_sap_NW2_ASCS10-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW2_ERS12 SAPInstance \
  operations \$id=rsc_sap_NW2_ERS12-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-NW2_ASCS add rsc_sap_NW2_ASCS10
sudo crm configure modgroup g-NW2_ERS add rsc_sap_NW2_ERS12

sudo crm configure colocation col_sap_NW2_no_both -5000: g-NW2_ERS g-NW2_ASCS
sudo crm configure location loc_sap_NW2_failover_to_ers rsc_sap_NW2_ASCS10 rule 2000: runs_ers_NW2 eq 1
sudo crm configure order ord_sap_NW2_first_start_ascs Optional: rsc_sap_NW2_ASCS10:start rsc_sap_NW2_ERS12:stop symmetrical=false

sudo crm configure primitive rsc_sap_NW3_ASCS20 SAPInstance \
  operations \$id=rsc_sap_NW3_ASCS20-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10

sudo crm configure primitive rsc_sap_NW3_ERS22 SAPInstance \
  operations \$id=rsc_sap_NW3_ERS22-operations \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

sudo crm configure modgroup g-NW3_ASCS add rsc_sap_NW3_ASCS20
sudo crm configure modgroup g-NW3_ERS add rsc_sap_NW3_ERS22

sudo crm configure colocation col_sap_NW3_no_both -5000: g-NW3_ERS g-NW3_ASCS
sudo crm configure location loc_sap_NW3_failover_to_ers rsc_sap_NW3_ASCS20 rule 2000: runs_ers_NW3 eq 1
sudo crm configure order ord_sap_NW3_first_start_ascs Optional: rsc_sap_NW3_ASCS20:start rsc_sap_NW3_ERS22:stop symmetrical=false

sudo crm configure property maintenance-mode="false"

If you're upgrading from an older version and switching to Enqueue Server 2, see SAP
Note 2641019.

Make sure that the cluster status is OK and that all resources are started. It isn't
important on which node the resources are running.

The following example shows the cluster resource status after SAP systems NW2 and
NW3 were added to the cluster.
Bash

sudo crm_mon -r

# Online: [ slesmsscl1 slesmsscl2 ]

# Full list of resources:

# stonith-sbd (stonith:external/sbd): Started slesmsscl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW2_ASCS
# fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW2_ERS
# fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
# Resource Group: g-NW3_ASCS
# fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
# nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
# vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
# rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
# Resource Group: g-NW3_ERS
# fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
# nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
# vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
# rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl2

The following picture shows how the resources look in the HA Web
Konsole (Hawk), with the resources for SAP system NW2 expanded.

Proceed with the SAP installation


Complete your SAP installation by:

Preparing your SAP NetWeaver application servers
Installing a DBMS instance
Installing a primary SAP application server
Installing one or more additional SAP application instances

Test the multi-SID cluster setup


The following tests are a subset of the test cases in the best practices guides of SUSE.
They're included for your convenience. For the full list of cluster tests, reference the
following documentation:

If using highly available NFS server, follow High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files
for SAP applications

Always read the SUSE best practices guides and perform all additional tests that might
have been added.
The tests that are presented are in a two-node, multi-SID cluster with three SAP
systems installed.

1. Test HAGetFailoverConfig and HACheckFailoverConfig

Run the following commands as <sapsid>adm on the node where the ASCS
instance is currently running. If the commands fail with FAIL: Insufficient memory, it
might be caused by dashes in your hostname. This is a known issue and will be
fixed by SUSE in the sap-suse-cluster-connector package.

Bash

slesmsscl1:nw1adm 57> sapcontrol -nr 00 -function HAGetFailoverConfig

# 10.12.2019 21:33:08
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications
12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP
Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-
library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2

slesmsscl1:nw1adm 53> sapcontrol -nr 00 -function HACheckFailoverConfig
# 19.12.2019 21:19:58
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version,
SAPInstance includes is-ers patch

slesmsscl2:nw2adm 35> sapcontrol -nr 10 -function HAGetFailoverConfig

# 10.12.2019 21:37:09
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications
12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP
Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-
library/sap-best-practices/
# HAActiveNode: slesmsscl2
# HANodes: slesmsscl2, slesmsscl1

slesmsscl2:nw2adm 52> sapcontrol -nr 10 -function HACheckFailoverConfig

# 19.12.2019 21:17:39
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version,
SAPInstance includes is-ers patch

slesmsscl1:nw3adm 49> sapcontrol -nr 20 -function HAGetFailoverConfig

# 10.12.2019 23:35:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications
12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP
Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-
library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2

slesmsscl1:nw3adm 52> sapcontrol -nr 20 -function HACheckFailoverConfig

# 19.12.2019 21:10:42
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version,
SAPInstance includes is-ers patch

2. Manually migrate the ASCS instance. The example shows migrating the ASCS
instance for SAP system NW2.

Resource state, before starting the test:

text

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1

Run the following commands as root to migrate the NW2 ASCS instance.

Bash

crm resource migrate rsc_sap_NW2_ASCS10 force
# INFO: Move constraint created for rsc_sap_NW2_ASCS10

crm resource unmigrate rsc_sap_NW2_ASCS10
# INFO: Removed migration constraints for rsc_sap_NW2_ASCS10

# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12

Resource state after the test:

text

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1

3. Test HAFailoverToNode. The test presented here shows migrating the ASCS
instance for SAP system NW2.

Resource state before starting the test:

text

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1

Run the following commands as nw2adm to migrate the NW2 ASCS instance.

Bash

slesmsscl2:nw2adm 53> sapcontrol -nr 10 -host msnw2ascs -user nw2adm password -function HAFailoverToNode ""

# run as root
# Remove failed actions for the ERS that occurred as part of the migration
crm resource cleanup rsc_sap_NW2_ERS12
# Remove migration constraints
crm resource clear rsc_sap_NW2_ASCS10
# INFO: Removed migration constraints for rsc_sap_NW2_ASCS10

Resource state after the test:

text

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1

4. Simulate node crash

Resource state before starting the test:

text
Full list of resources:
stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl2
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl2
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Run the following command as root on the node where at least one ASCS instance
is running. In this example, we executed the command on slesmsscl2, where the
ASCS instances for NW1 and NW3 are running.

Bash

slesmsscl2:~ # echo b > /proc/sysrq-trigger

If you use SBD, Pacemaker shouldn't automatically start on the killed node. The
status after the node is started again should look like this.

text

Online: [ slesmsscl1 ]
OFFLINE: [ slesmsscl2 ]
Full list of resources:

stonith-sbd (stonith:external/sbd): Started slesmsscl1


Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl1

Failed Resource Actions:


* rsc_sap_NW1_ERS02_monitor_11000 on slesmsscl1 'not running' (7):
call=125, status=complete, exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms
* rsc_sap_NW2_ERS12_monitor_11000 on slesmsscl1 'not running' (7):
call=126, status=complete, exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms
* rsc_sap_NW3_ERS22_monitor_11000 on slesmsscl1 'not running' (7):
call=127, status=complete, exitreason='',
last-rc-change='Fri Dec 13 19:32:10 2019', queued=0ms, exec=0ms

Use the following commands to start Pacemaker on the killed node, clean the SBD
messages, and clean the failed resources.

Bash

# run as root
# list the SBD device(s)
cat /etc/sysconfig/sbd | grep SBD_DEVICE=

# output is like:
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"

sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message slesmsscl2 clear

systemctl start pacemaker

crm resource cleanup rsc_sap_NW1_ERS02
crm resource cleanup rsc_sap_NW2_ERS12
crm resource cleanup rsc_sap_NW3_ERS22

Resource state after the test:


text

Full list of resources:


stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW1_ERS
fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW2_ASCS
fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW2_ERS
fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Resource Group: g-NW3_ASCS
fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started
slesmsscl1
nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started
slesmsscl1
vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started
slesmsscl1
rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started
slesmsscl1
Resource Group: g-NW3_ERS
fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started
slesmsscl2
rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started
slesmsscl2
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
Cluster an SAP ASCS/SCS instance on a
Windows failover cluster by using a
shared disk in Azure
Article • 10/12/2023

Windows

Windows Server Failover Clustering (WSFC) is the foundation of a high-availability (HA)
SAP ASCS/SCS installation and database management systems (DBMSs) in Windows.

A failover cluster is a group of 1+n independent servers (nodes) that work together to
increase the availability of applications and services. If a node failure occurs, WSFC
calculates the number of failures that can occur and still maintain a healthy cluster to
provide applications and services. You can choose from various quorum modes to
achieve failover clustering.

Prerequisites
Before you begin the tasks in this article, review the article High-availability architecture
and scenarios for SAP NetWeaver.

Windows Server Failover Clustering in Azure


WSFC with Azure virtual machines (VMs) requires additional configuration steps. When
you build a cluster, you need to set several IP addresses and virtual host names for the
SAP ASCS/SCS instance.

Name resolution in Azure and the cluster virtual host name
The Azure cloud platform doesn't offer the option to configure virtual IP addresses, such
as floating IP addresses. You need an alternative solution to set up a virtual IP address to
reach the cluster resource in the cloud.

The Azure Load Balancer service provides an internal load balancer for Azure. With the
internal load balancer, clients reach the cluster over the cluster's virtual IP address.
Deploy the internal load balancer in the resource group that contains the cluster nodes.
Then, configure all necessary port-forwarding rules by using the probe ports of the
internal load balancer. Clients can connect via the virtual host name. The DNS server
resolves the cluster IP address, and the internal load balancer handles port forwarding
to the active node of the cluster.

) Important

Floating IP addresses are not supported on a secondary IP configuration for a network adapter (NIC) in load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.

SAP ASCS/SCS HA with cluster shared disks


In Windows, an SAP ASCS/SCS instance contains SAP central services, the SAP message
server, enqueue server processes, and SAP global host files. SAP global host files store
central files for the entire SAP system.

An SAP ASCS/SCS instance has the following components:


SAP central services:
Two processes (for a message server and an enqueue server) and an ASCS/SCS
virtual host name that's used to access the two processes
File structure: S:\usr\sap\<SID>\ASCS/SCS<instance number>

SAP global host files:

File structure: S:\usr\sap\<SID>\SYS...

The sapmnt file share, which enables access to these global S:\usr\sap\
<SID>\SYS... files by using the following UNC path:

\\<ASCS/SCS virtual host name>\sapmnt\<SID>\SYS...

In a high-availability setting, you cluster SAP ASCS/SCS instances. You use cluster shared
disks (drive S in this article's example) to place the SAP ASCS/SCS and SAP global host
files.
With an Enqueue Replication Server 1 (ERS1) architecture:

The same ASCS/SCS virtual host name is used to access the SAP message server
and enqueue server processes, in addition to the SAP global host files via the
sapmnt file share.
The same cluster shared disk (drive S) is shared between them.

With Enqueue Replication Server 2 (ERS2) architecture:

The same ASCS/SCS virtual host name is used to access the SAP message server
process, in addition to the SAP global host files via the sapmnt file share.
The same cluster shared disk (drive S) is shared between them.
There's a separate ERS virtual host name to access the enqueue server process.

Shared disks and Enqueue Replication Server


Shared disks are supported with an ERS1 architecture, where the ERS1 instance:

Is not clustered.
Uses a localhost name.
Is deployed on local disks on each of the cluster nodes.

Shared disks are also supported with an ERS2 architecture, where the ERS2 instance:

Is clustered.
Uses a dedicated virtual or network host name.
Needs the IP address of the ERS virtual host name to be configured on an Azure internal load balancer, in addition to the (A)SCS IP address.
Is deployed on local disks on each of the clustered nodes, so there's no need for a
shared disk.

For more information about ERS1 and ERS2, see Enqueue Replication Server in a
Microsoft Failover Cluster and New Enqueue Replicator in Failover Cluster
environments on the SAP website.

Options for shared disks in Azure for SAP workloads


There are two options for shared disks in a Windows failover cluster in Azure:

Use Azure shared disks to attach Azure managed disks to multiple VMs
simultaneously.
Use SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates
cluster shared storage.

When you're selecting the technology for shared disks, keep in mind the following
considerations about Azure shared disks for SAP workloads:

Use of Azure shared disks with Azure Premium SSD disks is supported for SAP
deployment in availability sets and availability zones.
Azure Ultra Disk Storage disks and Azure Standard SSD disks are not supported as
Azure shared disks for SAP workloads.
Be sure to provision Azure Premium SSD disks with a minimum disk size, as
specified in Premium SSD ranges, to be able to attach to the required number of
VMs simultaneously. You typically need two VMs for SAP ASCS Windows failover
clusters.
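
As a quick check, here's a minimal PowerShell sketch for reading how many VMs an existing shared disk accepts; it assumes the Az.Compute module, and the resource group and disk names are placeholders:

PowerShell

# Minimal sketch: read the maxShares setting of an existing managed disk.
(Get-AzDisk -ResourceGroupName "MyResourceGroup" -DiskName "PR1ASCSSharedDisk").MaxShares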

Keep in mind the following considerations about SIOS:

The SIOS solution provides real-time synchronous data replication between two
disks.
With the SIOS solution, you operate with two managed disks. If you're using either
availability sets or availability zones, the managed disks are on different storage
clusters.
Deployment in availability zones is supported.
The SIOS solution requires installing and operating third-party software, which you
need to purchase separately.

Azure shared disks


You can implement SAP ASCS/SCS HA with Azure shared disks.
Prerequisites and limitations
Currently, you can use Azure Premium SSD disks as Azure shared disks for the SAP
ASCS/SCS instance. The following limitations are currently in place:

Azure Ultra Disk Storage disks and Standard SSD disks are not supported as Azure
shared disks for SAP workloads.
Azure Shared disks with Premium SSD disks are supported for SAP deployment in
availability sets and availability zones.
Azure shared disks with Premium SSD disks come with two storage options:
Locally redundant storage (LRS) for Premium SSD shared disks ( skuName value of
Premium_LRS ) is supported with deployment in availability sets.

Zone-redundant storage (ZRS) for Premium SSD shared disks ( skuName value of
Premium_ZRS ) is supported with deployment in availability zones.

The Azure shared disk value maxShares determines how many cluster nodes can
use the shared disk. For an SAP ASCS/SCS instance, you typically configure two
nodes in WSFC. You then set the value for maxShares to 2 .
An Azure proximity placement group (PPG) is not required for Azure shared disks.
But for SAP deployment with PPGs, follow these guidelines:
If you're using PPGs for an SAP system deployed in a region, all virtual machines
that share a disk must be part of the same PPG.
If you're using PPGs for an SAP system deployed across zones, as described in
Proximity placement groups with zonal deployments, you can attach
Premium_ZRS storage to virtual machines that share a disk.

For more information, review the Limitations section of the documentation for Azure
shared disks.

Important considerations for Premium SSD shared disks

Consider these important points about Azure Premium SSD shared disks:

LRS for Premium SSD shared disks:


SAP deployment with LRS for Premium SSD shared disks operates with a single
Azure shared disk on one storage cluster. If there's a problem with the storage
cluster where the Azure shared disk is deployed, it affects your SAP ASCS/SCS
instance.

ZRS for Premium SSD shared disks:


Write latency for ZRS is higher than that of LRS because of cross-zonal copying
of data.
The distance between availability zones in different regions varies, and so does
ZRS disk latency across availability zones. Benchmark your disks to identify the
latency of ZRS disks in your region.
ZRS for Premium SSD shared disks synchronously replicates data across three
availability zones in the region. If there's a problem in one of the storage
clusters, your SAP ASCS/SCS instance continues to run because storage failover
is transparent to the application layer.
For more information, review the Limitations section of the documentation
about ZRS for managed disks.

For other important considerations about planning your SAP deployment, review Plan
and implement an SAP deployment on Azure and Azure Storage types for SAP
workloads.

Supported OS versions
Windows Server 2016, 2019, and later are supported. Use the latest datacenter images.

We strongly recommend using at least Windows Server 2019 Datacenter, for these
reasons:

WSFC in Windows Server 2019 is Azure aware.


Windows Server 2019 Datacenter includes integration and awareness of Azure host
maintenance and improved experience by monitoring for Azure scheduled events.
You can use distributed network names. (It's the default option.) There's no need to
have a dedicated IP address for the cluster network name. Also, you don't need to
configure an IP address on an Azure internal load balancer.

Shared disks in Azure with SIOS DataKeeper


Another option for shared disks is to use SIOS DataKeeper Cluster Edition to create a
mirrored storage that simulates cluster shared storage. The SIOS solution provides real-
time synchronous data replication.

To create a shared disk resource for a cluster:

1. Attach an additional disk to each of the virtual machines in a Windows cluster


configuration.
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes.
3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the
additional disk-attached volume from the source virtual machine to the additional
disk-attached volume of the target virtual machine. SIOS DataKeeper abstracts the
source and target local volumes, and then presents them to WSFC as one shared
disk.

7 Note

You don't need shared disks for high availability with some DBMS products, like
SQL Server. SQL Server Always On replicates DBMS data and log files from the local
disk of one cluster node to the local disk of another cluster node. In this case, the
Windows cluster configuration doesn't need a shared disk.

Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Clustering to reduce the total number of VMs.

This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.

) Important

Installing a local SAP application server on a SQL Server Always On node is not
supported.

Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). WSFC helps protect these SPOFs in a Windows environment.
Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
a reduction of the memory configuration for either SQL Server or the SAP application
server by 2 GB.

This diagram illustrates SAP application servers on WSFC nodes with the use of SIOS
DataKeeper:

Because the SAP application servers are installed locally, there's no need to set up any
synchronization.

This diagram illustrates SAP ASCS/SCS on SQL Server Always On nodes with the use of
SIOS DataKeeper:

For information about other configurations, see the following resources:


Optional configuration for SAP application servers on WSFC nodes using Windows
Scale-Out File Server

Optional configuration for SAP application servers on WSFC nodes using Server
Message Block in Azure NetApp Files

Optional configuration for SAP ASCS/SCS on SQL Server Always On nodes using
Windows Scale-Out File Server

Optional configuration for SAP ASCS/SCS on SQL Server Always On nodes using
Server Message Block in Azure NetApp Files

Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance

Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS
Article • 01/21/2024

Windows

This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a
high-availability SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk as an
option for clustering an SAP ASCS instance. Two alternatives for cluster shared disk are presented in the
documentation:

Azure shared disks


Using SIOS DataKeeper Cluster Edition to create mirrored storage that simulates a clustered shared disk

The documentation doesn't cover the database layer.

Prerequisites
Before you begin the installation, review this article:

Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a
cluster shared disk

Create the ASCS VMs


For the SAP ASCS/SCS cluster, deploy two VMs in an Azure availability set or across Azure availability zones, based on your deployment type. After the VMs are deployed:

Create an Azure internal load balancer for the SAP ASCS/SCS instance.
Add the Windows VMs to the AD domain.

Based on your deployment type, the host names and IP addresses of the scenario look like the following examples:

SAP deployment in Azure availability set

| Host name role | Host name | Static IP address | Availability set | Disk SkuName |
| --- | --- | --- | --- | --- |
| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | Premium_LRS |
| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | |
| Cluster network name | pr1clust | 10.0.0.42 (only for Win 2016 cluster) | n/a | |
| ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | |
| ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | n/a | |

SAP deployment in Azure availability zones

| Host name role | Host name | Static IP address | Availability zone | Disk SkuName |
| --- | --- | --- | --- | --- |
| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | AZ01 | Premium_ZRS |
| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | AZ02 | |
| Cluster network name | pr1clust | 10.0.0.42 (only for Win 2016 cluster) | n/a | |
| ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | |
| ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | n/a | |

The steps mentioned in this document remain the same for both deployment types. But if your cluster is running in an availability set, you need to deploy LRS for the Azure Premium shared disk (Premium_LRS), and if the cluster is running in an availability zone, deploy ZRS for the Azure Premium shared disk (Premium_ZRS).

7 Note

An Azure proximity placement group (PPG) is not required for an Azure shared disk. But for SAP deployment with PPGs, follow these guidelines:

If you are using a PPG for an SAP system deployed in a region, all virtual machines sharing a disk must be part of the same PPG.
If you are using a PPG for an SAP system deployed across zones, as described in the document Proximity placement groups with zonal deployments, you can attach Premium_ZRS storage to virtual machines sharing a disk.

Create Azure internal load balancer


During VM configuration, you can create or select an existing load balancer in the networking section. For the ENSA1 architecture on Windows, you need only one virtual IP address for SAP ASCS/SCS. The ENSA2 architecture, on the other hand, necessitates two virtual IP addresses: one for SAP ASCS/SCS and another for ERS2. When configuring a standard internal load balancer for the HA setup of SAP ASCS/SCS on Windows, follow the guidelines below.

1. Frontend IP Configuration: Create a frontend IP (example: 10.0.0.43). Select the same virtual network and subnet as your ASCS/ERS virtual machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs. In this example, the VMs are pr1-ascs-10 and pr1-ascs-11.
3. Inbound rules: Create a load balancing rule.
   - Frontend IP address: Select the frontend IP
   - Backend pool: Select the backend pool
   - Check "High availability ports"
   - Protocol: TCP
   - Health Probe: Create a health probe with the following details
     - Protocol: TCP
     - Port: [for example: 620<Instance-no.> for ASCS]
     - Interval: 5
     - Probe Threshold: 2
   - Idle timeout (minutes): 30
   - Check "Enable Floating IP"
4. Applicable only to the ENSA2 architecture: Create an additional frontend IP (10.0.0.44) and a load balancing rule (use 621<Instance-no.> for the ERS2 health probe port) as described in points 1 and 3.

7 Note

Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. So to control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It is currently not possible to set this property using the Azure portal, so use either the Azure CLI or a PowerShell command.
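
For illustration, here's a minimal PowerShell sketch for setting the probe threshold; it assumes an Az.Network version recent enough to expose the -ProbeThreshold parameter, and the resource group, load balancer, and probe names are placeholders:

PowerShell

# Minimal sketch: set probeThreshold (and numberOfProbes) to 2 on an existing TCP health probe.
$lb = Get-AzLoadBalancer -ResourceGroupName "MyResourceGroup" -Name "pr1-lb-ascs"
Set-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "ascs-health-probe" -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2 -ProbeThreshold 2
$lb | Set-AzLoadBalancer   # push the updated probe configuration to Azure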

) Important

A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in
load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need another IP
address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP
address) Standard Azure load balancer, there will be no outbound internet connectivity unless you
perform additional configuration to allow routing to public endpoints. For details on how to achieve
outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard
Load Balancer in SAP high-availability scenarios.

 Tip
With the Azure Resource Manager Template for WSFC for an SAP ASCS/SCS instance with Azure Shared Disk, you can automate the infrastructure preparation, using an Azure shared disk for one SAP SID with ERS1.
The ARM template creates two Windows 2019 or 2016 VMs, creates an Azure shared disk, and attaches it to the VMs. An Azure internal load balancer is created and configured as well. For details, see the ARM template.

Add registry entries on both cluster nodes of the ASCS/SCS instance
Azure Load Balancer may close connections if they are idle for a period that exceeds the idle
timeout. The SAP work processes open connections to the SAP enqueue process as soon as the first
enqueue/dequeue request needs to be sent. To avoid interrupting these connections, change the TCP/IP
KeepAliveTime and KeepAliveInterval values on both cluster nodes. If using ERS1, it's also necessary to add
SAP profile parameters, as described later in this article. The following registry entries must be changed on
both cluster nodes:

KeepAliveTime
KeepAliveInterval

| Path | Variable name | Variable type | Value | Documentation |
| --- | --- | --- | --- | --- |
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveTime | REG_DWORD (Decimal) | 120000 | KeepAliveTime |
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveInterval | REG_DWORD (Decimal) | 120000 | KeepAliveInterval |

To apply the changes, restart both cluster nodes.
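
If you prefer to script the change, here's a minimal sketch; it assumes an elevated PowerShell session on each cluster node:

PowerShell

# Minimal sketch: set both keepalive values to 120000 (decimal) on the local node.
$tcpipParams = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
Set-ItemProperty -Path $tcpipParams -Name "KeepAliveTime" -Value 120000 -Type DWord
Set-ItemProperty -Path $tcpipParams -Name "KeepAliveInterval" -Value 120000 -Type DWord
# Reboot the node afterward so the TCP/IP stack picks up the new values.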

Add the Windows VMs to the domain


After you assign static IP addresses to the virtual machines, add the virtual machines to the domain.
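
If you want to script the domain join, here's a minimal sketch; the domain name is a placeholder:

PowerShell

# Minimal sketch: join the local VM to the AD domain and reboot.
Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart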

Install and configure Windows failover cluster

Install the Windows failover cluster feature


Run this command on one of the cluster nodes:

PowerShell

# Hostnames of the Win cluster for SAP ASCS/SCS
$SAPSID = "PR1"
$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
$ClusterName = $SAPSID.ToLower() + "clust"

# Install Windows features.
# After the feature installs, manually reboot both nodes
Invoke-Command $ClusterNodes {Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools }

Once the feature installation has completed, reboot both cluster nodes.
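
To script the reboot as well, here's a minimal sketch; it assumes WinRM connectivity to both nodes and reuses the $ClusterNodes variable from the previous block:

PowerShell

# Minimal sketch: reboot both nodes and wait until PowerShell remoting is available again.
Restart-Computer -ComputerName $ClusterNodes -Force -Wait -For PowerShell -Timeout 600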

Test and configure Windows failover cluster


On Windows 2019, the cluster automatically recognizes that it's running in Azure and, as the default option for the cluster management IP, uses a distributed network name. Therefore, it uses any of the cluster nodes' local IP addresses. As a result, there's no need for a dedicated (virtual) network name for the cluster, and there's no need to configure this IP address on the Azure internal load balancer.

For more information, see Windows Server 2019 Failover Clustering New features. Run this command on one of the cluster nodes:

PowerShell

# Hostnames of the Win cluster for SAP ASCS/SCS
$SAPSID = "PR1"
$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
$ClusterName = $SAPSID.ToLower() + "clust"

# IP address for the cluster network name is needed ONLY on a Windows Server 2016 cluster
$ClusterStaticIPAddress = "10.0.0.42"

# Test cluster
Test-Cluster -Node $ClusterNodes -Verbose

$ComputerInfo = Get-ComputerInfo
$WindowsVersion = $ComputerInfo.WindowsProductName

if($WindowsVersion -eq "Windows Server 2019 Datacenter"){
    write-host "Configuring Windows Failover Cluster on Windows Server 2019 Datacenter..."
    New-Cluster -Name $ClusterName -Node $ClusterNodes -Verbose
}elseif($WindowsVersion -eq "Windows Server 2016 Datacenter"){
    write-host "Configuring Windows Failover Cluster on Windows Server 2016 Datacenter..."
    New-Cluster -Name $ClusterName -Node $ClusterNodes -StaticAddress $ClusterStaticIPAddress -Verbose
}else{
    Write-Error "Not supported Windows version!"
}

Configure cluster cloud quorum


As you use Windows Server 2016 or 2019, we recommend configuring Azure Cloud Witness as the cluster quorum.

Run this command on one of the cluster nodes:

PowerShell

$AzureStorageAccountName = "cloudquorumwitness"
Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -AccessKey <YourAzureStorageAccessKey> -Verbose
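
The storage account referenced by -AccountName must already exist. Here's a minimal sketch for creating it and fetching its key; it assumes the Az.Storage module, and the resource group, region, and account name are placeholders:

PowerShell

# Minimal sketch: create a general-purpose v2 storage account for the cloud witness and use its key.
New-AzStorageAccount -ResourceGroupName "MyResourceGroup" -Name "cloudquorumwitness" -SkuName Standard_LRS -Location "westeurope" -Kind StorageV2
$witnessKey = (Get-AzStorageAccountKey -ResourceGroupName "MyResourceGroup" -Name "cloudquorumwitness")[0].Value
Set-ClusterQuorum -CloudWitness -AccountName "cloudquorumwitness" -AccessKey $witnessKey -Verbose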

Tuning the Windows failover cluster thresholds


After you successfully install the Windows failover cluster, you need to adjust some thresholds to be suitable for clusters deployed in Azure. The parameters to change are documented in Tuning failover cluster network thresholds. Assuming that the two VMs that make up the Windows cluster configuration for ASCS/SCS are in the same subnet, change the following parameters to these values:

SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30

These settings were tested with customers and offer a good compromise. They're resilient enough, but
they also provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
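
Here's a minimal sketch for applying these values; it assumes you run it on one cluster node with the FailoverClusters module available (property casing follows the cluster common properties):

PowerShell

# Minimal sketch: set the recommended Azure-tuned heartbeat thresholds on the cluster.
$cluster = Get-Cluster
$cluster.SameSubnetDelay = 2000      # milliseconds between heartbeats
$cluster.SameSubnetThreshold = 15    # missed heartbeats before a node is marked down
$cluster.RouteHistoryLength = 30     # route history entries kept for diagnostics
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, RouteHistoryLength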

Configure Azure shared disk


This section is only applicable if you're using an Azure shared disk.

Create and attach Azure shared disk with PowerShell


Run this command on one of the cluster nodes. You'll need to adjust the values for your resource group,
Azure region, SAPSID, and so on.

PowerShell

#############################
# Create Azure Shared Disk
#############################
$ResourceGroupName = "MyResourceGroup"
$location = "MyAzureRegion"
$SAPSID = "PR1"

$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"

# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk
$NumberOfWindowsClusterNodes = 2

# For SAP deployment in availability set, use below storage SkuName
$SkuName = "Premium_LRS"
# For SAP deployment in availability zone, use below storage SkuName
$SkuName = "Premium_ZRS"

$diskConfig = New-AzDiskConfig -Location $location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig

##################################
## Attach the disk to cluster VMs
##################################
# ASCS Cluster VM1
$ASCSClusterVM1 = "$SAPSID-ascs-10"

# ASCS Cluster VM2
$ASCSClusterVM2 = "$SAPSID-ascs-11"

# Add the Azure Shared Disk to Cluster Node 1
$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM1
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

# Add the Azure Shared Disk to Cluster Node 2
$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM2
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

Format the shared disk with PowerShell


1. Get the disk number. Run these PowerShell commands on one of the cluster nodes:

PowerShell

Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize

# Example output
# Number Friendly Name     Serial Number HealthStatus OperationalStatus Total Size Partition Style
# ------ -------------     ------------- ------------ ----------------- ---------- ---------------
# 2      Msft Virtual Disk               Healthy      Online                512 GB RAW

2. Format the disk. In this example, it's disk number 2.

PowerShell

# Format SAP ASCS Disk number '2', with drive letter 'S'
$SAPSID = "PR1"
$DiskNumber = 2
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"

Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose
# Example output
# DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining      Size
# ----------- --------------- ---------- --------- ------------ ----------------- ------------- ---------
# S           PR1SAP          ReFS       Fixed     Healthy      OK                    504.98 GB 511.81 GB

3. Verify that the disk is now visible as a cluster disk.


PowerShell

# List all disks
Get-ClusterAvailableDisk -All
# Example output
# Cluster    : pr1clust
# Id         : 88ff1d94-0cf1-4c70-89ae-cbbb2826a484
# Name       : Cluster Disk 1
# Number     : 2
# Size       : 549755813888
# Partitions : {\\?\GLOBALROOT\Device\Harddisk2\Partition2\}

4. Register the disk in the cluster.

PowerShell

# Add the disk to cluster
Get-ClusterAvailableDisk -All | Add-ClusterDisk
# Example output
# Name           State  OwnerGroup        ResourceType
# ----           -----  ----------        ------------
# Cluster Disk 1 Online Available Storage Physical Disk

SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk

This section is only applicable if you're using the third-party software SIOS DataKeeper Cluster Edition to create mirrored storage that simulates a cluster shared disk.

Now you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS instance, you need a shared disk resource. One option is SIOS DataKeeper Cluster Edition, a third-party solution that you can use to create shared disk resources.

Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk involves these tasks:

Add Microsoft .NET Framework, if needed. See the SIOS documentation for the most up-to-date .NET Framework requirements.
Install SIOS DataKeeper
Configure SIOS DataKeeper

Install SIOS DataKeeper


Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with
SIOS DataKeeper, create a synced mirror and then simulate cluster shared storage.

Before you install the SIOS software, create the DataKeeperSvc domain user.

7 Note

Add the DataKeeperSvc domain user to the Local Administrator group on both cluster nodes.

1. Install the SIOS software on both cluster nodes.


First page of the SIOS DataKeeper installation

2. In the dialog box, select Yes.

DataKeeper informs you that a service will be disabled

3. In the dialog box, we recommend that you select Domain or Server account.
User selection for SIOS DataKeeper

4. Enter the domain account user name and password that you created for SIOS DataKeeper.

Enter the domain user name and password for the SIOS DataKeeper installation

5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35.
Enter your SIOS DataKeeper license key

6. When prompted, restart the virtual machine.

Configure SIOS DataKeeper


After you install SIOS DataKeeper on both nodes, start the configuration. The goal of the configuration is
to have synchronous data replication between the additional disks that are attached to each of the virtual
machines.

1. Start the DataKeeper Management and Configuration tool, and then select Connect Server.

SIOS DataKeeper Management and Configuration tool


2. Enter the name or TCP/IP address of the first node the Management and Configuration tool should
connect to, and, in a second step, the second node.

Insert the name or TCP/IP address of the first node the Management and Configuration tool should
connect to, and in a second step, the second node

3. Create the replication job between the two nodes.

Create a replication job

A wizard guides you through the process of creating a replication job.

4. Define the name of the replication job.

Define the name of the replication job


Define the base data for the node, which should be the current source node

5. Define the name, TCP/IP address, and disk volume of the target node.

Define the name, TCP/IP address, and disk volume of the current target node

6. Define the compression algorithms. In our example, we recommend that you compress the
replication stream. Especially in resynchronization situations, the compression of the replication
stream dramatically reduces resynchronization time. Compression uses the CPU and RAM resources
of a virtual machine. As the compression rate increases, so does the volume of CPU resources that
are used. You can adjust this setting later.

7. Another setting you need to check is whether the replication occurs asynchronously or
synchronously. When you protect SAP ASCS/SCS configurations, you must use synchronous
replication.
Define replication details

8. Define whether the volume that is replicated by the replication job should be represented to a
Windows Server failover cluster configuration as a shared disk. For the SAP ASCS/SCS configuration,
select Yes so that the Windows cluster sees the replicated volume as a shared disk that it can use as
a cluster volume.

Select Yes to set the replicated volume as a cluster volume

After the volume is created, the DataKeeper Management and Configuration tool shows that the
replication job is active.

DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active

Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45:
Failover Cluster Manager shows the disk that DataKeeper replicated

Next steps
Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS
instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an SAP ASCS/SCS instance in Azure
Article • 02/10/2023

This article describes how to install and configure a high-availability SAP system in Azure
by using a Windows Server failover cluster and cluster shared disk for clustering an SAP
ASCS/SCS instance. As described in Architecture guide: Cluster an SAP ASCS/SCS
instance on a Windows failover cluster by using a cluster shared disk, there are two
alternatives for cluster shared disk:

Azure shared disks


Using SIOS DataKeeper Cluster Edition to create mirrored storage that will simulate a clustered shared disk

Prerequisites
Before you begin the installation, review these documents:

Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover


cluster by using a cluster shared disk

Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance

We don't describe the DBMS setup in this article because setups vary depending on the
DBMS system you use. We assume that high-availability concerns with the DBMS are
addressed with the functionalities that different DBMS vendors support for Azure.
Examples are Always On or database mirroring for SQL Server and Oracle Data Guard for
Oracle databases. The high availability scenarios for the DBMS are not covered in this
article.

There are no special considerations when different DBMS services interact with a
clustered SAP ASCS or SCS configuration in Azure.

7 Note
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and
ABAP+Java systems are almost identical. The most significant difference is that an
SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance
running in the same Microsoft failover cluster group. Any installation differences for
each SAP NetWeaver installation stack are explicitly mentioned. You can assume
that the rest of the steps are the same.

Install SAP with a high-availability ASCS/SCS instance

) Important

If you use SIOS to present shared disk, don't place your page file on the SIOS
DataKeeper mirrored volumes. You can leave your page file on the temporary drive
D of an Azure virtual machine, which is the default. If it's not already there, move
the Windows page file to drive D of your Azure virtual machine.

Installing SAP with a high-availability ASCS/SCS instance involves these tasks:

Create a virtual host name for the clustered SAP ASCS/SCS instance.
Install SAP on the first cluster node.
Modify the SAP profile of the ASCS/SCS instance.
Add a probe port.
Open the Windows firewall probe port.

Create a virtual host name for the clustered SAP ASCS/SCS instance
1. In the Windows DNS manager, create a DNS entry for the virtual host name of the
ASCS/SCS instance.

) Important

The IP address that you assign to the virtual host name of the ASCS/SCS
instance must be the same as the IP address that you assigned to Azure Load
Balancer.
Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address

2. If you're using the new SAP Enqueue Replication Server 2, which is also a clustered instance, you need to reserve a virtual host name for ERS2 in DNS as well.

) Important

The IP address that you assign to the virtual host name of the ERS2 instance must be the second IP address that you assigned to Azure Load Balancer.

Define the DNS entry for the SAP ERS2 cluster virtual name and TCP/IP address

3. To define the IP address that's assigned to the virtual host name, select DNS
Manager > Domain.
New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
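
If you prefer to script the DNS entries, here's a minimal sketch; it assumes the DnsServer module on the DNS server, the zone name is a placeholder, and the host names and IPs follow this article's example:

PowerShell

# Minimal sketch: register the ASCS/SCS virtual host name, plus the ERS2 one if clustered.
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr1-ascscl" -IPv4Address "10.0.0.43"
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr1-erscl" -IPv4Address "10.0.0.44"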

Install the SAP first cluster node


1. Execute the first cluster node option on cluster node A. Select:

ABAP system: ASCS instance number 00


Java system: SCS instance number 01
ABAP+Java system: ASCS instance number 00 and SCS instance number 01

) Important

Keep in mind that the configuration in the Azure internal load balancer load balancing rules (if using the Basic SKU) and the selected SAP instance numbers must match.

2. Follow the SAP-described installation procedure. In the start installation option "First Cluster Node", make sure to choose "Cluster Shared Disk" as the configuration option.

 Tip

The SAP installation documentation describes how to install the first ASCS/SCS
cluster node.

Modify the SAP profile of the ASCS/SCS instance


If you have Enqueue Replication Server 1, add SAP profile parameter
enque/encni/set_so_keepalive as described below. The profile parameter prevents
connections between SAP work processes and the enqueue server from closing when
they are idle for too long. The SAP parameter is not required for ERS2.

1. Add this profile parameter to the SAP ASCS/SCS instance profile, if using ERS1.

enque/encni/set_so_keepalive = true

For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as
described in SAP note 1410736 .

2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.

Add a probe port


Use the internal load balancer's probe functionality to make the entire cluster
configuration work with Azure Load Balancer. The Azure internal load balancer usually
distributes the incoming workload equally between participating virtual machines.

However, this won't work in some cluster configurations because only one instance is active. The other instance is passive and can't accept any of the workload. The probe functionality helps the Azure internal load balancer detect which instance is active, and target only the active instance.

) Important

In this example configuration, the ProbePort is set to 620<Instance-no.>. For an SAP ASCS instance with instance number 00, it's 62000. You'll need to adjust the configuration to match your SAP instance numbers and your SAP SID.

To add a probe port, run this PowerShell function on one of the cluster VMs:

For the SAP ASCS/SCS instance:

PowerShell

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62000

If you're using ERS2, which is clustered, run this command as well. There's no need to configure a probe port for ERS1, because it isn't clustered.
PowerShell

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID SID -ProbePort 62001 -IsSAPERSClusteredInstance $True

The code for the function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource looks like this:

PowerShell

function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {

<#
.SYNOPSIS
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe Port on the 'SAP $SAPSID IP' cluster resource.

.DESCRIPTION
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new Azure Load Balancer Health Probe Port on the 'SAP $SAPSID IP' cluster resource.
It will also restart the SAP cluster group (default behavior), to activate the changes.

You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.

The expectation is that the SAP group is installed with the official SWPM installation tool, which sets the default expected naming convention for:
- SAP Cluster Group: 'SAP $SAPSID'
- SAP Cluster IP Address Resource: 'SAP $SAPSID IP'

.PARAMETER SAPSID
SAP SID - 3 characters, starting with a letter.

.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.

.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so the SAP cluster group will be restarted to activate the changes.

.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is '$False'.
If set to $True, then handle the new clustered SAP ERS2 instance.

.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource 'SAP AB1 IP', and restart the SAP cluster group 'SAP AB1' to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000

.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource 'SAP AB1 IP'. The SAP cluster group 'SAP AB1' IS NOT restarted, therefore the changes are NOT active.
# To activate the changes, you need to manually restart the 'SAP AB1' cluster group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62000 -RestartSAPClusterGroup $False

.EXAMPLE
# Set the probe port to 62001 on SAP cluster resource 'SAP AB1 ERS IP'. The SAP cluster group 'SAP AB1 ERS' IS restarted, to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1 -ProbePort 62001 -IsSAPERSClusteredInstance $True

#>

    [CmdletBinding()]
    param(

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [ValidateLength(3,3)]
        [string]$SAPSID,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [int] $ProbePort,

        [Parameter(Mandatory=$False)]
        [bool] $RestartSAPClusterGroup = $True,

        [Parameter(Mandatory=$False)]
        [bool] $IsSAPERSClusteredInstance = $False
    )

    BEGIN{}

    PROCESS{
        try{

            if($IsSAPERSClusteredInstance){
                # Handle the clustered SAP ERS instance
                $SAPClusterRoleName = "SAP $SAPSID ERS"
                $SAPIPresourceName = "SAP $SAPSID ERS IP"
            }else{
                # Handle the clustered SAP ASCS/SCS instance
                $SAPClusterRoleName = "SAP $SAPSID"
                $SAPIPresourceName = "SAP $SAPSID IP"
            }

            $SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
            $IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
            $NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
            $SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
            $OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
            $EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
            $OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

            $var = Get-ClusterResource | Where-Object { $_.name -eq $SAPIPresourceName }

            Write-Output "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName' are:"
            Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter

            Write-Output " "
            Write-Output "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is '$OldProbePort'."
            Write-Output " "
            Write-Output "Setting the new probe port property of the SAP cluster resource '$SAPIPresourceName' to '$ProbePort' ..."
            Write-Output " "

            $var | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}

            Write-Output " "

            if($RestartSAPClusterGroup){
                Write-Output ""
                Write-Output "Activating changes..."

                Write-Output " "
                Write-Output "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..."
                Stop-ClusterResource -Name $SAPIPresourceName
                sleep 5

                Write-Output "Starting SAP cluster role '$SAPClusterRoleName' ..."
                Start-ClusterGroup -Name $SAPClusterRoleName

                Write-Output "New ProbePort parameter is active."
                Write-Output " "

                Write-Output "New configuration parameters for SAP IP cluster resource '$SAPIPresourceName':"
                Write-Output " "
                Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
            }else{
                Write-Output "SAP cluster role '$SAPClusterRoleName' is not restarted, therefore changes are not activated."
            }
        }
        catch{
            Write-Error $_.Exception.Message
        }
    }
    END {}
}

Open the Windows firewall probe port


Open a Windows firewall probe port on both cluster nodes. Use the following script to
open a Windows firewall probe port. Update the PowerShell variables for your
environment.
If using ERS2, you will also need to open the firewall port for the ERS2 probe port.

PowerShell

$ProbePort = 62000 # ProbePort of the Azure internal load balancer
New-NetFirewallRule -Name AzureProbePort -DisplayName "Rule for Azure Probe Port" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ProbePort
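
If ERS2 is clustered, open a second rule for its probe port. Here's a minimal sketch, assuming an ERS2 instance number of 01 (probe port 62001):

PowerShell

# Minimal sketch: open the ERS2 probe port on both cluster nodes.
$ERS2ProbePort = 62001
New-NetFirewallRule -Name AzureProbePortERS2 -DisplayName "Rule for Azure Probe Port (ERS2)" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ERS2ProbePort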

Install the database instance


To install the database instance, follow the process that's described in the SAP
installation documentation.

Install the second cluster node


To install the second cluster node, follow the steps that are described in the SAP installation guide.

Install the SAP Primary Application Server


Install the Primary Application Server (PAS) instance <SID>-di-0 on the virtual machine
that you've designated to host the PAS. There are no dependencies on Azure. If using
SIOS, there are no DataKeeper-specific settings.
Install the SAP Additional Application Server
Install an SAP Additional Application Server (AAS) on all the virtual machines that you've
designated to host an SAP Application Server instance.

Test the SAP ASCS/SCS instance failover


For the outlined failover tests, we assume that SAP ASCS is active on node A.

1. Verify that the SAP system can successfully fail over from node A to node B. Choose one of these options to initiate a failover of the SAP <SID> cluster group from cluster node A to cluster node B:

Failover Cluster Manager


Failover Cluster PowerShell

PowerShell

$SAPSID = "PR1" # SAP <SID>

$SAPClusterGroup = "SAP $SAPSID"


Move-ClusterGroup -Name $SAPClusterGroup

2. Restart cluster node A within the Windows guest operating system. This initiates an
automatic failover of the SAP <SID> cluster group from node A to node B.

3. Restart cluster node A from the Azure portal. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.

4. Restart cluster node A by using Azure PowerShell. This initiates an automatic failover of the SAP <SID> cluster group from node A to node B. (See the sketch at the end of this section.)

5. Verification

After failover, verify that the SAP <SID> cluster group is running on cluster node B.
In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster
node B

After failover, verify shared disk is now mounted on cluster node B.

After failover, if using SIOS, verify that SIOS DataKeeper is replicating data
from source volume drive S on cluster node B to target volume drive S on
cluster node A.

SIOS DataKeeper replicates the local volume from cluster node B to cluster
node A
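
For step 4 of the failover test, here's a minimal Azure PowerShell sketch; it assumes the Az.Compute module, and the resource group and VM name are placeholders:

PowerShell

# Minimal sketch: restart cluster node A from Azure to trigger the automatic failover.
Restart-AzVM -ResourceGroupName "MyResourceGroup" -Name "pr1-ascs-10"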
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and Azure shared disk
Article • 01/21/2024

Windows

This article focuses on how to move from a single SAP ASCS/SCS installation to
configuration of multiple SAP system IDs (SIDs) by installing additional SAP ASCS/SCS
clustered instances into an existing Windows Server Failover Clustering (WSFC) cluster
with an Azure shared disk. When you complete this process, you've configured an SAP
multi-SID cluster.

Prerequisites and limitations


You can use Azure Premium SSD disks as Azure shared disks for the SAP ASCS/SCS
instance. The following limitations are currently in place:

Azure Ultra Disk Storage disks and Azure Standard SSD disks aren't supported as
Azure shared disks for SAP workloads.
Azure shared disks with Premium SSD disks are supported for SAP deployment in
availability sets and availability zones.
Azure shared disks with Premium SSD disks come with two storage options:
Locally redundant storage (LRS) for Premium SSD shared disks ( skuName value of
Premium_LRS ) is supported with deployment in availability sets.

Zone-redundant storage (ZRS) for Premium SSD shared disks ( skuName value of
Premium_ZRS ) is supported with deployment in availability zones.

The Azure shared disk value maxShares determines how many cluster nodes can
use the shared disk. For an SAP ASCS/SCS instance, you typically configure two
nodes in WSFC. You then set the value for maxShares to 2 .
An Azure proximity placement group (PPG) isn't required for Azure shared disks.
But for SAP deployment with PPGs, follow these guidelines:
If you're using PPGs for an SAP system deployed in a region, all virtual machines
that share a disk must be part of the same PPG.
If you're using PPGs for an SAP system deployed across zones, as described in
Proximity placement groups with zonal deployments, you can attach
Premium_ZRS storage to virtual machines that share a disk.

For more information, review the Limitations section of the documentation for Azure
shared disks.

Important considerations for Premium SSD shared disks


Consider these important points about Azure Premium SSD shared disks:

LRS for Premium SSD shared disks:


SAP deployment with LRS for Premium SSD shared disks operates with a single
Azure shared disk on one storage cluster. If there's a problem with the storage
cluster where the Azure shared disk is deployed, it affects your SAP ASCS/SCS
instance.

ZRS for Premium SSD shared disks:


Write latency for ZRS is higher than that of LRS because of cross-zonal copying of data.
The distance between availability zones in different regions varies, and so does
ZRS disk latency across availability zones. Benchmark your disks to identify the
latency of ZRS disks in your region.
ZRS for Premium SSD shared disks synchronously replicates data across three
availability zones in the region. If there's a problem in one of the storage
clusters, your SAP ASCS/SCS instance continues to run because storage failover
is transparent to the application layer.
For more information, review the Limitations section of the documentation
about ZRS for managed disks.

) Important

The setup must meet the following conditions:

The SID for each database management system (DBMS) must have its own
dedicated WSFC cluster.
SAP application servers that belong to one SAP SID must have their own
dedicated virtual machines (VMs).
A mix of Enqueue Replication Server 1 (ERS1) and Enqueue Replication Server
2 (ERS2) in the same cluster is not supported.

Supported OS versions
Windows Server 2016, 2019, and later are supported. Use the latest datacenter images.

We strongly recommend using at least Windows Server 2019 Datacenter, for these
reasons:

WSFC in Windows Server 2019 is Azure aware.


Windows Server 2019 Datacenter includes integration and awareness of Azure host
maintenance and improved experience by monitoring for Azure scheduled events.
You can use distributed network names. (It's the default option.) There's no need to
have a dedicated IP address for the cluster network name. Also, you don't need to
configure an IP address on an Azure internal load balancer.

Architecture
Both ERS1 and ERS2 are supported in a multi-SID configuration. A mix of ERS1 and ERS2
isn't supported in the same cluster.

The following example shows two SAP SIDs. Both have an ERS1 architecture where:

SAP SID1 is deployed on a shared disk with ERS1. The ERS instance is installed on a
local host and on a local drive.

SAP SID1 has its own virtual IP address (SID1 (A)SCS IP1), which is configured on
the Azure internal load balancer.

SAP SID2 is deployed on a shared disk with ERS1. The ERS instance is installed on a
local host and on a local drive.

SAP SID2 has its own virtual IP address (SID2 (A)SCS IP2), which is configured on the Azure internal load balancer.
The next example also shows two SAP SIDs. Both have an ERS2 architecture where:

SAP SID1 is deployed on a shared disk with ERS2, which is clustered and is deployed on a local drive.

SAP SID1 has its own virtual IP address (SID1 (A)SCS IP1), which is configured on the Azure internal load balancer.

SAP ERS2 has its own virtual IP address (SID1 ERS2 IP2), which is configured on the Azure internal load balancer.

SAP SID2 is deployed on a shared disk with ERS2, which is clustered and is deployed on a local drive.

SAP SID2 has its own virtual IP address (SID2 (A)SCS IP3), which is configured on the Azure internal load balancer.

SAP ERS2 has its own virtual IP address (SID2 ERS2 IP4), which is configured on the Azure internal load balancer.

There's a total of four virtual IP addresses:

SID1 (A)SCS IP1
SID1 ERS2 IP2
SID2 (A)SCS IP3
SID2 ERS2 IP4

Infrastructure preparation
You install a new SAP SID PR2 instance, in addition to the existing clustered SAP PR1
ASCS/SCS instance.

Host names and IP addresses


Based on your deployment type, the host names and the IP addresses of the scenario
should be like the following examples.

Here are the details for an SAP deployment in an Azure availability set:

| Host name role | Host name | Static IP address | Availability set | Disk SkuName value |
| --- | --- | --- | --- | --- |
| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | Premium_LRS |
| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | |
| Cluster network name | pr1clust | 10.0.0.42 (only for a Windows Server 2016 cluster) | Not applicable | |
| SID1 ASCS cluster network name | pr1-ascscl | 10.0.0.43 | Not applicable | |
| SID1 ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | Not applicable | |
| SID2 ASCS cluster network name | pr2-ascscl | 10.0.0.45 | Not applicable | |
| SID2 ERS cluster network name (only for ERS2) | pr2-erscl | 10.0.0.46 | Not applicable | |

Here are the details for an SAP deployment in Azure availability zones:

| Host name role | Host name | Static IP address | Availability zone | Disk SkuName value |
| --- | --- | --- | --- | --- |
| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | AZ01 | Premium_ZRS |
| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | AZ02 | |
| Cluster network name | pr1clust | 10.0.0.42 (only for a Windows Server 2016 cluster) | Not applicable | |
| SID1 ASCS cluster network name | pr1-ascscl | 10.0.0.43 | Not applicable | |
| SID1 ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | Not applicable | |
| SID2 ASCS cluster network name | pr2-ascscl | 10.0.0.45 | Not applicable | |
| SID2 ERS cluster network name (only for ERS2) | pr2-erscl | 10.0.0.46 | Not applicable | |

The steps in this article remain the same for both deployment types. But if your cluster is
running in an availability set, you need to deploy LRS for Azure Premium SSD shared
disks ( Premium_LRS ). If your cluster is running in an availability zone, you need to deploy
ZRS for Azure Premium SSD shared disks ( Premium_ZRS ).

Create an Azure internal load balancer


For the multi-SID configuration of SAP SID PR2, you can use the same internal load balancer that you created for the SAP SID PR1 system. For the ENSA1 architecture on Windows, you need only one virtual IP address for SAP ASCS/SCS. The ENSA2 architecture, on the other hand, necessitates two virtual IP addresses: one for SAP ASCS and another for ERS2.

Configure an additional frontend IP and load balancing rule for the SAP SID PR2 system on the existing load balancer by using the following guidelines. This section assumes that the configuration of the standard internal load balancer for SAP SID PR1 is already in place, as described in create load balancer.

1. Open the same standard internal load balancer that you created for the SAP SID PR1 system.
2. Frontend IP Configuration: Create a frontend IP (example: 10.0.0.45).
3. Backend Pool: The backend pool is the same as that of the SAP SID PR1 system.
4. Inbound rules: Create a load balancing rule.
   - Frontend IP address: Select the frontend IP
   - Backend pool: Select the backend pool
   - Check "High availability ports"
   - Protocol: TCP
   - Health Probe: Create a health probe with the following details
     - Protocol: TCP
     - Port: [for example: 620<Instance-no.> for SAP SID PR2 ASCS]
     - Interval: 5
     - Probe Threshold: 2
   - Idle timeout (minutes): 30
   - Check "Enable Floating IP"
5. Applicable only to the ENSA2 architecture: Create an additional frontend IP (10.0.0.46) and a load balancing rule (use 621<Instance-no.> for the ERS2 health probe port) as described in points 2 and 4.

7 Note

Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. So to control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It is currently not possible to set this property using the Azure portal, so use either the Azure CLI or a PowerShell command.

) Important

A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need another IP address for the VM, deploy a second NIC.

7 Note

When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there will be no
outbound internet connectivity unless you perform additional configuration to
allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.

Create and attach a second Azure shared disk


Run this command on one of the cluster nodes. Adjust the values for details like your
resource group, Azure region, and SAP SID.

PowerShell

$ResourceGroupName = "MyResourceGroup"
$location = "MyRegion"
$SAPSID = "PR2"
$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"
$NumberOfWindowsClusterNodes = 2
# For SAP deployment in an availability set, use this storage SkuName value
$SkuName = "Premium_LRS"
# For SAP deployment in an availability zone, use this storage SkuName value
$SkuName = "Premium_ZRS"

$diskConfig = New-AzDiskConfig -Location $location -SkuName $SkuName -


CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount
$NumberOfWindowsClusterNodes

$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName


$DiskName -Disk $diskConfig
##################################
## Attach the disk to cluster VMs
##################################
# ASCS cluster VM1
$ASCSClusterVM1 = "pr1-ascs-10"
# ASCS cluster VM2
$ASCSClusterVM2 = "pr1-ascs-11"
# Next free LUN
$LUNNumber = 1

# Add the Azure shared disk to Cluster Node 1


$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM1
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -
ManagedDiskId $dataDisk.Id -Lun $LUNNumber
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

# Add the Azure shared disk to Cluster Node 2


$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM2
$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -
ManagedDiskId $dataDisk.Id -Lun $LUNNumber
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

Format the shared disk by using PowerShell


1. Get the disk number. Run these PowerShell commands on one of the cluster
nodes:

PowerShell

Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize
# Example output
# Number Friendly Name     Serial Number HealthStatus OperationalStatus Total Size Partition Style
# ------ -------------     ------------- ------------ ----------------- ---------- ---------------
# 3      Msft Virtual Disk               Healthy      Online                512 GB RAW
2. Format the disk. In this example, it's disk number 3:

PowerShell

# Format SAP ASCS disk number 3, with drive letter S
$SAPSID = "PR2"
$DiskNumber = 3
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"

Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose
# Example output
# DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
# ----------- --------------- ---------- --------- ------------ ----------------- ------------- ----
# S           PR2SAP          ReFS       Fixed     Healthy      OK                504.98 GB     511.81 GB

3. Verify that the disk is now visible as a cluster disk:

PowerShell

# List all disks
Get-ClusterAvailableDisk -All
# Example output
# Cluster    : pr1clust
# Id         : c469b5ad-d089-4d8f-ae4c-d834cbbde1a2
# Name       : Cluster Disk 2
# Number     : 3
# Size       : 549755813888
# Partitions : {\\?\GLOBALROOT\Device\Harddisk3\Partition2\}

4. Register the disk in the cluster:

PowerShell

# Add the disk to the cluster
Get-ClusterAvailableDisk -All | Add-ClusterDisk
# Example output
# Name           State  OwnerGroup        ResourceType
# ----           -----  ----------        ------------
# Cluster Disk 2 Online Available Storage Physical Disk
Create a virtual host name for the clustered
SAP ASCS/SCS instance
1. Create a DNS entry for the virtual host name for the new SAP ASCS/SCS instance in
the Windows DNS manager.

The IP address that you assigned to the virtual host name in DNS must be the
same as the IP address that you assigned in Azure Load Balancer.

2. If you're using a clustered instance of SAP ERS2, you need to reserve in DNS a
virtual host name for ERS2.

The IP address that you assigned to the virtual host name for ERS2 in DNS must be
the same as the IP address that you assigned in Azure Load Balancer.
3. To define the IP address assigned to the virtual host name, select DNS Manager >
Domain.
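If you prefer to script the DNS entry, here's a minimal sketch that uses the DnsServer
PowerShell module on a Windows DNS server. The zone name, host name, and IP address are
hypothetical placeholders; use the values that match your Azure Load Balancer frontend.

PowerShell

# Hypothetical sketch: create the A record for the ASCS/SCS virtual host name
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ascs-vir" -IPv4Address "10.0.0.43" -CreatePtr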

SAP installation
Install the SAP first cluster node
Follow the SAP-described installation procedure. Be sure to select First Cluster Node as
the option for starting installation. Select Cluster Shared Disk as the configuration
option. Choose the newly created shared disk.

Modify the SAP profile of the ASCS/SCS instance

If you're running ERS1, add the SAP profile parameter enque/encni/set_so_keepalive.
The profile parameter prevents connections between SAP work processes and the
enqueue server from closing when they're idle for too long. The SAP parameter isn't
required for ERS2.

1. Add this profile parameter to the SAP ASCS/SCS instance profile, if you're using
ERS1:

PowerShell

enque/encni/set_so_keepalive = true

For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as
described in SAP note 1410736  (see the sketch after this list).

2. To apply the changes to the SAP profile parameter, restart the SAP ASCS/SCS
instance.
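Here's a hedged sketch for the keepalive OS parameters mentioned in step 1. The registry
values shown are the ones commonly used for SAP on Azure, but they're assumptions here;
confirm the exact names and values in SAP note 1410736 . A restart of the node is
required to activate them.

PowerShell

# Hedged sketch: set TCP keepalive registry values on each cluster node
# (confirm the values against SAP note 1410736; a reboot is required)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name "KeepAliveTime" -Value 120000 -Type DWord
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name "KeepAliveInterval" -Value 120000 -Type DWord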

Configure a probe port on the cluster resource


Use the internal load balancer's probe functionality to make the entire cluster
configuration work with Azure Load Balancer. The Azure internal load balancer usually
distributes the incoming workload equally between participating virtual machines.

However, this approach won't work in some cluster configurations, because only one
instance is active. The other instance is passive and can't accept any of the workload.
The probe functionality lets the Azure internal load balancer detect which instance is
active and target only the active instance.

) Important

In this example configuration, the probe port is set to 620<Instance-no.>. For SAP ASCS
with instance number 02, it's 62002.
You need to adjust the configuration to match your SAP instance numbers and your
SAP SID.

To add a probe port, run this PowerShell function on one of the cluster VMs:

If you're using SAP ASCS/SCS with instance number 02:

PowerShell

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
PR2 -ProbePort 62002

If you're using ERS2 with instance number 12, configure a probe port for it as well.
You don't need to configure a probe port for ERS1, because ERS2 is clustered,
whereas ERS1 isn't.

PowerShell

Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
PR2 -ProbePort 62012 -IsSAPERSClusteredInstance $True

The code for the function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource looks like this example:

PowerShell

function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer health probe port on the SAP $SAPSID IP cluster
resource.

.DESCRIPTION
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer health probe port on the SAP $SAPSID IP cluster
resource.
It will also restart the SAP cluster group (default behavior), to activate
the changes.

You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.

The expectation is that the SAP group is installed with the official SWPM
installation tool, which will set the default expected naming convention
for:
- SAP cluster group: SAP $SAPSID
- SAP cluster IP address resource: SAP $SAPSID IP
.PARAMETER SAPSID
SAP SID - three characters, starting with a letter.

.PARAMETER ProbePort
Azure Load Balancer health check probe port.

.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is $True, so the SAP cluster group will
be restarted to activate the changes.

.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is $False.
If it's set to $True, then handle the clustered new SAP ERS2 instance.

.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource SAP AB1 IP, and
restart the SAP cluster group SAP AB1 to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000

.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource SAP AB1 IP. SAP
cluster group SAP AB1 is not restarted, so the changes are not active.
# To activate the changes, you need to manually restart the SAP AB1 cluster
group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000 -RestartSAPClusterGroup $False

.EXAMPLE
# Set the probe port to 62001 on SAP cluster resource SAP AB1 ERS IP. SAP
cluster group SAP AB1 ERS is restarted to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62001 -IsSAPERSClusteredInstance $True

#>

[CmdletBinding()]
param(

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,

[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,

[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,

[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)

BEGIN{}

PROCESS{
try{

if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}

        $SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
        $IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
        $NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
        $SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
        $OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
        $EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
        $OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

        $var = Get-ClusterResource | Where-Object { $_.name -eq $SAPIPresourceName }

        #Write-Host "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName' are:" -ForegroundColor Cyan
        Write-Output "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName' are:"

        Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter

        Write-Output " "
        Write-Output "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is '$OldProbePort'."
        Write-Output " "
        Write-Output "Setting the new probe port property of the SAP cluster resource '$SAPIPresourceName' to '$ProbePort' ..."
        Write-Output " "

        $var | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}
Write-Output " "

        #$ActivateChanges = Read-Host "Do you want to restart SAP cluster role '$SAPClusterRoleName', to activate the changes (yes/no)?"

        if($RestartSAPClusterGroup){
            Write-Output ""
            Write-Output "Activating changes..."

            Write-Output " "
            Write-Output "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..."
            Stop-ClusterResource -Name $SAPIPresourceName
            sleep 5

            Write-Output "Starting SAP cluster role '$SAPClusterRoleName' ..."
            Start-ClusterGroup -Name $SAPClusterRoleName

            Write-Output "New ProbePort parameter is active."
            Write-Output " "

            Write-Output "New configuration parameters for SAP IP cluster resource '$SAPIPresourceName':"
            Write-Output " "
            Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
        }else{
            Write-Output "SAP cluster role '$SAPClusterRoleName' is not restarted, therefore changes are not activated."
        }
        }
        catch{
            Write-Error $_.Exception.Message
        }
    }

    END {}
}

Continue with the SAP installation


1. Install the database instance by following the process described in the SAP
installation guide.

2. Install SAP on the second cluster node by following the steps that are described in
the SAP installation guide.
3. Install the SAP Primary Application Server (PAS) instance on the virtual machine
that is designated to host the PAS.

Follow the process described in the SAP installation guide. There are no
dependencies on Azure.

4. Install additional SAP application servers on the virtual machines that are
designated to host SAP application server instances.

Follow the process described in the SAP installation guide. There are no
dependencies on Azure.

Test SAP ASCS/SCS instance failover


The outlined failover tests assume that SAP ASCS is active on node A.

1. Verify that the SAP system can successfully fail over from node A to node B. In this
example, the test is for SAP SID PR2.

Make sure that each SAP SID can successfully move to the other cluster node.
Choose one of these options to initiate a failover of the SAP <SID> cluster group
from cluster node A to cluster node B:

Failover Cluster Manager


PowerShell commands for failover clusters

PowerShell

$SAPSID = "PR2" # SAP <SID>

$SAPClusterGroup = "SAP $SAPSID"


Move-ClusterGroup -Name $SAPClusterGroup

2. Restart cluster node A within the Windows guest operating system. This step
initiates an automatic failover of the SAP <SID> cluster group from node A to
node B.

3. Restart cluster node A from the Azure portal. This step initiates an automatic
failover of the SAP <SID> cluster group from node A to node B.

4. Restart cluster node A by using Azure PowerShell. This step initiates an automatic
failover of the SAP <SID> cluster group from node A to node B.
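For step 4, a minimal sketch with Azure PowerShell. The resource group and VM names are
the example values used earlier in this article; adjust them for your environment.

PowerShell

# Restart node A from Azure to trigger an automatic failover to node B
Restart-AzVM -ResourceGroupName "MyResourceGroup" -Name "pr1-ascs-10"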
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
SAP ASCS/SCS instance multi-SID high
availability with Windows Server
Failover Clustering and shared disk on
Azure
Article • 02/10/2023

Windows

If you have an SAP deployment, you must use an internal load balancer to create a
Windows cluster configuration for SAP Central Services (ASCS/SCS) instances.

This article focuses on how to move from a single ASCS/SCS installation to an SAP
multi-SID configuration by installing additional SAP ASCS/SCS clustered instances into
an existing Windows Server Failover Clustering (WSFC) cluster with shared disk, using
SIOS to simulate a shared disk. When this process is complete, you'll have configured
an SAP multi-SID cluster.

7 Note

This feature is available only in the Azure Resource Manager deployment model.

There is a limit on the number of private front-end IPs for each Azure internal load
balancer.

The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to
the maximum number of private front-end IPs for each Azure internal load
balancer.

For more information about load-balancer limits, see the "Private front-end IP per load
balancer" section in Networking limits: Azure Resource Manager.

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For details, see Azure Load Balancer limitations. If you need an additional
IP address for the VM, deploy a second NIC.
7 Note

We recommend that you use the Azure Az PowerShell module to interact with
Azure. See Install Azure PowerShell to get started. To learn how to migrate to the
Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by
using a file share, as shown in this diagram.
) Important

The setup must meet the following conditions:

The SAP ASCS/SCS instances must share the same WSFC cluster.
Each database management system (DBMS) SID must have its own dedicated
WSFC cluster.
SAP application servers that belong to one SAP system SID must have their
own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in
the same cluster is not supported.

SAP ASCS/SCS multi-SID architecture with shared disk
The goal is to install multiple SAP ABAP ASCS or SAP Java SCS clustered instances in the
same WSFC cluster, as illustrated here:
For more information about load-balancer limits, see the "Private front-end IP per load
balancer" section in Networking limits: Azure Resource Manager.

The complete landscape with two high-availability SAP systems would look like this:
Prepare the infrastructure for an SAP multi-SID
scenario
To prepare your infrastructure, you can install an additional SAP ASCS/SCS instance with
the following parameters:

Parameter name                                                                     Value

SAP ASCS/SCS SID                                                                   PR5

SAP DBMS internal load balancer                                                    pr1-lb-ascs

SAP virtual host name                                                              pr5-sap-cl

SAP ASCS/SCS virtual host IP address (additional Azure load balancer IP address)   10.0.0.50

SAP ASCS/SCS instance number                                                       50

ILB probe port for additional SAP ASCS/SCS instance                                62350

7 Note
For SAP ASCS/SCS cluster instances, each IP address requires a unique probe port.
For example, if one IP address on an Azure internal load balancer uses probe port
62300, no other IP address on that load balancer can use probe port 62300.

For our purposes, because probe port 62300 is already reserved, we are using
probe port 62350.

You can install additional SAP ASCS/SCS instances in the existing WSFC cluster with two
nodes:

Virtual machine role                        Virtual machine host name   Static IP address

First cluster node for ASCS/SCS instance    pr1-ascs-0                  10.0.0.10

Second cluster node for ASCS/SCS instance   pr1-ascs-1                  10.0.0.9

Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using
the following parameters:

New SAP ASCS/SCS virtual host name   Associated IP address

pr5-sap-cl                           10.0.0.50

The new host name and IP address are displayed in DNS Manager, as shown in the
following screenshot:
7 Note

The new IP address that you assign to the virtual host name of the additional
ASCS/SCS instance must be the same as the new IP address that you assigned to
the SAP Azure load balancer.

In our scenario, the IP address is 10.0.0.50.

Add an IP address to an existing Azure internal load balancer by using PowerShell
To create more than one SAP ASCS/SCS instance in the same WSFC cluster, use
PowerShell to add an IP address to an existing Azure internal load balancer. Each IP
address requires its own load-balancing rules, probe port, front-end IP pool, and back-
end pool.

The following script adds a new IP address to an existing load balancer. Update the
PowerShell variables for your environment. The script creates all the required load-
balancing rules for all SAP ASCS/SCS ports.

PowerShell

# Select-AzSubscription -SubscriptionId <xxxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>
Clear-Host
$ResourceGroupName = "SAP-MULTI-SID-Landscape"   # Existing resource group name
$VNetName = "pr2-vnet"                           # Existing virtual network name
$SubnetName = "Subnet"                           # Existing subnet name
$ILBName = "pr2-lb-ascs"                         # Existing ILB name
$ILBIP = "10.0.0.50"                             # New IP address
$VMNames = "pr2-ascs-0","pr2-ascs-1"             # Existing cluster virtual machine names
$SAPInstanceNumber = 50                          # SAP ASCS/SCS instance number: must be a unique value for each cluster
[int]$ProbePort = "623$SAPInstanceNumber"        # Probe port: must be a unique value for each IP and load balancer

$ILB = Get-AzLoadBalancer -Name $ILBName -ResourceGroupName $ResourceGroupName

$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName = "lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"

# Get the Azure virtual network and subnet
$VNet = Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName $ResourceGroupName
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name $SubnetName

# Add a second front-end and probe configuration
Write-Host "Adding new front end IP Pool '$FrontEndConfigurationName' ..." -ForegroundColor Green
$ILB | Add-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -PrivateIpAddress $ILBIP -SubnetId $Subnet.Id
$ILB | Add-AzLoadBalancerProbeConfig -Name $LBProbeName -Protocol Tcp -Port $Probeport -ProbeCount 2 -IntervalInSeconds 10 | Set-AzLoadBalancer

# Get a new updated configuration
$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName

# Get an updated LB FrontendIpConfig
$FEConfig = Get-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

# Add a back-end configuration into an existing ILB
$BackEndConfigurationName = "backendPoolASCS$count"
Write-Host "Adding new backend Pool '$BackEndConfigurationName' ..." -ForegroundColor Green
$BEConfig = Add-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB | Set-AzLoadBalancer

# Get an updated config
$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName

# Assign VM NICs to the back-end pool
$BEPool = Get-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
foreach($VMName in $VMNames){
    $VM = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
    $NICName = ($VM.NetworkInterfaceIDs[0].Split('/') | select -last 1)
    $NIC = Get-AzNetworkInterface -name $NICName -ResourceGroupName $ResourceGroupName
    $NIC.IpConfigurations[0].LoadBalancerBackendAddressPools += $BEPool
    Write-Host "Assigning network card '$NICName' of the '$VMName' VM to the backend pool '$BackEndConfigurationName' ..." -ForegroundColor Green
    Set-AzNetworkInterface -NetworkInterface $NIC
    #start-AzVM -ResourceGroupName $ResourceGroupName -Name $VM.Name
}

# Create load-balancing rules
$Ports = "445","32$SAPInstanceNumber","33$SAPInstanceNumber","36$SAPInstanceNumber","39$SAPInstanceNumber","5985","81$SAPInstanceNumber","5$SAPInstanceNumber`13","5$SAPInstanceNumber`14","5$SAPInstanceNumber`16"
$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName $ResourceGroupName
$FEConfig = Get-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -LoadBalancer $ILB
$BEConfig = Get-AzLoadBalancerBackendAddressPoolConfig -Name $BackEndConfigurationName -LoadBalancer $ILB
$HealthProbe = Get-AzLoadBalancerProbeConfig -Name $LBProbeName -LoadBalancer $ILB

Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -ForegroundColor Green

foreach ($Port in $Ports) {
    $LBConfigrulename = "lbrule$Port" + "_$count"

    Write-Host "Creating load balancing rule '$LBConfigrulename' for the port '$Port' ..." -ForegroundColor Green

    $ILB | Add-AzLoadBalancerRuleConfig -Name $LBConfigRuleName -FrontendIpConfiguration $FEConfig -BackendAddressPool $BEConfig -Probe $HealthProbe -Protocol tcp -FrontendPort $Port -BackendPort $Port -IdleTimeoutInMinutes 30 -LoadDistribution Default -EnableFloatingIP
}

$ILB | Set-AzLoadBalancer

Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor Green
After the script has run, the results are displayed in the Azure portal, as shown in the
following screenshot:

Add disks to cluster machines, and configure the SIOS cluster shared disk
You must add a new cluster shared disk for each additional SAP ASCS/SCS instance. For
Windows Server 2012 R2, the WSFC cluster shared disk currently in use is the SIOS
DataKeeper software solution.
Do the following:

1. Add an additional disk or disks of the same size (which you need to stripe) to each
of the cluster nodes, and format them.
2. Configure storage replication with SIOS DataKeeper.

This procedure assumes that you have already installed SIOS DataKeeper on the WSFC
cluster machines. If you have, you must now configure replication between the
machines. The process is described in detail in Install SIOS DataKeeper Cluster
Edition for the SAP ASCS/SCS cluster share disk.
Deploy VMs for SAP application servers and the DBMS
cluster
To complete the infrastructure preparation for the second SAP system, do the following:

1. Deploy dedicated VMs for the SAP application servers, and put each in its own
dedicated availability set.
2. Deploy dedicated VMs for the DBMS cluster, and put each in its own dedicated
availability set.

Install an SAP NetWeaver multi-SID system


For a description of the complete process of installing a second SAP SID2 system, see
SAP NetWeaver HA installation on Windows Failover Cluster and shared disk for an SAP
ASCS/SCS instance.

The high-level procedure is as follows:

1. Install SAP with a high-availability ASCS/SCS instance.


In this step, you are installing SAP with a high-availability ASCS/SCS instance on
the existing WSFC cluster node 1.

2. Modify the SAP profile of the ASCS/SCS instance.

3. Configure a probe port.


In this step, you are configuring an SAP cluster resource SAP-SID2-IP probe port by
using PowerShell. Execute this configuration on one of the SAP ASCS/SCS cluster
nodes.

4. Install the database instance.

To install the database instance, follow the steps in the SAP installation guide.

5. Install the second cluster node.


In this step, you are installing SAP with a high-availability ASCS/SCS instance on
the existing WSFC cluster node 2. To install the second cluster, follow the steps in
the SAP installation guide.

6. Open Windows Firewall ports for the SAP ASCS/SCS instance and probe port.
On both cluster nodes that are used for SAP ASCS/SCS instances, you are opening
all Windows Firewall ports that are used by SAP ASCS/SCS. These SAP ASCS/SCS
instance ports are listed in the chapter SAP ASCS / SCS Ports.

For a list of all other SAP ports, see TCP/IP ports of all SAP products .

Also open the Azure internal load balancer probe port, which is 62350 in our
scenario, as described earlier in this article.

7. Install the SAP primary application server on the new dedicated VM, as described
in the SAP installation guide.

8. Install the SAP additional application server on the new dedicated VM, as
described in the SAP installation guide.

9. Test the SAP ASCS/SCS instance failover and SIOS replication.

Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
Install HA SAP NetWeaver with Azure
Files SMB
Article • 04/18/2023

Microsoft and SAP now fully support Azure Files premium Server Message Block (SMB)
file shares. SAP Software Provisioning Manager (SWPM) 1.0 SP32 and SWPM 2.0 SP09
(and later) support Azure Files premium SMB storage.

There are special requirements for sizing Azure Files premium SMB shares. This article
contains specific recommendations on how to distribute workloads, choose an adequate
storage size, and meet minimum installation requirements for Azure Files premium SMB.

High-availability (HA) SAP solutions need a highly available file share for hosting
sapmnt, transport, and interface directories. Azure Files premium SMB is a simple Azure
platform as a service (PaaS) solution for shared file systems for SAP on Windows
environments. You can use Azure Files premium SMB with availability sets and
availability zones. You can also use Azure Files premium SMB for disaster recovery (DR)
scenarios to another region.

7 Note

Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948 .

Sizing and distribution of Azure Files premium SMB for SAP systems
Evaluate the following points when you're planning the deployment of Azure Files
premium SMB:

The file share name sapmnt can be created once per storage account. It's possible
to create additional SAP system IDs (SIDs) as directories on the same /sapmnt share,
such as /sapmnt/<SID1> and /sapmnt/<SID2>.
Choose an appropriate size, IOPS, and throughput. A suggested size for the share
is 256 GB per SID. The maximum size for a share is 5,120 GB.
Azure Files premium SMB might not perform well for very large sapmnt shares with
more than 1 million files per storage account. Customers who have millions of
batch jobs that create millions of job log files should regularly reorganize them, as
described in SAP Note 16083 . If needed, you can move or archive old job logs to
another Azure Files premium SMB file share. If you expect sapmnt to be very large,
consider other options (such as Azure NetApp Files).
We recommend that you use a private network endpoint.
Avoid putting too many SIDs in a single storage account and its file share.
As general guidance, don't put together more than four nonproduction SIDs.
Don't put the entire development, production, and quality assurance system (QAS)
landscape in one storage account or file share. Failure of the share leads to
downtime of the entire SAP landscape.
We recommend that you put the sapmnt and transport directories on different
storage accounts, except in smaller systems. During the installation of the SAP
primary application server, SAPinst will request the transport host name. Enter the
FQDN of a different storage account as <storage_account>.file.core.windows.net.
Don't put the file system used for interfaces onto the same storage account as
/sapmnt/<SID>.
You must add the SAP users and groups to the sapmnt share. Set the Storage File
Data SMB Share Elevated Contributor permission for them in the Azure portal.
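As an alternative to the portal, here's a hedged sketch that assigns the share-level role
with Az PowerShell. The user, subscription, resource group, and storage account names are
placeholders; for the SAP_<SAPSID>_GlobalAdmin group, use -ObjectId instead of
-SignInName.

PowerShell

# Hypothetical sketch: grant the share-level SMB role to an SAP user
New-AzRoleAssignment -SignInName "pr2adm@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default/fileshares/sapmnt"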

Distributing transport, interface, and sapmnt among separate storage accounts improves
throughput and resiliency. It also simplifies performance analysis. If you put many SIDs
and other file systems in a single Azure Files storage account, and the storage account's
performance is poor because you're hitting the throughput limits, it's difficult to identify
which SID or application is causing the problem.

Planning

) Important

The installation of SAP HA systems on Azure Files premium SMB with Active
Directory integration requires cross-team collaboration. We recommend that the
following teams work together to achieve tasks:

Azure team: Set up and configure storage accounts, script execution, and
Active Directory synchronization.
Active Directory team: Create user accounts and groups.
Basis team: Run SWPM and set access control lists (ACLs), if necessary.

Here are prerequisites for the installation of SAP NetWeaver HA systems on Azure Files
premium SMB with Active Directory integration:

Join the SAP servers to an Active Directory domain.


Replicate the Active Directory domain that contains the SAP servers to Azure Active
Directory (Azure AD) by using Azure AD Connect.
Make sure that at least one Active Directory domain controller is in the Azure
landscape, to avoid traversing Azure ExpressRoute to contact domain controllers
on-premises.
Make sure that the Azure support team reviews the documentation for Azure Files
SMB with Active Directory integration. The video shows extra configuration
options, which were modified (DNS) and skipped (DFS-N) for simplification
reasons. But these are valid configuration options.
Make sure that the user who's running the Azure Files PowerShell script has
permission to create objects in Active Directory.
Use SWPM version 1.0 SP32 and SWPM 2.0 SP09 or later for the installation. The
SAPinst patch must be 749.0.91 or later.
Install an up-to-date release of PowerShell on the Windows Server instance where
the script is run.

Installation sequence

Create users and groups


The Active Directory administrator should create, in advance, three domain users with
Local Administrator rights and one global group in the local Windows Server Active
Directory instance.

SAPCONT_ADMIN@SAPCONTOSO.local has Domain Administrator rights and is used to
run SAPinst. <sid>adm and SAPService<SID> are the SAP system users, and
SAP_<SAPSID>_GlobalAdmin is the global group. The SAP Installation Guide contains
the specific details required for these accounts.

7 Note

SAP user accounts should not be Domain Administrator. We generally recommend


that you don't use <sid>adm to run SAPinst.

Check Synchronization Service Manager


The Active Directory administrator or Azure administrator should check Synchronization
Service Manager in Azure AD Connect. By default, it takes about 30 minutes to replicate
to Azure AD.
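If you don't want to wait for the scheduled cycle, you can trigger a delta
synchronization on the Azure AD Connect server. This is a minimal sketch; it assumes the
ADSync module that ships with Azure AD Connect is available on that server.

PowerShell

# Trigger a delta sync on the Azure AD Connect server
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta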
Create a storage account, private endpoint, and file share
The Azure administrator should complete the following tasks:

1. On the Basics tab, create a storage account with either premium zone-redundant
storage (ZRS) or locally redundant storage (LRS). Customers with zonal deployment
should choose ZRS. Here, the administrator needs to make the choice between
setting up a Standard or Premium account.

) Important

For production use, we recommend choosing a Premium account. For
nonproduction use, a Standard account should be sufficient.

2. On the Advanced tab, the default settings should be OK.


3. On the Networking tab, the administrator makes the decision to use a private
endpoint.
a. Select Add private endpoint for the storage account, and then enter the
information for creating a private endpoint.

b. If necessary, add a DNS A record into Windows DNS for
<storage_account_name>.file.core.windows.net. (This might need to be in a new
DNS zone.) Discuss this topic with the DNS administrator. The new zone should
not update outside an organization.

4. Create the sapmnt file share with an appropriate size. The suggested size is 256 GB,
which delivers 650 IOPS, 75-MB/sec egress, and 50-MB/sec ingress.
5. Download the Azure Files GitHub content and run the script.

This script creates either a computer account or a service account in Active
Directory. It has the following requirements:

The user who's running the script must have permission to create objects in
the Active Directory domain that contains the SAP servers. Typically, an
organization uses a Domain Administrator account such as
SAPCONT_ADMIN@SAPCONTOSO.local.
Before the user runs the script, confirm that this Active Directory domain user
account is synchronized with Azure AD. An example of this would be to open
the Azure portal and go to Azure AD users, check that the user
SAPCONT_ADMIN@SAPCONTOSO.local exists, and verify the Azure AD user
account.
Grant the Contributor role-based access control (RBAC) role to this Azure AD
user account for the resource group that contains the storage account that
holds the file share. In this example, the user
SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com is granted the
Contributor role to the respective resource group.
The user should run the script while logged on to a Windows Server instance
by using an Active Directory domain user account with the permission as
specified earlier.

In this example scenario, the Active Directory administrator would log on to the
Windows Server instance as SAPCONT_ADMIN@SAPCONTOSO.local. When the
administrator is using the PowerShell command Connect-AzAccount , the
administrator connects as user SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com.
Ideally, the Active Directory administrator and the Azure administrator should work
together on this task.

) Important

When a user is running the PowerShell script command Connect-AzAccount,
we highly recommend entering the Azure AD user account that corresponds
and maps to the Active Directory domain user account that was used to log
on to a Windows Server instance.
After the script runs successfully, go to Storage > File Shares and verify that Active
Directory: Configured appears.

6. Assign SAP users <sid>adm and SAPService<SID>, and the
SAP_<SAPSID>_GlobalAdmin group, to the Azure Files premium SMB file share.
Select the role Storage File Data SMB Share Elevated Contributor in the Azure
portal.

7. Check the ACL on the sapmnt file share after the installation. Then add the
DOMAIN\CLUSTER_NAME$ account, DOMAIN\<sid>adm account,
DOMAIN\SAPService<SID> account, and SAP_<SID>_GlobalAdmin group. These
accounts and group should have full control of the sapmnt directory.

) Important

Complete this step before the SAPinst installation. It will be difficult or
impossible to change ACLs after SAPinst has created directories and files on
the file share.

The following screenshots show how to add computer accounts.

You can find the DOMAIN\CLUSTER_NAME$ account by selecting Computers under
Object types.
8. If necessary, move the computer account created for Azure Files to an Active
Directory container that doesn't have account expiration. The name of the
computer account is the short name of the storage account.

) Important

To initialize the Windows ACL for the SMB share, mount the share once to a
drive letter.

The storage key is the password, and the user is Azure\<SMB share name>.
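For that one-time mount, here's a hypothetical sketch. The placeholders stand for your
storage account name and key; the drive letter is arbitrary, and you can disconnect the
mapping again afterward.

PowerShell

# Hypothetical sketch: mount the sapmnt share once to initialize the Windows ACL
net use S: \\<storage_account_name>.file.core.windows.net\sapmnt /user:Azure\<storage_account_name> <storage_account_key>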
Complete SAP Basis tasks
An SAP Basis administrator should complete these tasks:

1. Install the Windows cluster on ASCS/ERS nodes and add the cloud witness.
2. The first cluster node installation asks for the Azure Files SMB storage account
name. Enter the FQDN <storage_account_name>.file.core.windows.net. If SAPinst
doesn't accept more than 13 characters, the SWPM version is too old.
3. Modify the SAP profile of the ASCS/SCS instance.
4. Update the probe port for the SAP <SID> role in Windows Server Failover Cluster
(WSFC).
5. Continue with SWPM installation for the second ASCS/ERS node. SWPM requires
only the path of the profile directory. Enter the full UNC path to the profile
directory.
6. Enter the UNC profile path for the database and for the installation of the primary
application server (PAS) and additional application server (AAS).
7. The PAS installation asks for the transport host name. Provide the FQDN of a
separate storage account name for the transport directory.
8. Verify the ACLs on the SID and transport directory.

Disaster recovery setup


Azure Files premium SMB supports disaster recovery scenarios and cross-region
replication scenarios. All data in Azure Files premium SMB directories can be
continuously synchronized to a DR region's storage account. For more information, see
the procedure for synchronizing files in Transfer data with AzCopy and file storage.

After a DR event and failover of the ASCS instance to the DR region, change the
SAPGLOBALHOST profile parameter to point to Azure Files SMB in the DR region. Perform

the same preparation steps on the DR storage account to join the storage account to
Active Directory and assign RBAC roles for SAP users and groups.
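As a hedged sketch of the synchronization itself, AzCopy can mirror the share to the DR
storage account. The account names are placeholders, and <primary-sas> and <dr-sas> stand
for SAS tokens with the appropriate read/list and write permissions.

PowerShell

# Hypothetical sketch: one-way sync of the sapmnt share to the DR region
azcopy sync "https://primaryaccount.file.core.windows.net/sapmnt?<primary-sas>" "https://draccount.file.core.windows.net/sapmnt?<dr-sas>" --recursive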
Troubleshooting
The PowerShell scripts that you downloaded earlier contain a debug script to conduct
basic checks for validating the configuration.

PowerShell

Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose

Here's a PowerShell screenshot of the debug script output.

The following screenshot shows the technical information to validate a successful
domain join.

Useful links and resources


SAP Note 2273806 (SAP support for solutions related to storage or file systems)
Install SAP NetWeaver high availability on a Windows failover cluster and file share
for SAP ASCS/SCS instances on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP
NetWeaver
Add a probe port in an ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster with no Shared Disks (SAP
documentation)

Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Cluster to reduce the total number of VMs.

This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.

) Important

Installing a local SAP application server on a SQL Server Always On node is not
supported.

Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). Using Azure Files SMB helps protect these SPOFs in a Windows environment.

Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
a reduction of the memory configuration by 2 GB for either SQL Server or the SAP
application server.

SAP application servers on WSFC nodes using Azure Files SMB
The following diagram shows SAP application servers locally installed.
7 Note

The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).

SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB
The following diagram shows Azure Files SMB with local SQL Server setup.

) Important

Using Azure Files SMB for any SQL Server volume is not supported.
7 Note

The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
High availability for SAP NetWeaver on
Azure VMs on Windows with Azure
NetApp Files (SMB) for SAP applications
Article • 02/10/2023

This article describes how to deploy, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system on Windows VMs,
using SMB on Azure NetApp Files.

The database layer isn't covered in detail in this article. We assume that the Azure virtual
network has already been created.

Read the following SAP Notes and papers first:

Azure NetApp Files documentation


SAP Note 1928533 , which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Note 2287140 lists prerequisites for SAP-supported CA feature of SMB 3.x
protocol.
SAP Note 2802770 has troubleshooting information for the slow running SAP
transaction AL11 on Windows 2012 and 2016.
SAP Note 1911507 has information about transparent failover feature for a file
share on Windows Server with the SMB 3.0 protocol.
SAP Note 662452  has a recommendation (deactivating 8.3 name generation) to
address poor file system performance/errors during data access.
Install SAP NetWeaver high availability on a Windows failover cluster and file share
for SAP ASCS/SCS instances on Azure
Azure Virtual Machines high-availability architecture and scenarios for SAP
NetWeaver
Add probe port in ASCS cluster configuration
Installation of an (A)SCS Instance on a Failover Cluster
Create an SMB volume for Azure NetApp Files
NetApp SAP Applications on Microsoft Azure using Azure NetApp Files

Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering
an SAP ASCS/SCS instance on a Windows failover cluster. Instead of using cluster shared
disks, you can use an SMB file share to deploy the SAP global host files. Azure NetApp
Files supports SMBv3 (along with NFS) with NTFS ACLs using Active Directory. Azure
NetApp Files is automatically highly available (because it's a PaaS service). These
features make Azure NetApp Files a great option for hosting the SMB file share for the
SAP global directory.

Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services
(AD DS) are supported. You can use existing Active Directory domain controllers with
Azure NetApp Files. Domain controllers can be in Azure as virtual machines, or
on-premises via ExpressRoute or S2S VPN. In this article, we use a domain controller in
an Azure VM.

High availability (HA) for SAP NetWeaver central services requires shared storage. Until
now, achieving that on Windows required building either a scale-out file share (SOFS)
cluster or using cluster shared disk software like SIOS. Now it's possible to achieve
SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure
NetApp Files for the shared storage eliminates the need for either SOFS or SIOS.

7 Note

Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948 .
The prerequisites for an SMB file share are:

SMB 3.0 (or later) protocol.


Ability to set Active Directory access control lists (ACLs) for Active Directory user
groups and the computer$ computer object.
The file share must be HA-enabled.

The share for the SAP Central services in this reference architecture is offered by Azure
NetApp Files:
Create and mount SMB volume for Azure
NetApp Files
Perform the following steps, as preparation for using Azure NetApp Files.

1. Create an Azure NetApp account, following the steps described in Create a NetApp
account.

2. Set up capacity pool, following the instructions in Set up a capacity pool

3. Azure NetApp Files resources must reside in a delegated subnet. Follow the
instructions in Delegate a subnet to Azure NetApp Files to create the delegated
subnet.

) Important

You need to create Active Directory connections before creating an SMB
volume. Review the requirements for Active Directory connections.

When creating the Active Directory connection, make sure to enter an SMB
Server (Computer Account) Prefix no longer than 8 characters, to avoid the
13-character hostname limitation for SAP applications (a suffix is automatically
added to the SMB Computer Account name).
The hostname limitations for SAP applications are described in 2718300 -
Physical and Virtual hostname length limitations and 611361 - Hostnames
of SAP ABAP Platform servers .

4. Create an Active Directory connection, as described in Create an Active Directory
connection. Make sure to add the user that will run SWPM to install the SAP
system as an Administrators privilege user in the Active Directory connection. If
you don't add the SAP installation user as an Administrators privilege user in the
Active Directory connection, SWPM will fail with permission errors, unless you run
SWPM as a user with elevated Domain Admin rights.

5. Create an Azure NetApp Files SMB volume, following the instructions in Add an
SMB volume.

6. Mount the SMB volume on your Windows Virtual Machine.

 Tip

You can find the instructions for mounting the Azure NetApp Files volume by
navigating in the Azure portal to the Azure NetApp Files object, selecting the
Volumes blade, and then Mount Instructions.
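For reference, here's a hypothetical mount sketch from a domain-joined Windows VM. It
reuses the example SMB host name anfsmb-9562 that appears later in this article; the
domain suffix and volume path are placeholders from your own mount instructions.

PowerShell

# Hypothetical sketch: mount the Azure NetApp Files SMB volume once to verify access
net use S: \\anfsmb-9562.contoso.local\sapmnt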

Important considerations
When considering Azure NetApp Files for the SAP NetWeaver architecture, be aware of
the following important considerations:

The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1
TiB increments.
The minimum volume is 100 GiB
The selected virtual network must have a subnet, delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service
levels for Azure NetApp Files. While sizing the SAP Azure NetApp Files volumes,
make sure that the resulting throughput meets the application requirements.
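For example, assuming the Premium service level delivers 64 MiB/s of throughput per
TiB of volume quota (check the service-level documentation for the current figures), a
1-TiB volume quota would yield roughly 64 MiB/s, and you would need about a 2-TiB
quota to reach 128 MiB/s.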
Prepare the infrastructure for SAP HA by using
a Windows failover cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance
5. If you are using Windows Server 2016, we recommend that you configure Azure
Cloud Witness.

Install SAP ASCS instance on both nodes


You need the following software from SAP:

SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or
later.
SAP Kernel 7.22 or later.
Create a virtual host name (cluster network name) for the clustered SAP ASCS/SCS
instance, as described in Create a virtual host name for the clustered SAP
ASCS/SCS instance.

Install an ASCS/SCS instance on the first ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the first cluster node. Start the SAP SWPM
installation tool, then navigate to: Product > DBMS > Installation > Application
Server ABAP (or Java) > High-Availability System > ASCS/SCS instance > First
cluster node.

2. Select File Share Cluster as the Cluster share Configuration in SWPM.

3. When prompted at step SAP System Cluster Parameters, enter the host name for
the Azure NetApp Files SMB share you already created as File Share Host Name. In
this example, the SMB share host name is anfsmb-9562.

) Important

If Pre-requisite checker Results in SWPM shows Continuous availability feature
condition not met, it can be addressed by following the instructions in
Delayed error message when you try to access a shared folder that no
longer exists in Windows .
 Tip

If Pre-requisite checker Results in SWPM shows Swap Size condition not met,
you can adjust the SWAP size by navigating to My Computer>System
Properties>Performance Settings> Advanced> Virtual memory> Change.

4. Configure an SAP cluster resource, the SAP-SID-IP probe port, by using
PowerShell. Execute this configuration on one of the SAP ASCS/SCS cluster nodes,
as described in Configure probe port.

Install an ASCS/SCS instance on the second ASCS/SCS cluster node
1. Install an SAP ASCS/SCS instance on the second cluster node. Start the SAP SWPM
installation tool, then navigate to Product > DBMS > Installation > Application
Server ABAP (or Java) > High-Availability System > ASCS/SCS instance > Additional
cluster node.

Update the SAP ASCS/SCS instance profile

Update parameters in the SAP ASCS/SCS instance profile <SID>ASCS/SCS<Nr><Host>.

Parameter name                 Parameter value

gw/netstat_once                0

enque/encni/set_so_keepalive   true

service/ha_check_node          1

Parameter enque/encni/set_so_keepalive is only needed if using ENSA1.


Restart the SAP ASCS/SCS instance. To set the KeepAlive parameters on both SAP
ASCS/SCS cluster nodes, follow the instructions in Set registry entries on the cluster
nodes of the SAP ASCS/SCS instance.

Install a DBMS instance and SAP application servers


Complete your SAP installation by installing:

A DBMS instance
A primary SAP application server
An additional SAP application server

Test the SAP ASCS/SCS instance failover

Fail over from cluster node A to cluster node B and back


In this test scenario we'll refer to cluster node sapascs1 as node A, and to cluster node
sapascs2 as node B.

1. Verify that the cluster resources are running on node A.

2. Restart cluster node A. The SAP cluster resources will move to cluster node B.

Lock entry test


1. Verify that the SAP Enqueue Replication Server (ERS) is active.
2. Log on to the SAP system, execute transaction SU01, and open a user ID in change
mode. That will generate an SAP lock entry.
3. While you're logged on to the SAP system, display the lock entry by navigating to
transaction SM12.
4. Fail over ASCS resources from cluster node A to cluster node B.
5. Verify that the lock entry, generated before the SAP ASCS/SCS cluster resources
failover, is retained.
For more information, see Troubleshooting for Enqueue Failover in ASCS with ERS

Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Microsoft
Windows Failover Cluster to reduce the total number of VMs.

This can either be local SAP Application Servers on a SAP ASCS/SCS cluster or a SAP
ASCS/SCS Cluster Role on Microsoft SQL Server Always On nodes.

) Important

Installing a local SAP Application Server on a SQL Server Always On node is not
supported.

Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). To protect these SPOFs in a Windows environment, Azure NetApp Files SMB is
used.

Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
reducing the memory configuration for either SQL Server or the SAP application server
by 2 GB.
SAP Application Servers on WSFC nodes using NetApp
Files SMB

7 Note

The picture shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (C:).

SAP ASCS/SCS on SQL Server Always On nodes using


Azure NetApp Files SMB

) Important

Using Azure NetApp Files SMB for any SQL Server volume is not supported.
7 Note

The picture shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (C:).

Using Windows DFS-N to support flexible SAPMNT share creation for SMB-based file share
Using DFS-N allows you to utilize individual sapmnt volumes for SAP systems deployed
within the same Azure region and subscription. Using Windows DFS-N to support
flexible SAPMNT share creation for SMB-based file share shows how to set this up.

Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability
and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
Cluster an SAP ASCS/SCS instance on a
Windows failover cluster by using a file
share in Azure
Article • 02/10/2023

Windows

Windows Server failover clustering is the foundation of a high-availability SAP ASCS/SCS
installation and DBMS in Windows.

A failover cluster is a group of 1+n independent servers (nodes) that work together to
increase the availability of applications and services. If a node failure occurs, Windows
Server failover clustering calculates the number of failures that can occur and still
maintain a healthy cluster to provide applications and services. You can choose from
different quorum modes to achieve failover clustering.

Prerequisites
Before you begin the tasks that are described in this article, review the following articles
and SAP notes:

Azure Virtual Machines high-availability architecture and scenarios for SAP


NetWeaver
SAP Note 1928533 , which contains:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Note 2287140 lists prerequisites for SAP-supported CA feature of SMB 3.x
protocol.
SAP Note 2802770 has troubleshooting information for the slow running SAP
transaction AL11 on Windows 2012 and 2016.
SAP Note 1911507 has information about transparent failover feature for a file
share on Windows Server with the SMB 3.0 protocol.
SAP Note 662452  has a recommendation (deactivating 8.3 name generation) to
address poor file system performance/errors during data access.
Install SAP NetWeaver high availability on a Windows failover cluster and file share
for SAP ASCS/SCS instances on Azure
Installation of an (A)SCS Instance on a Failover Cluster

7 Note

Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948 .

Windows Server failover clustering in Azure


Compared to bare-metal or private cloud deployments, Azure Virtual Machines requires
additional steps to configure Windows Server failover clustering. When you build a
cluster, you need to set several IP addresses and virtual host names for the SAP
ASCS/SCS instance.

Name resolution in Azure and the cluster virtual host name
The Azure cloud platform doesn't offer the option to configure virtual IP addresses, such
as floating IP addresses. You need an alternative solution to set up a virtual IP address to
reach the cluster resource in the cloud.

The Azure Load Balancer service provides an internal load balancer for Azure. With the
internal load balancer, clients reach the cluster over the cluster virtual IP address.

Deploy the internal load balancer in the resource group that contains the cluster nodes.
Then, configure all necessary port forwarding rules by using the probe ports of the
internal load balancer. The clients can connect via the virtual host name. The DNS server
resolves the cluster IP address. The internal load balancer handles port forwarding to the
active node of the cluster.
Figure 1: Windows Server failover clustering configuration in Azure without a shared disk

SAP ASCS/SCS HA with file share


SAP developed a new approach, and an alternative to cluster shared disks, for clustering
an SAP ASCS/SCS instance on a Windows failover cluster. Instead of using cluster shared
disks, you can use an SMB file share to deploy SAP global host files.

7 Note

An SMB file share is an alternative to using cluster shared disks for clustering SAP
ASCS/SCS instances.

This architecture is specific in the following ways:

SAP central services (with its own file structure and message and enqueue
processes) are separate from the SAP global host files.
SAP central services run under an SAP ASCS/SCS instance.
SAP ASCS/SCS instance is clustered and is accessible by using the <ASCS/SCS
virtual host name> virtual host name.
SAP global files are placed on the SMB file share and are accessed by using the
<SAP global host> host name: \\<SAP global host>\sapmnt\<SID>\SYS...
The SAP ASCS/SCS instance is installed on a local disk on both cluster nodes.
The <ASCS/SCS virtual host name> network name is different from <SAP global
host>.

Figure 2: New SAP ASCS/SCS HA architecture with an SMB file share

Prerequisites for an SMB file share:

SMB 3.0 (or later) protocol.


Ability to set Active Directory access control lists (ACLs) for Active Directory user
groups and the computer$ computer object.
The file share must be HA-enabled:
Disks used to store files must not be a single point of failure.
Server or VM downtime does not cause downtime on the file share.

The SAP <SID> cluster role does not contain cluster shared disks or a generic file share
cluster resource.
Figure 3: SAP <SID> cluster role resources for using a file share

Scale-out file shares with Storage Spaces Direct in Azure as an SAPMNT file share
You can use a scale-out file share to host and protect SAP global host files. A scale-out
file share also offers a highly available SAPMNT file share service.
Figure 4: A scale-out file share used to protect SAP global host files

) Important

Scale-out file shares are fully supported in the Microsoft Azure cloud, and in on-
premises environments.

A scale-out file share offers a highly available and horizontally scalable SAPMNT file
share.

Storage Spaces Direct is used as a shared disk for a scale-out file share. You can use
Storage Spaces Direct to build highly available and scalable storage using servers with
local storage. Shared storage that is used for a scale-out file share, like for SAP global
host files, is not a single point of failure.

When choosing Storage Spaces Direct, consider these points:

The virtual machines used to build the Storage Spaces Direct cluster need to be
deployed in an Azure availability set.
For disaster recovery of a Storage Spaces Direct cluster, you can use Azure Site
Recovery Services.
It is not supported to stretch the Storage Spaces Direct cluster across different
Azure Availability Zones.

SAP prerequisites for scale-out file shares in Azure


To use a scale-out file share, your system must meet the following requirements:

At least two cluster nodes for a scale-out file share.


Each node must have at least two local disks.
For performance reasons, you must use mirroring resiliency:
Two-way mirroring for a scale-out file share with two cluster nodes.
Three-way mirroring for a scale-out file share with three (or more) cluster nodes.
We recommend three (or more) cluster nodes for a scale-out file share, with three-
way mirroring. This setup offers more scalability and more storage resiliency than
the scale-out file share setup with two cluster nodes and two-way mirroring.
You must use Azure Premium disks.
We recommend that you use Azure Managed Disks.
We recommend that you format volumes by using Resilient File System (ReFS).
For more information, see SAP Note 1869038 - SAP support for ReFS
filesystem and the Choosing the file system chapter of the article Planning
volumes in Storage Spaces Direct.
Be sure that you install the Microsoft KB4025334 cumulative update.
You can use DS-Series or DSv2-Series Azure VM sizes.
For good network performance between VMs, which is needed for Storage Spaces
Direct disk sync, use a VM type that has at least a “high” network bandwidth. For
more information, see the DSv2-Series and DS-Series specifications.
We recommend that you reserve some unallocated capacity in the storage pool.
Leaving some unallocated capacity in the storage pool gives volumes space to
repair "in place" if a drive fails. This improves data safety and performance. For
more information, see Choosing volume size.
You don't need to configure the Azure internal load balancer for the scale-out file
share network name, such as for <SAP global host>. The load balancer is needed only
for the <ASCS/SCS virtual host name> of the SAP ASCS/SCS instance or for the DBMS. A
scale-out file share distributes the load across all cluster nodes, and <SAP global
host> uses the local IP addresses of all cluster nodes.

Important

You cannot rename the SAPMNT file share, which points to <SAP global host>. SAP
supports only the share name "sapmnt."

For more information, see SAP Note 2492395 - Can the share name sapmnt be
changed?
Configure SAP ASCS/SCS instances and a scale-out file
share in two clusters
You must deploy the SAP ASCS/SCS instances in a separate cluster, with their own SAP
<SID> cluster role. In this case, you configure the scale-out file share on another cluster,
with another cluster role.

Important

The setup must meet the following requirement: the SAP ASCS/SCS instances and
the SOFS share must be deployed in separate clusters.

Important

In this scenario, the SAP ASCS/SCS instance is configured to access the SAP global
host by using UNC path \\<SAP global host>\sapmnt\<SID>\SYS.

Figure 5: An SAP ASCS/SCS instance and a scale-out file share deployed in two clusters

Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Microsoft
Windows Failover Cluster to reduce the total number of VMs.

This can either be local SAP Application Servers on an SAP ASCS/SCS cluster or an SAP
ASCS/SCS Cluster Role on Microsoft SQL Server Always On nodes.
Important

Installing a local SAP Application Server on a SQL Server Always On node is not
supported.

Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). To protect these SPOFs in a Windows environment, WSFC is used.

While the resource consumption of the SAP ASCS/SCS instance is fairly small, we
recommend reducing the memory configuration for either SQL Server or the SAP
Application Server by 2 GB.

SAP Application Servers on WSFC nodes using Windows SOFS

Note

The picture shows the use of additional local disks. This is optional for customers
who will not install application software on the OS drive (C:).

SAP ASCS/SCS on SQL Server Always On nodes using Windows SOFS

Note

The picture shows the use of additional local disks. This is optional for customers
who will not install application software on the OS drive (C:).

Important

In the Azure cloud, each cluster that is used for SAP and scale-out file shares must
be deployed in its own Azure availability set or across Azure Availability Zones. This
ensures distributed placement of the cluster VMs across the underlying Azure
infrastructure. Availability Zone deployments are supported with this technology.

Generic file share with SIOS DataKeeper as cluster shared disks

A generic file share is another option for achieving a highly available file share.
In this case, you can use a third-party SIOS solution as a cluster shared disk.

Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and file share for an SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and file share for an SAP
ASCS/SCS instance
Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in
Azure
Storage Spaces Direct in Windows Server 2016
Deep dive: Volumes in Storage Spaces Direct
Prepare Azure infrastructure for SAP
high availability by using a Windows
failover cluster and file share for SAP
ASCS/SCS instances
Article • 02/10/2023

This article describes the Azure infrastructure preparation steps that are needed to
install and configure high-availability SAP systems on a Windows Server Failover
Clustering (WSFC) cluster, using a scale-out file share as an option for clustering SAP
ASCS/SCS instances.

Prerequisite
Before you start the installation, review the following article:

Architecture guide: Cluster SAP ASCS/SCS instances on a Windows failover cluster
by using file share

Host names and IP addresses


Virtual host name role                      | Virtual host name | Static IP address | Availability set
First cluster node of the ASCS/SCS cluster  | ascs-1            | 10.0.6.4          | ascs-as
Second cluster node of the ASCS/SCS cluster | ascs-2            | 10.0.6.5          | ascs-as
Cluster network name                        | ascs-cl           | 10.0.6.6          | n/a
SAP PR1 ASCS cluster network name           | pr1-ascs          | 10.0.6.7          | n/a

Table 1: ASCS/SCS cluster

SAP <SID> | SAP ASCS/SCS instance number
PR1       | 00

Table 2: SAP ASCS/SCS instance details

Virtual host name role | Virtual host name | Static IP address            | Availability set
First cluster node     | sofs-1            | 10.0.6.10                    | sofs-as
Second cluster node    | sofs-2            | 10.0.6.11                    | sofs-as
Third cluster node     | sofs-3            | 10.0.6.12                    | sofs-as
Cluster network name   | sofs-cl           | 10.0.6.13                    | n/a
SAP global host name   | sapglobal         | Use IPs of all cluster nodes | n/a

Table 3: Scale-Out File Server cluster

Deploy VMs for an SAP ASCS/SCS cluster, a Database Management System (DBMS) cluster,
and SAP Application Server instances

To prepare the Azure infrastructure, complete the following:

Deploy the VMs.

Create and configure Azure Load balancer for SAP ASCS.

If using Enqueue Replication Server 2 (ERS2), perform the Azure Load Balancer
configuration for ERS2.

Add Windows virtual machines to the domain.

Add registry entries on both cluster nodes of the SAP ASCS/SCS instance.

As you use Windows Server 2016, we recommend that you configure Azure Cloud
Witness.

Deploy the Scale-Out File Server cluster manually

You can deploy the Microsoft Scale-Out File Server cluster manually, as described in the
blog Storage Spaces Direct in Azure, by executing the following code:

PowerShell

# Set an execution policy - all cluster nodes


Set-ExecutionPolicy Unrestricted
# Define Scale-Out File Server cluster nodes
$nodes = ("sofs-1", "sofs-2", "sofs-3")

# Add cluster and Scale-Out File Server features


Invoke-Command $nodes {Install-WindowsFeature Failover-Clustering, FS-
FileServer -IncludeAllSubFeature -IncludeManagementTools -Verbose}

# Test cluster
Test-Cluster -node $nodes -Verbose

# Install cluster
$ClusterNetworkName = "sofs-cl"
$ClusterIP = "10.0.6.13"
New-Cluster -Name $ClusterNetworkName -Node $nodes -NoStorage -StaticAddress
$ClusterIP -Verbose

# Set Azure Quorum


Set-ClusterQuorum -CloudWitness -AccountName gorcloudwitness -AccessKey
<YourAzureStorageAccessKey>

# Enable Storage Spaces Direct


Enable-ClusterS2D

# Create Scale-Out File Server with an SAP global host name


# SAPGlobalHostName
$SAPGlobalHostName = "sapglobal"
Add-ClusterScaleOutFileServerRole -Name $SAPGlobalHostName

Deploy Scale-Out File Server automatically


You can also automate the deployment of Scale-Out File Server by using Azure Resource
Manager templates in an existing virtual network and Active Directory environment.

Important

We recommend that you have three or more cluster nodes for Scale-Out File Server
with three-way mirroring.

In the Scale-Out File Server Resource Manager template UI, you must specify the
VM count.

Use managed disks


The Azure Resource Manager template for deploying Scale-Out File Server with Storage
Spaces Direct and Azure Managed Disks is available on GitHub .
We recommend that you use Managed Disks.

Figure 1: UI screen for Scale-Out File Server Resource Manager template with managed
disks

In the template, do the following:

1. In the Vm Count box, enter a minimum count of 2.


2. In the Vm Disk Count box, enter a minimum disk count of 3 (2 disks + 1 spare disk
= 3 disks).
3. In the Sofs Name box, enter the SAP global host network name, sapglobalhost.
4. In the Share Name box, enter the file share name, sapmnt.

Use unmanaged disks


The Azure Resource Manager template for deploying Scale-Out File Server with Storage
Spaces Direct and Azure Unmanaged Disks is available on GitHub .

Figure 2: UI screen for the Scale-Out File Server Azure Resource Manager template without
managed disks

In the Storage Account Type box, select Premium Storage. All other settings are the
same as the settings for managed disks.

Adjust cluster timeout settings


After you successfully install the Windows Scale-Out File Server cluster, adapt timeout
thresholds for failover detection to conditions in Azure. The parameters to be changed
are documented in Tuning failover cluster network thresholds. Assuming that your
clustered VMs are in the same subnet, change the following parameters to these values:

SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30

These settings were tested with customers and offer a good compromise: they are
resilient enough, but they also provide fast enough failover in real error conditions or
on VM failure.
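
A minimal sketch of applying these values with the FailoverClusters PowerShell module, run on one of the cluster nodes:

PowerShell

# Adjust heartbeat tuning of the current cluster to the recommended Azure values
(Get-Cluster).SameSubnetDelay     = 2000   # heartbeat interval in milliseconds
(Get-Cluster).SameSubnetThreshold = 15     # missed heartbeats before a node is considered down
(Get-Cluster).RouteHistoryLength  = 30     # keep route history at twice the threshold

# Verify the new values
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, RouteHistoryLength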

Next steps
Install SAP NetWeaver high availability on a Windows failover cluster and file share
for SAP ASCS/SCS instances
Install SAP NetWeaver high availability
on a Windows failover cluster and file
share for SAP ASCS/SCS instances on
Azure
Article • 02/10/2023

This article describes how to install and configure a high-availability SAP system on
Azure, with Windows Server Failover Cluster (WSFC) and Scale-Out File Server as an
option for clustering SAP ASCS/SCS instances.

Prerequisites
Before you start the installation, review the following articles:

Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover
cluster by using file share

Prepare Azure infrastructure for SAP high availability by using a Windows failover
cluster and file share for SAP ASCS/SCS instances

High availability for SAP NetWeaver on Azure VMs

You need the following executables and DLLs from SAP:

SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or


later.
SAP Kernel 7.49 or later

Important

Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
NetWeaver 7.40 (and later), with SAP Kernel 7.49 (and later).

Important

The setup must meet the following requirement: the SAP ASCS/SCS instances and
the SOFS share must be deployed in separate clusters.
We do not describe the Database Management System (DBMS) setup because setups
vary depending on the DBMS you use. However, we assume that high-availability
concerns with the DBMS are addressed with the functionalities that various DBMS
vendors support for Azure. Such functionalities include Always On or database mirroring
for SQL Server, and Oracle Data Guard for Oracle databases. In the scenario we use in
this article, we didn't add more protection to the DBMS.

There are no special considerations when various DBMS services interact with this kind
of clustered SAP ASCS/SCS configuration in Azure.

Note

The installation procedures of SAP NetWeaver ABAP systems, Java systems, and
ABAP+Java systems are almost identical. The most significant difference is that an
SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance
running in the same Microsoft failover cluster group. Any installation differences for
each SAP NetWeaver installation stack are explicitly mentioned. You can assume
that all other parts are the same.

Prepare an SAP global host on the SOFS cluster


Create the following volume and file share on the SOFS cluster:

SAP GLOBALHOST file structure C:\ClusterStorage\Volume1\usr\sap\<SID>\SYS\ on the
SOFS cluster shared volume (CSV)

SAPMNT file share

Set security on the SAPMNT file share and folder with full control for:
The <DOMAIN>\SAP_<SID>_GlobalAdmin user group
The SAP ASCS/SCS cluster node computer objects <DOMAIN>\ClusterNode1$
and <DOMAIN>\ClusterNode2$

To create a CSV volume with mirror resiliency, execute the following PowerShell cmdlet
on one of the SOFS cluster nodes:

PowerShell

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SAPPR1 -FileSystem
CSVFS_ReFS -Size 5GB -ResiliencySettingName Mirror
To create SAPMNT and set folder and share security, execute the following PowerShell
script on one of the SOFS cluster nodes:

PowerShell

# Create SAPMNT on file share


$SAPSID = "PR1"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSClusterNode1 = "ascs-1"
$ASCSClusterNode2 = "ascs-2"

# Define SAP ASCS/SCS cluster node computer objects


$ASCSClusterObjectNode1 = "$DomainName\$ASCSClusterNode1$"
$ASCSClusterObjectNode2 = "$DomainName\$ASCSClusterNode2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\$SAPSID\SYS"
New-Item -Path $SAPGlobalFolder -ItemType Directory

$UsrSAPFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\"

# Create a SAPMNT file share and set share security


New-SmbShare -Name sapmnt -Path $UsrSAPFolder -FullAccess
"BUILTIN\Administrators", $ASCSClusterObjectNode1, $ASCSClusterObjectNode2 -
ContinuouslyAvailable $true -CachingMode None -Verbose

# Get SAPMNT file share security settings


Get-SmbShareAccess sapmnt

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder

# Add a security object of the clusternode1$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode1,"
FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add a security object of the clusternode2$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode2,"
FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
Create a virtual host name for the clustered
SAP ASCS/SCS instance
Create an SAP ASCS/SCS cluster network name (for example, pr1-ascs [10.0.6.7]), as
described in Create a virtual host name for the clustered SAP ASCS/SCS instance.

Install ASCS/SCS and ERS instances in the cluster

Install an ASCS/SCS instance on the first ASCS/SCS cluster node

Install an SAP ASCS/SCS instance on the first cluster node. To install the instance, in the
SAP SWPM installation tool, go to:

<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-
Availability System > ASCS/SCS instance > First cluster node.

Add a probe port


Configure an SAP cluster resource, the SAP-SID-IP probe port, by using PowerShell.
Execute this configuration on one of the SAP ASCS/SCS cluster nodes, as described in
this article.
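
As a sketch, the configuration boils down to setting the ProbePort private property of the SAP <SID> cluster IP address resource. The resource name below is an example; check the actual name with Get-ClusterResource, and make sure the port matches the health probe of the internal load balancer.

PowerShell

$SAPSID = "PR1"
$ProbePort = 62000                    # must match the Azure internal load balancer probe port
$IPResourceName = "SAP $SAPSID IP"    # example name - verify with Get-ClusterResource

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Name ProbePort -Value $ProbePort

# Take the SAP <SID> cluster role offline and online again for the change to take effect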

Install an ASCS/SCS instance on the second ASCS/SCS cluster node

Install an SAP ASCS/SCS instance on the second cluster node. To install the instance, in
the SAP SWPM installation tool, go to:

<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-
Availability System > ASCS/SCS instance > Additional cluster node.

Update the SAP ASCS/SCS instance profile


Update parameters in the SAP ASCS/SCS instance profile <SID>_ASCS/SCS<Nr>_<Host>.

Parameter name               | Parameter value
gw/netstat_once              | 0
enque/encni/set_so_keepalive | true
service/ha_check_node        | 1
The parameter enque/encni/set_so_keepalive is only needed if you use ENSA1.

Restart the SAP ASCS/SCS instance. To set the KeepAlive parameters on both SAP
ASCS/SCS cluster nodes, follow the instructions in Set registry entries on the cluster
nodes of the SAP ASCS/SCS instance.

Install a DBMS instance and SAP application servers

Finalize your SAP system installation by installing:

A DBMS instance.
A primary SAP application server.
An additional SAP application server.

Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks - Official
SAP guidelines for high-availability file share

Storage Spaces Direct in Windows Server 2016

Scale-Out File Server for application data overview

What's new in storage in Windows Server 2016


SAP ASCS/SCS instance multi-SID high
availability with Windows Server
Failover Clustering and file share on
Azure
Article • 02/10/2023


You can manage multiple virtual IP addresses by using an Azure internal load balancer.

If you have an SAP deployment, you can use an internal load balancer to create a
Windows cluster configuration for SAP Central Services (ASCS/SCS) instances.

This article focuses on how to move from a single ASCS/SCS installation to an SAP
multi-SID configuration by installing additional SAP ASCS/SCS clustered instances into
an existing Windows Server Failover Clustering (WSFC) cluster with file share. When this
process is completed, you have configured an SAP multi-SID cluster.

Note

This feature is available only in the Azure Resource Manager deployment model.

There is a limit on the number of private front-end IPs for each Azure internal load
balancer.

The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to
the maximum number of private front-end IPs for each Azure internal load
balancer.

The configuration introduced in this documentation is not yet supported for use
with Azure Availability Zones.

For more information about load-balancer limits, see the "Private front-end IP per load
balancer" section in Networking limits: Azure Resource Manager. Also consider using the
Azure Standard Load Balancer SKU instead of the basic SKU of the Azure load balancer.

Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by
using file share, as shown in this diagram.

Figure 1: An SAP ASCS/SCS instance and SOFS deployed in two clusters

Important

The setup must meet the following conditions:

The SAP ASCS/SCS instances must share the same WSFC cluster.
Different SAP Global Hosts file shares belonging to different SAP SIDs must
share the same SOFS cluster.
The SAP ASCS/SCS instances and the SOFS shares must not be combined in
the same cluster.
Each database management system (DBMS) SID must have its own dedicated
WSFC cluster.
SAP application servers that belong to one SAP system SID must have their
own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in
the same cluster is not supported.
SAP ASCS/SCS multi-SID architecture with file
share
The goal is to install multiple SAP Advanced Business Application Programming (ASCS)
or SAP Java (SCS) clustered instances in the same WSFC cluster, as illustrated here:

Figure 2: SAP multi-SID configuration in two clusters

The installation of an additional SAP <SID2> system is identical to the installation of
one <SID> system. Two additional preparation steps are required on the ASCS/SCS
cluster as well as on the file share SOFS cluster.

Prepare the infrastructure for an SAP multi-SID scenario

Prepare the infrastructure on the domain controller


Create the domain group <Domain>\SAP_<SID2>_GlobalAdmin, for example, with
<SID2> = PR2. The domain group name is <Domain>\SAP_PR2_GlobalAdmin.
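
A minimal sketch of creating this group with the ActiveDirectory PowerShell module (the OU path and domain components are examples):

PowerShell

# Create the global security group for the second SAP SID
New-ADGroup -Name "SAP_PR2_GlobalAdmin" -GroupScope Global -GroupCategory Security `
    -Path "OU=SAP,DC=contoso,DC=corp"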

Prepare the infrastructure on the ASCS/SCS cluster


You must prepare the infrastructure on the existing ASCS/SCS cluster for a second SAP
<SID>:

Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS
server.
Add an IP address to an existing Azure internal load balancer by using PowerShell.

These steps are described in Infrastructure preparation for an SAP multi-SID scenario.
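
As an illustration of the second step, here's a minimal Az PowerShell sketch that adds a front-end IP, a health probe, and an HA ports rule for the second SID to an existing internal load balancer. All names, the IP address, and the probe port are examples.

PowerShell

$lb = Get-AzLoadBalancer -Name "sap-ascs-ilb" -ResourceGroupName "sap-rg"
$subnet = Get-AzVirtualNetwork -Name "sap-vnet" -ResourceGroupName "sap-rg" |
    Get-AzVirtualNetworkSubnetConfig -Name "sap-subnet"

# Add a second front-end IP and health probe for SAP <SID2>, then persist the change
$lb | Add-AzLoadBalancerFrontendIpConfig -Name "pr2-ascs-frontend" -PrivateIpAddress "10.0.6.8" -Subnet $subnet |
      Add-AzLoadBalancerProbeConfig -Name "pr2-ascs-probe" -Protocol Tcp -Port 62010 -IntervalInSeconds 5 -ProbeCount 2 |
      Set-AzLoadBalancer

# Reference the new objects and add an HA ports rule with floating IP
$lb = Get-AzLoadBalancer -Name "sap-ascs-ilb" -ResourceGroupName "sap-rg"
$feip  = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "pr2-ascs-frontend"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "pr2-ascs-probe"
$lb | Add-AzLoadBalancerRuleConfig -Name "pr2-ascs-rule" -Protocol All -FrontendPort 0 -BackendPort 0 `
    -FrontendIpConfiguration $feip -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $probe -EnableFloatingIP | Set-AzLoadBalancer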

Prepare the infrastructure on an SOFS cluster by using the existing SAP Global Host

You can reuse the existing <SAPGlobalHost> and Volume1 of the first SAP <SID1>
system.

Figure 3: Multi-SID SOFS is the same as the SAP Global Host name
Important

For the second SAP <SID2> system, the same Volume1 and the same
<SAPGlobalHost> network name are used. Because you have already set SAPMNT
as the share name for various SAP systems, to reuse the <SAPGlobalHost> network
name, you must use the same Volume1.

The file path for the <SID2> global host is
C:\ClusterStorage\Volume1\usr\sap\<SID2>\SYS.

For the <SID2> system, you must prepare the SAP Global Host ..\SYS.. folder on the
SOFS cluster.

To prepare the SAP Global Host for the <SID2> instance, execute the following
PowerShell script:

PowerShell

##################
# SAP multi-SID
##################

$SAPSID2 = "PR2"
$DomainName2 = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName2 = "$DomainName2\SAP_" + $SAPSID2 +
"_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSCluster2Node1 = "ja1-ascs-0"
$ASCSCluster2Node2 = "ja1-ascs-1"

# Define the SAP ASCS/SCS cluster node computer objects


$ASCSCluster2ObjectNode1 = "$DomainName2\$ASCSCluster2Node1$"
$ASCSCluster2ObjectNode2 = "$DomainName2\$ASCSCluster2Node2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder2 = "C:\ClusterStorage\Volume1\usr\sap\$SAPSID2\SYS"
New-Item -Path $SAPGlobalFolder2 -ItemType Directory

# Add permissions for the SAP SID2 system


Grant-SmbShareAccess -Name sapmnt -AccountName $SAPSIDGlobalAdminGroupName2,
$ASCSCluster2ObjectNode1, $ASCSCluster2ObjectNode2 -AccessRight Full -Force

$UsrSAPFolder = "C:\ClusterStorage\Volume1\usr\sap\"

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder
# Add the security object of the SAP_<sid>_GlobalAdmin group
$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($SAPSIDGlobalAdminGroupNa
me2,"FullControl", 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode1$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSCluster2ObjectNode1,
"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode2$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSCluster2ObjectNode2,
"FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose

Prepare the infrastructure on the SOFS cluster by using a different SAP Global Host

You can configure the second SOFS (for example, the second SOFS cluster role with
<SAPGlobalHost2> and a different Volume2 for the second <SID2>).
Figure 4: Multi-SID SOFS is the same as SAP GLOBAL host name 2

To create the second SOFS role with <SAPGlobalHost2>, execute this PowerShell script:

PowerShell

# Create SOFS with SAP Global Host Name 2


$SAPGlobalHostName = "sapglobal2"
Add-ClusterScaleOutFileServerRole -Name $SAPGlobalHostName

Create the second Volume2. Execute this PowerShell script:

PowerShell

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SAPPR2 -FileSystem


CSVFS_ReFS -Size 5GB -ResiliencySettingName Mirror
Figure 5: Second Volume2 in Failover Cluster Manager

Create an SAP Global folder for the second <SID2>, and set file security.

Execute this PowerShell script:

PowerShell

# Create a folder for <SID2> on a second Volume2 and set file security
$SAPSID = "PR2"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"

# SAP ASCS/SCS cluster nodes


$ASCSClusterNode1 = "ascs-1"
$ASCSClusterNode2 = "ascs-2"

# Define SAP ASCS/SCS cluster node computer objects


$ASCSClusterObjectNode1 = "$DomainName\$ASCSClusterNode1$"
$ASCSClusterObjectNode2 = "$DomainName\$ASCSClusterNode2$"

# Create usr\sap\.. folders on CSV


$SAPGlobalFolder = "C:\ClusterStorage\Volume2\usr\sap\$SAPSID\SYS"
New-Item -Path $SAPGlobalFolder -ItemType Directory

$UsrSAPFolder = "C:\ClusterStorage\Volume2\usr\sap\"

# Set file and folder security


$Acl = Get-Acl $UsrSAPFolder
# Add the file security object of the SAP_<sid>_GlobalAdmin group
$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($SAPSIDGlobalAdminGroupNa
me,"FullControl", 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode1$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode1,"
FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Add the security object of the clusternode2$ computer object


$Ar = New-Object
system.security.accesscontrol.filesystemaccessrule($ASCSClusterObjectNode2,"
FullControl",'ContainerInherit,ObjectInherit', 'None', 'Allow')
$Acl.SetAccessRule($Ar)

# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose

To create a SAPMNT file share on Volume2 with the <SAPGlobalHost2> host name for
the second SAP <SID2>, start the Add File Share wizard in Failover Cluster Manager.

Right-click the sapglobal2 SOFS cluster group, and then select Add File Share.

Figure 6: Start "Add File Share" wizard


Figure 7: Select "SMB Share – Quick"

Figure 8: Select "sapglobalhost2" and specify path on Volume2


Figure 9: Set file share name to "sapmnt"

Figure 10: Disable all settings

Assign Full control permissions to files and sapmnt share for:

The SAP_<SID>_GlobalAdmin domain user group
The computer objects of the ASCS/SCS cluster nodes, ascs-1$ and ascs-2$
Figure 11: Assign "Full control" to user group and computer accounts

Figure 12: Select "Create"


Figure 13: The second sapmnt bound to sapglobal2 host and Volume2 is created
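
Alternatively to the wizard, the scoped share can be created with PowerShell on one of the SOFS cluster nodes. This is a sketch that assumes the sapglobal2 SOFS role and Volume2 already exist:

PowerShell

# Create the sapmnt share scoped to the second SOFS network name
New-SmbShare -Name sapmnt -Path "C:\ClusterStorage\Volume2\usr\sap\" -ScopeName sapglobal2 `
    -FullAccess "SAPCLUSTER\SAP_PR2_GlobalAdmin", "SAPCLUSTER\ascs-1$", "SAPCLUSTER\ascs-2$" `
    -ContinuouslyAvailable $true -CachingMode None -Verbose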

Install SAP NetWeaver multi-SID

Install SAP <SID2> ASCS/SCS and ERS instances


Follow the same installation and configuration steps as described earlier for one SAP
<SID>.

Install DBMS and SAP application servers


Install the DBMS and SAP application servers as described earlier.

Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks : Official
SAP guidelines for an HA file share

Storage spaces direct in Windows Server 2016

Scale-out file server for application data overview

What's new in storage in Windows Server 2016


Using Windows DFS-N to support
flexible SAPMNT share creation for
SMB-based file share
Article • 02/10/2023

Introduction
SAP instances like ASCS/SCS based on WSFC require the SAP files to be installed on a
shared drive. SAP supports either a Cluster Shared Disk or a File Share Cluster to host
these files.
SWPM selection screen for Cluster Share configuration option

For installations based on Azure NetApp Files SMB, the option File Share Cluster needs
to be selected. In the follow-up screen, the File Share Host Name needs to be supplied.
SWPM selection screen for Cluster Share Host Name configuration

The Cluster Share Host Name is based on the chosen installation option. For Azure
NetApp Files SMB, it is the name used to join the NetApp account to the Active Directory
of the installation. In SAP terms, this name is the so-called SAPGLOBALHOST. SWPM
internally appends sapmnt to the host name, resulting in the \\SAPGLOBALHOST\sapmnt
share. Unfortunately, sapmnt can be created only once per NetApp account, which is
restrictive. DFS-N can be used to create virtual share names that can be assigned to
differently named shares. Rather than having to use sapmnt as the share name as
mandated by SWPM, a unique name like sapmnt-sid can be used. The same is valid for
the global transport directory. Since trans is the expected name of the global transport
directory, the SAP DIR_TRANS profile parameter in the DEFAULT.PFL profile needs to be
adjusted.

As an example, the following shares can be created by using DFS-N:

\\contoso.local\sapmnt\D01 pointing to \\ANF-670f.contoso.corp\d01-sapmnt

\\contoso.local\sapmnt\erp-trans pointing to \\ANF-670f.contoso.corp\erp-trans


with DIR_TRANS = \\contoso.local\sapmnt\erp-trans in the DEFAULT.PFL profile.

Microsoft DFS-N
DFS Namespaces overview provides an introduction and the installation instructions for
DFS-N.

Setting up Folder Targets for Azure NetApp Files SMB

Folder Targets for Azure NetApp Files SMB are volumes that are technically created the
same way as described in High availability for SAP NetWeaver on Azure VMs on Windows
with Azure NetApp Files (SMB) for SAP applications, without using DFS-N.

Portal screenshot with existing ANF volumes.

Configuring DFS-N for SAPMNT


The following sequence shows the individual steps of initially configuring DFS-N.

Start the DFS Management console from the Windows Administrative Tools in the
Windows Server Start Menu.

This screen shows the opening DFS screen.


In this screen, an AD-joined Windows Server with DFS installed has to be selected.
In this screen, the name of the second part of the Namespace root is defined. Here,
sapmnt has to be supplied, which is part of the SAP naming convention.
In this step, the Namespace type is defined. This input also determines the name of the
first part of the Namespace root. DFS supports domain-based or stand-alone namespaces.
In a Windows-based installation, domain-based is the default, so the setup of the
Namespace server needs to be domain-based. Based on this choice, the domain name
becomes the first part of the Namespace root. Because the AD/domain name here is
contoso.corp, the Namespace root is \\contoso.corp\sapmnt.

Under the Namespace root, numerous Namespace folders can be created. Each of them
points to a Folder Target. While the name of the Folder Target can be chosen freely, the
name of the Namespace folder has to match a valid SAP SID. In combination, this creates
a valid, SWPM-compliant UNC share. This mechanism is also used to create the trans
directory in order to provide an SAP transport directory.
The screenshot shows an example of such a configuration.
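
The equivalent setup can also be scripted with the DFSN PowerShell module. This is a minimal sketch that assumes the DFS Namespace role is installed and that dfsn-server1 (a hypothetical namespace server) already exposes an SMB share named sapmnt for the namespace root; the folder targets follow the examples above.

PowerShell

# Create the domain-based namespace root \\contoso.corp\sapmnt
New-DfsnRoot -Path "\\contoso.corp\sapmnt" -TargetPath "\\dfsn-server1\sapmnt" -Type DomainV2

# Namespace folder named after the SAP SID, pointing to the ANF volume
New-DfsnFolder -Path "\\contoso.corp\sapmnt\P01" -TargetPath "\\ANF-670f.contoso.corp\p01-sapmnt"

# Namespace folder for the transport directory
New-DfsnFolder -Path "\\contoso.corp\sapmnt\trans" -TargetPath "\\ANF-670f.contoso.corp\erp-trans"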

Adding additional DFS namespace servers to increase resiliency

The domain-based Namespace server setup easily allows adding extra Namespace
servers. Similar to having multiple domain controllers for redundancy in Active
Directory, where critical information is replicated between the domain controllers,
adding extra Namespace servers does the same for DFS-N. Namespace servers can be
domain controllers, cluster nodes, or stand-alone domain-joined servers. Before using
any of them, the DFS-N role needs to be installed.

By right-clicking the Namespace root, the Add Namespace Server dialog is opened.
In this screen, the name of the Namespace server can be supplied directly. Alternatively,
the Browse button can be used to list the already existing servers.

Overview of existing Namespace servers.

Adding folders to an Azure NetApp Files SMB-based Namespace root

The following sequence shows how to create folders in DFS-N and assign them to Folder
Targets.
In the DFS Management console, right-click the Namespace root and select New
Folder.

This step opens the New Folder dialog. Supply either a valid SID (in this case, P01), or
use trans if the intention is to create a transport directory.

In the portal, get the mount instructions for the volume you want to use as a folder
target, copy the UNC name, and paste it as shown above.
This screen shows an example of the folder setup for an SAP landscape.
Deploy SAP dialog instances with SAP
ASCS/SCS high-availability VMs on
RHEL
Article • 02/29/2024

This article describes how to install and configure Primary Application Server (PAS) and
Additional Application Server (AAS) dialog instances on the same ABAP SAP Central
Services (ASCS)/SAP Central Services (SCS) high-availability cluster running on Red Hat
Enterprise Linux (RHEL).

References
Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in
Pacemaker
Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL
7.5+ and RHEL 8
SAP Note 1928533 , which has:
A list of Azure virtual machine (VM) sizes that are supported for the deployment
of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 lists the recommended OS settings for Red Hat Enterprise
Linux 7.x.
SAP Note 2772999 lists the recommended OS settings for Red Hat Enterprise
Linux 8.x.
SAP Note 2009879 has SAP HANA guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in Pacemaker cluster
General RHEL documentation:
High-Availability Add-On Overview
High-Availability Add-On Administration
High-Availability Add-On Reference
Azure-specific RHEL documentation:
Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

Overview
This article describes the cost optimization scenario where you deploy PAS and AAS
dialog instances with SAP ASCS/SCS and Enqueue Replication Server (ERS) instances in a
high-availability setup. To minimize the number of VMs for a single SAP system, you
want to install PAS and AAS on the same host where SAP ASCS/SCS and SAP ERS are
running. With SAP ASCS/SCS being configured in a high-availability cluster setup, you
want PAS and AAS also to be managed by the cluster. The configuration is basically an
addition to an already configured SAP ASCS/SCS cluster setup. In this setup, PAS and
AAS are installed on a virtual host name, and its instance directory is managed by the
cluster.

For this setup, PAS and AAS require a highly available instance directory
( /usr/sap/<SID>/D<nr> ). You can place the instance directory file system on the same
highly available storage that you used for the ASCS and ERS instance configuration. The
presented architecture showcases NFS on Azure Files or Azure NetApp Files for a highly
available instance directory for the setup.

The example shown in this article to describe deployment uses the following system
information:


Instance name                       | Instance number | Virtual host name | Virtual IP (Probe port)
ABAP SAP Central Services (ASCS)    | 00              | sapascs           | 10.90.90.10 (62000)
Enqueue Replication Server (ERS)    | 01              | sapers            | 10.90.90.9 (62001)
Primary Application Server (PAS)    | 02              | sappas            | 10.90.90.30 (62002)
Additional Application Server (AAS) | 03              | sapaas            | 10.90.90.31 (62003)
SAP system identifier               | NW1             | ---               | ---

Note

Install more SAP application instances on separate VMs if you want to scale out.

Important considerations for the cost-optimization solution

Only two dialog instances, PAS and one AAS, can be deployed with an SAP
ASCS/SCS cluster setup.
If you want to scale out your SAP system with more application servers (like
sapa03 and sapa04), you can install them in separate VMs. With PAS and AAS
being installed on virtual host names, you can install more application servers by
using either a physical or virtual host name in separate VMs. To learn more about
how to assign a virtual host name to a VM, see the blog Use SAP Virtual Host
Names with Linux in Azure .
With a PAS and AAS deployment with an SAP ASCS/SCS cluster setup, the instance
numbers of ASCS, ERS, PAS, and AAS must be different.
Consider sizing your VM SKUs appropriately based on the sizing guidelines. You
must factor in the cluster behavior where multiple SAP instances (ASCS, ERS, PAS,
and AAS) might run on a single VM when another VM in the cluster is unavailable.
The dialog instances (PAS and AAS) running with an SAP ASCS/SCS cluster setup
must be installed by using a virtual host name.
You also must use the same storage solution of the SAP ASCS/SCS cluster setup to
deploy PAS and AAS instances. For example, if you configured an SAP ASCS/SCS
cluster by using NFS on Azure Files, the same storage solution must be used to
deploy PAS and AAS.
The instance directory /usr/sap/<SID>/D<nr> of PAS and AAS must be mounted on
an NFS file system and are managed as a resource by the cluster.

Note

For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on
NFS on Azure Files.

To install more application servers on separate VMs, you can either use NFS shares
or a local managed disk for an instance directory file system. If you're installing
more application servers for an SAP J2EE system, /usr/sap/<SID>/J<nr> on NFS on
Azure Files isn't supported.
In a traditional SAP ASCS/SCS high-availability configuration, application server
instances running on separate VMs aren't affected when there's any effect on SAP
ASCS and ERS cluster nodes. But with the cost-optimization configuration, either
the PAS or AAS instance restarts when there's an effect on one of the nodes in the
cluster.
See NFS on Azure Files considerations and Azure NetApp Files considerations
because the same considerations apply to this setup.

Prerequisites
The configuration described in this article is an addition to your already configured SAP
ASCS/SCS cluster setup. In this configuration, PAS and AAS are installed on a virtual host
name, and its instance directory is managed by the cluster. Based on your storage, use
the steps described in the following articles to configure the SAPInstance resource for
the SAP ASCS and SAP ERS instance in the cluster.

NFS on Azure Files: Azure VMs high availability for SAP NW on RHEL with NFS on
Azure Files
Azure NetApp Files: Azure VMs high availability for SAP NW on RHEL with Azure
NetApp Files

After you install the ASCS, ERS, and Database instance by using Software Provisioning
Manager (SWPM), follow the next steps to install the PAS and AAS instances.

Configure Azure Load Balancer for PAS and AAS

This article assumes that you already configured the load balancer for the SAP ASCS/SCS
cluster setup as described in Configure Azure Load Balancer. In the same Azure Load
Balancer instance, follow these steps to create more front-end IPs and load-balancing
rules for PAS and AAS.

1. Open the internal load balancer that was created for the SAP ASCS/SCS cluster
setup.
2. Frontend IP Configuration: Create two front-end IPs, one for PAS and another for
AAS (for example, 10.90.90.30 and 10.90.90.31).
3. Backend Pool: This pool remains the same because we're deploying PAS and AAS
on the same back-end pool.
4. Inbound rules: Create two load-balancing rules, one for PAS and another for AAS.
Follow the same steps for both load-balancing rules.
5. Frontend IP address: Select the front-end IP.
a. Backend pool: Select the back-end pool.
b. High availability ports: Select this option.
c. Protocol: Select TCP.
d. Health Probe: Create a health probe with the following details (applies for both
PAS and AAS):
i. Protocol: Select TCP.
ii. Port: For example, 620<Instance-no.>: 62002 for PAS and 62003 for AAS.
iii. Interval: Enter 5.
iv. Probe Threshold: Enter 2.
e. Idle timeout (minutes): Enter 30.
f. Enable Floating IP: Select this option.

The health probe configuration property numberOfProbes , otherwise known as


Unhealthy threshold in the Azure portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to 2 . It's
currently not possible to set this property by using the Azure portal. Use either the
Azure CLI or the PowerShell command.

Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
more IP addresses for the VMs, deploy a second NIC.

When VMs without public IP addresses are placed in the back-end pool of an internal
(no public IP address) Standard Azure Load Balancer instance, there's no outbound
internet connectivity unless more configuration is performed to allow routing to public
endpoints. For steps on how to achieve outbound connectivity, see Public endpoint
connectivity for virtual machines using Azure Standard Load Balancer in SAP high-
availability scenarios.

Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
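
A minimal sketch of setting the parameter persistently on the cluster VMs (the drop-in file name is an example):

Bash

# Apply immediately
sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/98-sap-loadbalancer.conf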

Prepare servers for PAS and AAS installation


When steps in this document are marked with the following prefixes, they mean:

[A]: Applicable to all nodes.


[1]: Only applicable to node 1.
[2]: Only applicable to node 2.

1. [A] Set up host name resolution.


You can either use a DNS server or modify /etc/hosts on all nodes. This example
shows how to use the /etc/hosts file. Replace the IP address and the host name in
the following commands:

Bash

sudo vi /etc/hosts

# IP address of cluster node 1


10.90.90.7 sap-cl1
# IP address of cluster node 2
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP
Netweaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP
Netweaver ERS
10.90.90.9 sapers
# IP address of the load balancer frontend configuration for SAP
Netweaver PAS
10.90.90.30 sappas
# IP address of the load balancer frontend configuration for SAP
Netweaver AAS
10.90.90.31 sapaas

2. [1] Create the SAP directories on the NFS share. Mount the NFS share sapnw1
temporarily on one of the VMs, and create the SAP directories to be used as
nested mount points.

a. If you're using NFS on Azure Files:

Bash

# mount temporarily the volume


sudo mkdir -p /saptmp
sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1
/saptmp -o noresvport,vers=4,minorversion=1,sec=sys

# create the SAP directories


sudo cd /saptmp
sudo mkdir -p usrsapNW1D02
sudo mkdir -p usrsapNW1D03

# unmount the volume and delete the temporary directory


cd ..
sudo umount /saptmp
sudo rmdir /saptmp

b. If you're using Azure NetApp Files:


Bash

# mount temporarily the volume


sudo mkdir -p /saptmp

# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp
10.90.91.5:/sapnw1 /saptmp
# If using NFSv4.1
sudo mount -t nfs -o
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp
10.90.91.5:/sapnw1 /saptmp

# create the SAP directories


sudo cd /saptmp
sudo mkdir -p usrsapNW1D02
sudo mkdir -p usrsapNW1D03

# unmount the volume and delete the temporary directory


sudo cd ..
sudo umount /saptmp
sudo rmdir /saptmp

3. [A] Create the shared directories.

Bash

sudo mkdir -p /usr/sap/NW1/D02


sudo mkdir -p /usr/sap/NW1/D03

sudo chattr +i /usr/sap/NW1/D02


sudo chattr +i /usr/sap/NW1/D03

4. [A] Configure swap space. When you install a dialog instance with central services,
you must configure more swap space.

Bash

sudo vi /etc/waagent.conf

# Check if property ResourceDisk.Format is already set to y and if not,


set it
ResourceDisk.Format=y

# Set the property ResourceDisk.EnableSwap to y


# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y

# Set the size of the SWAP file with property ResourceDisk.SwapSizeMB


# The free space of resource disk varies by virtual machine size. Make
sure that you do not set a value that is too big. You can check the
SWAP space with command swapon
# Size of the swapfile.
#ResourceDisk.SwapSizeMB=2000
ResourceDisk.SwapSizeMB=10480

Restart the agent to activate the change.

Bash

sudo service waagent restart

5. [A] Add firewall rules for PAS and AAS.

Bash

# Probe and gateway port for PAS and AAS


sudo firewall-cmd --zone=public --add-port={62002,62003,3302,3303}/tcp
--permanent
sudo firewall-cmd --zone=public --add-port={62002,62003,3302,3303}/tcp

Install an SAP Netweaver PAS instance


1. [1] Check the status of the cluster. Before you configure a PAS resource for
installation, make sure the ASCS and ERS resources are configured and started.

Bash

sudo pcs status

# Online: [ sap-cl1 sap-cl2 ]


#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl2

2. [1] Create file system, virtual IP, and health probe resources for the PAS instance.

Bash

sudo pcs node standby sap-cl2


sudo pcs resource create vip_NW1_PAS IPaddr2 ip=10.90.90.30 --group g-
NW1_PAS
sudo pcs resource create nc_NW1_PAS azure-lb port=62002 --group g-
NW1_PAS

# If using NFS on Azure files


sudo pcs resource create fs_NW1_PAS Filesystem
device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1D02' \
directory='/usr/sap/NW1/D02' fstype='nfs' force_unmount=safe
options='noresvport,vers=4,minorversion=1,sec=sys' \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=40 \
--group g-NW1_PAS

# If using NFsv3 on Azure NetApp Files


sudo pcs resource create fs_NW1_PAS Filesystem
device='10.90.91.5:/sapnw1/usrsapNW1D02' \
directory='/usr/sap/NW1/D02' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=40 \
--group g-NW1_PAS

# If using NFSv4.1 on Azure NetApp Files


sudo pcs resource create fs_NW1_PAS Filesystem
device='10.90.91.5:/sapnw1/usrsapNW1D02' \
directory='/usr/sap/NW1/D02' fstype='nfs' force_unmount=safe
options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=105 \
--group g-NW1_PAS

Make sure that the cluster status is okay and that all resources are started. It isn't
important on which node the resources are running.

Bash

sudo pcs status


# Node List:
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1

3. [1] Change the ownership of the /usr/sap/SID/D02 folder after the file system is
mounted.

Bash

sudo chown nw1adm:sapsys /usr/sap/NW1/D02

4. [1] Install the SAP Netweaver PAS.

Install the SAP NetWeaver PAS as a root on the first node by using a virtual host
name that maps to the IP address of the load balancer front-end configuration for
the PAS. For example, use sappas, 10.90.90.30, and the instance number that you
used for the probe of the load balancer, for example 02.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a nonroot
user to connect to sapinst.
Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin


SAPINST_USE_HOSTNAME=<pas_virtual_hostname>

5. Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.

Bash

sudo vi /usr/sap/sapservices

# On the node where PAS is installed, comment out the following lines.
# LD_LIBRARY_PATH=/usr/sap/NW1/D02/exe:$LD_LIBRARY_PATH;export
LD_LIBRARY_PATH;/usr/sap/NW1/D02/exe/sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_D02_sappas -D -u nw1adm

6. [1] Create the PAS cluster resource.

Bash

# If using NFS on Azure Files or NFSv3 on Azure NetApp Files


pcs resource create rsc_sap_NW1_PAS02 SAPInstance
InstanceName="NW1_D02_sappas" \
START_PROFILE=/sapmnt/NW1/profile/NW1_D02_sappas \
op monitor interval=20 timeout=60 \
--group g-NW1_PAS

# If using NFSv4.1 on Azure NetApp Files


pcs resource create rsc_sap_NW1_PAS02 SAPInstance
InstanceName="NW1_D02_sappas" \
START_PROFILE=/sapmnt/NW1/profile/NW1_D02_sappas \
op monitor interval=20 timeout=105 \
--group g-NW1_PAS

Check the status of the cluster.

Bash

sudo pcs status

# Node List:
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-
cl1

7. Configure a constraint to start the PAS resource group only after the ASCS instance
is started.

Bash

sudo pcs constraint order g-NW1_ASCS then g-NW1_PAS kind=Optional


symmetrical=false

Install an SAP Netweaver AAS instance


1. [2] Check the status of the cluster. Before you configure an AAS resource for
installation, make sure the ASCS, ERS, and PAS resources are started.

Bash
sudo pcs status

# Node List:
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-
cl1

2. [2] Create file system, virtual IP, and health probe resources for the AAS instance.

Bash

sudo pcs node unstandby sap-cl2


# Disable PAS resource as it will fail on sap-cl2 due to missing
environment variables like hdbuserstore.
sudo pcs resource disable g-NW1_PAS
sudo pcs node standby sap-cl1
# Execute below command to cleanup resource, if required
pcs resource cleanup rsc_sap_NW1_ERS01

sudo pcs resource create vip_NW1_AAS IPaddr2 ip=10.90.90.31 --group g-


NW1_AAS
sudo pcs resource create nc_NW1_AAS azure-lb port=62003 --group g-
NW1_AAS

# If using NFS on Azure files


sudo pcs resource create fs_NW1_AAS Filesystem
device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1D03' \
directory='/usr/sap/NW1/D03' fstype='nfs' force_unmount=safe
options='noresvport,vers=4,minorversion=1,sec=sys' \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=40 \
--group g-NW1_AAS

# If using NFsv3 on Azure NetApp Files


sudo pcs resource create fs_NW1_AAS Filesystem
device='10.90.91.5:/sapnw1/usrsapNW1D03' \
directory='/usr/sap/NW1/D03' fstype='nfs' force_unmount=safe \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=40 \
--group g-NW1_AAS

# If using NFSv4.1 on Azure NetApp Files


sudo pcs resource create fs_NW1_AAS Filesystem
device='10.90.91.5:/sapnw1/usrsapNW1D03' \
directory='/usr/sap/NW1/D03' fstype='nfs' force_unmount=safe
options='sec=sys,vers=4.1' \
op start interval=0 timeout=60 \
op stop interval=0 timeout=120 \
op monitor interval=200 timeout=105 \
--group g-NW1_AAS

Make sure that the cluster status is okay and that all resources are started. It isn't
important on which node the resources are running. Because the g-NW1_PAS
resource group is disabled, all the PAS resources are stopped and shown in the
(disabled) state.

Bash

sudo pcs status

# Node List:
# Node sap-cl1: standby
# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Stopped
(disabled)
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Stopped
(disabled)
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Stopped
(disabled)
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Stopped
(disabled)
# Resource Group: g-NW1_AAS:
# vip_NW1_AAS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# nc_NW1_AAS (ocf::heartbeat:azure-lb): Started sap-
cl2
# fs_NW1_AAS (ocf::heartbeat:Filesystem): Started sap-
cl2

3. [2] Change the ownership of the /usr/sap/SID/D03 folder after the file system is
mounted.

Bash

sudo chown nw1adm:sapsys /usr/sap/NW1/D03

4. [2] Install an SAP Netweaver AAS.

Install an SAP NetWeaver AAS as the root on the second node by using a virtual
host name that maps to the IP address of the load balancer front-end
configuration for the AAS. For example, use sapaas, 10.90.90.31, and the instance
number that you used for the probe of the load balancer, for example, 03.

You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a nonroot
user to connect to sapinst.

Bash

# Allow access to SWPM. This rule is not permanent. If you reboot the
# machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp

sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<aas_virtual_hostname>

5. Update the /usr/sap/sapservices file.

To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.

Bash

sudo vi /usr/sap/sapservices

# On the node where AAS is installed, comment out the following line:
#LD_LIBRARY_PATH=/usr/sap/NW1/D03/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/NW1/D03/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_D03_sapaas -D -u nw1adm

6. [2] Create an AAS cluster resource.

Bash

# If using NFS on Azure Files or NFSv3 on Azure NetApp Files
pcs resource create rsc_sap_NW1_AAS03 SAPInstance InstanceName="NW1_D03_sapaas" \
  START_PROFILE=/sapmnt/NW1/profile/NW1_D03_sapaas \
  op monitor interval=120 timeout=60 \
  --group g-NW1_AAS

# If using NFSv4.1 on Azure NetApp Files
pcs resource create rsc_sap_NW1_AAS03 SAPInstance InstanceName="NW1_D03_sapaas" \
  START_PROFILE=/sapmnt/NW1/profile/NW1_D03_sapaas \
  op monitor interval=120 timeout=105 \
  --group g-NW1_AAS

Check the status of the cluster.

Bash

sudo pcs status

# Node List:
# Node sap-cl1: standby
# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure     (stonith:fence_azure_arm):     Started sap-cl2
# Resource Group: g-NW1_ASCS
#     fs_NW1_ASCS          (ocf::heartbeat:Filesystem):    Started sap-cl2
#     nc_NW1_ASCS          (ocf::heartbeat:azure-lb):      Started sap-cl2
#     vip_NW1_ASCS         (ocf::heartbeat:IPaddr2):       Started sap-cl2
#     rsc_sap_NW1_ASCS00   (ocf::heartbeat:SAPInstance):   Started sap-cl2
# Resource Group: g-NW1_AERS
#     fs_NW1_AERS          (ocf::heartbeat:Filesystem):    Started sap-cl2
#     nc_NW1_AERS          (ocf::heartbeat:azure-lb):      Started sap-cl2
#     vip_NW1_AERS         (ocf::heartbeat:IPaddr2):       Started sap-cl2
#     rsc_sap_NW1_ERS01    (ocf::heartbeat:SAPInstance):   Started sap-cl2
# Resource Group: g-NW1_PAS:
#     vip_NW1_PAS          (ocf::heartbeat:IPaddr2):       Stopped (disabled)
#     nc_NW1_PAS           (ocf::heartbeat:azure-lb):      Stopped (disabled)
#     fs_NW1_PAS           (ocf::heartbeat:Filesystem):    Stopped (disabled)
#     rsc_sap_NW1_PAS02    (ocf::heartbeat:SAPInstance):   Stopped (disabled)
# Resource Group: g-NW1_AAS:
#     vip_NW1_AAS          (ocf::heartbeat:IPaddr2):       Started sap-cl2
#     nc_NW1_AAS           (ocf::heartbeat:azure-lb):      Started sap-cl2
#     fs_NW1_AAS           (ocf::heartbeat:Filesystem):    Started sap-cl2
#     rsc_sap_NW1_AAS03    (ocf::heartbeat:SAPInstance):   Started sap-cl2

7. Configure a constraint to start the AAS resource group only after the ASCS
instance is started.

Bash

sudo pcs constraint order g-NW1_ASCS then g-NW1_AAS kind=Optional symmetrical=false

Post configuration for PAS and AAS instances
1. [1] For PAS and AAS to run on either cluster node (sap-cl1 or sap-cl2), the contents
of $HOME/.hdb for <sid>adm must be copied between both cluster nodes.

Bash

# Check the current content of /home/nw1adm/.hdb on sap-cl1
sap-cl1:nw1adm > ls -ltr $HOME/.hdb
drwx------. 2 nw1adm sapsys 66 Aug 8 19:11 sappas
drwx------. 2 nw1adm sapsys 84 Aug 8 19:12 sap-cl1

# Check the current content of /home/nw1adm/.hdb on sap-cl2
sap-cl2:nw1adm > ls -ltr $HOME/.hdb
total 0
drwx------. 2 nw1adm sapsys 64 Aug 8 20:25 sap-cl2
drwx------. 2 nw1adm sapsys 66 Aug 8 20:26 sapaas

# Because PAS and AAS are installed using virtual host names, you need to copy the
# virtual host name directories in /home/nw1adm/.hdb.
# Copy the sappas directory from sap-cl1 to sap-cl2.
sap-cl1:nw1adm > scp -r sappas nw1adm@sap-cl2:/home/nw1adm/.hdb
# Copy the sapaas directory from sap-cl2 to sap-cl1. Execute the command from the same sap-cl1 host.
sap-cl1:nw1adm > scp -r nw1adm@sap-cl2:/home/nw1adm/.hdb/sapaas .

2. [1] To ensure the PAS and AAS instances don't run on the same nodes whenever
both nodes are running, add a negative colocation constraint with the following
command:

Bash

sudo pcs constraint colocation add g-NW1_AAS with g-NW1_PAS score=-1000


sudo pcs node unstandby sap-cl1
sudo pcs resource enable g-NW1_PAS

The score of -1000 ensures that if only one node is available, both the instances
continue to run on the other node. If you want to keep the AAS instance down in
such a situation, you can use score=-INFINITY to enforce this condition.
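
The following is a minimal sketch (not part of the original steps) of how you could switch to the stricter constraint; the constraint ID is a placeholder that you look up first, and group names follow this article's examples:

Bash

# Illustrative only: replace the score=-1000 rule with a strict one so that AAS
# stays stopped rather than sharing a node with PAS when only one node is available.
sudo pcs constraint list --full            # note the ID of the g-NW1_AAS/g-NW1_PAS colocation constraint
sudo pcs constraint remove <constraint-id> # placeholder for the ID found above
sudo pcs constraint colocation add g-NW1_AAS with g-NW1_PAS score=-INFINITY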

3. Check the status of the cluster.

Bash

sudo pcs status

# Node List:
# Online: [ sap-cl1 sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure     (stonith:fence_azure_arm):     Started sap-cl2
# Resource Group: g-NW1_ASCS
#     fs_NW1_ASCS          (ocf::heartbeat:Filesystem):    Started sap-cl2
#     nc_NW1_ASCS          (ocf::heartbeat:azure-lb):      Started sap-cl2
#     vip_NW1_ASCS         (ocf::heartbeat:IPaddr2):       Started sap-cl2
#     rsc_sap_NW1_ASCS00   (ocf::heartbeat:SAPInstance):   Started sap-cl2
# Resource Group: g-NW1_AERS
#     fs_NW1_AERS          (ocf::heartbeat:Filesystem):    Started sap-cl1
#     nc_NW1_AERS          (ocf::heartbeat:azure-lb):      Started sap-cl1
#     vip_NW1_AERS         (ocf::heartbeat:IPaddr2):       Started sap-cl1
#     rsc_sap_NW1_ERS01    (ocf::heartbeat:SAPInstance):   Started sap-cl1
# Resource Group: g-NW1_PAS:
#     vip_NW1_PAS          (ocf::heartbeat:IPaddr2):       Started sap-cl1
#     nc_NW1_PAS           (ocf::heartbeat:azure-lb):      Started sap-cl1
#     fs_NW1_PAS           (ocf::heartbeat:Filesystem):    Started sap-cl1
#     rsc_sap_NW1_PAS02    (ocf::heartbeat:SAPInstance):   Started sap-cl1
# Resource Group: g-NW1_AAS:
#     vip_NW1_AAS          (ocf::heartbeat:IPaddr2):       Started sap-cl2
#     nc_NW1_AAS           (ocf::heartbeat:azure-lb):      Started sap-cl2
#     fs_NW1_AAS           (ocf::heartbeat:Filesystem):    Started sap-cl2
#     rsc_sap_NW1_AAS03    (ocf::heartbeat:SAPInstance):   Started sap-cl2

Test the cluster setup

Thoroughly test your Pacemaker cluster by running the typical failover tests, for
example as sketched below.
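
A minimal failover smoke test might look like the following sketch (node names follow this article's examples; this isn't part of the original guide):

Bash

# Illustrative failover smoke test.
sudo pcs node standby sap-cl1     # drive all resources from sap-cl1 to sap-cl2
sudo pcs status                   # confirm the PAS/AAS and other groups recover on sap-cl2
sudo pcs node unstandby sap-cl1   # reinstate the node
sudo pcs status                   # confirm PAS and AAS separate again per the colocation constraint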
Deploy SAP ASCS/ERS with SAP HANA
high-availability VMs on RHEL
Article • 02/29/2024

This article describes how to install and configure SAP HANA along with ABAP SAP
Central Services (ASCS)/SAP Central Services (SCS) and Enqueue Replication Server (ERS)
instances on the same high-availability cluster running on Red Hat Enterprise Linux
(RHEL).

References
Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in
Pacemaker
Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL
7.5+ and RHEL 8
SAP Note 1928533 , which has:
A list of Azure virtual machine (VM) sizes that are supported for the deployment
of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 lists the recommended OS settings for Red Hat Enterprise
Linux 7.x.
SAP Note 2772999 lists the recommended OS settings for Red Hat Enterprise
Linux 8.x.
SAP Note 2009879 has SAP HANA guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP NetWeaver in Pacemaker cluster
General RHEL documentation:
High-Availability Add-On Overview
High-Availability Add-On Administration
High Availability Add-On Reference
Azure-specific RHEL documentation:
Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure

Overview
This article describes the cost-optimization scenario where you deploy SAP HANA, SAP
ASCS/SCS, and SAP ERS instances in the same high-availability setup. To minimize the
number of VMs for a single SAP system, you install SAP ASCS/SCS and SAP ERS on the
same hosts where SAP HANA is running. With SAP HANA configured in a high-availability
cluster setup, you want SAP ASCS/SCS and SAP ERS also to be managed by the cluster.
The configuration is an addition to an already configured SAP HANA cluster setup. In
this setup, SAP ASCS/SCS and SAP ERS are installed on a virtual host name, and their
instance directories are managed by the cluster.

The presented architecture showcases NFS on Azure Files or Azure NetApp Files for a
highly available instance directory for the setup.

The example shown in this article to describe deployment uses the following system
information:

Instance name                       Instance number   Virtual host name   Virtual IP (probe port)
SAP HANA DB                         03                saphana             10.66.0.13 (62503)
ABAP SAP Central Services (ASCS)    00                sapascs             10.66.0.20 (62000)
Enqueue Replication Server (ERS)    01                sapers              10.66.0.30 (62101)
SAP HANA system identifier          HN1               ---                 ---
SAP system identifier               NW1               ---                 ---

7 Note

Install SAP dialog instances (PAS and AAS) on separate VMs.

Important considerations for the cost-optimization solution

SAP dialog instances (PAS and AAS) (like sapa01 and sapa02) should be installed
on separate VMs. Install SAP ASCS and ERS with virtual host names. To learn more
about how to assign a virtual host name to a VM, see the blog Use SAP Virtual
Host Names with Linux in Azure .
With a HANA DB, ASCS/SCS, and ERS deployment in the same cluster setup, the
instance numbers of HANA DB, ASCS/SCS, and ERS must be different.
Consider sizing your VM SKUs appropriately based on the sizing guidelines. You
must factor in the cluster behavior where multiple SAP instances (HANA DB,
ASCS/SCS, and ERS) might run on a single VM when another VM in the cluster is
unavailable.
You can use different storage (for example, Azure NetApp Files or NFS on Azure
Files) to install the SAP ASCS and ERS instances.

7 Note

For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on
NFS on Azure Files. Database file systems like /hana/data and /hana/log aren't
supported on NFS on Azure Files.

To install more application servers on separate VMs, you can either use NFS shares
or a local managed disk for an instance directory file system. If you're installing
more application servers for an SAP J2EE system, /usr/sap/<SID>/J<nr> on NFS on
Azure Files isn't supported.
See NFS on Azure Files considerations and Azure NetApp Files considerations
because the same considerations apply to this setup.

Prerequisites
The configuration described in this article is an addition to your already-configured SAP
HANA cluster setup. In this configuration, an SAP ASCS/SCS and ERS instance are
installed on a virtual host name. The instance directory is managed by the cluster.

Install a HANA database and set up a HANA system replication (HSR) and Pacemaker
cluster by following the steps in High availability of SAP HANA on Azure VMs on Red
Hat Enterprise Linux or High availability of SAP HANA Scale-up with Azure NetApp Files
on Red Hat Enterprise Linux depending on what storage option you're using.

After you install, configure, and set up the HANA Cluster, follow the next steps to install
ASCS and ERS instances.

Configure Azure Load Balancer for ASCS and ERS
This article assumes that you already configured the load balancer for a HANA cluster
setup as described in Configure Azure Load Balancer. In the same Azure Load Balancer
instance, follow these steps to create more front-end IPs and load-balancing rules for
ASCS and ERS.

1. Open the internal load balancer that was created for SAP HANA cluster setup.
2. Frontend IP Configuration: Create two front-end IPs, one for ASCS and another for
ERS (for example, 10.66.0.20 and 10.66.0.30).
3. Backend Pool: This pool remains the same because we're deploying ASCS and ERS
on the same back-end pool.
4. Inbound rules: Create two load-balancing rules, one for ASCS and another for ERS.
Follow the same steps for both load-balancing rules.
5. Frontend IP address: Select the front-end IP.
a. Backend pool: Select the back-end pool.
b. High availability ports: Select this option.
c. Protocol: Select TCP.
d. Health Probe: Create a health probe with the following details (applies for both
ASCS and ERS):
i. Protocol: Select TCP.
ii. Port: For example, 620<Instance-no.> for ASCS and 621<Instance-no.> for
ERS.
iii. Interval: Enter 5.
iv. Probe Threshold: Enter 2.
e. Idle timeout (minutes): Enter 30.
f. Enable Floating IP: Select this option.

The health probe configuration property numberOfProbes, otherwise known as
Unhealthy threshold in the Azure portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property probeThreshold to 2. It's
currently not possible to set this property by using the Azure portal. Use either the
Azure CLI or the PowerShell command.
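
As a sketch only (not from the original article): the resource names below (rg-sap, lb-hana, fe-ascs, bepool-hana) are assumptions, and the --probe-threshold flag requires a recent Azure CLI version, so verify flag support before use.

Bash

# Illustrative: create a TCP health probe with probeThreshold=2 and an
# HA-ports load-balancing rule with floating IP for the ASCS front end.
az network lb probe create --resource-group rg-sap --lb-name lb-hana --name probe-ascs \
  --protocol Tcp --port 62000 --interval 5 --probe-threshold 2
az network lb rule create --resource-group rg-sap --lb-name lb-hana --name rule-ascs \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fe-ascs --backend-pool-name bepool-hana \
  --probe-name probe-ascs --floating-ip true --idle-timeout 30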

) Important

Floating IP isn't supported on a NIC secondary IP configuration in load-balancing
scenarios. For more information, see Azure Load Balancer limitations. If you need
more IP addresses for the VMs, deploy a second NIC.

When VMs without public IP addresses are placed in the back-end pool of an internal
(no public IP address) Standard Azure Load Balancer instance, there's no outbound
internet connectivity unless more configuration is performed to allow routing to public
endpoints. For steps on how to achieve outbound connectivity, see Public endpoint
connectivity for virtual machines using Azure Standard Load Balancer in SAP high-
availability scenarios.

) Important

Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.

SAP ASCS/SCS and ERS setup

Based on your storage, follow the steps described in the following articles to configure a
SAPInstance resource for the SAP ASCS/SCS and SAP ERS instances in the cluster.

NFS on Azure Files: Azure VMs high availability for SAP NW on RHEL with NFS on
Azure Files
Azure NetApp Files: Azure VMs high availability for SAP NW on RHEL with Azure
NetApp Files

Test the cluster setup

Thoroughly test your Pacemaker cluster:

Run the typical SAP NetWeaver failover tests.
Run the typical HANA DB failover tests.
Add an HSR third site to a HANA
Pacemaker cluster
Article • 02/27/2024

This article describes requirements and setup of a third HANA replication site to
complement an existing Pacemaker cluster. Both SUSE Linux Enterprise Server (SLES) and
Red Hat Enterprise Linux (RHEL) specifics are covered.

Overview
SAP HANA supports system replication (HSR) with more than two sites connected. You
can add a third site to an existing HSR pair, managed by Pacemaker in a highly available
setup. You can deploy the third site in a second Azure region for disaster recovery (DR)
purposes.

Pacemaker and the HANA cluster resource agent manage the first two sites. The
Pacemaker cluster doesn't control the third site.

SAP HANA supports a third system replication site in two modes:

Multitarget replicates data changes from primary to more than one target
system. The third site is connected to primary replication in a star topology.
Multitier is a two-tier replication. A cascading, or chained, setup of three
different HANA tiers. The third site connects to the secondary.

For more conceptual details about HANA HSR within one region and across different
regions, see SAP HANA availability across Azure regions.

Prerequisites for SLES


Requirements for a third HSR site are different for HANA scale-up and HANA scale-out.

7 Note

Requirements in this article are only valid for a Pacemaker-enabled landscape.
Without Pacemaker, SAP HANA version requirements apply to the chosen
replication mode. Pacemaker and the HANA cluster resource agent manage only
two sites. The third HSR site isn't controlled by the Pacemaker cluster.

Both scale-up and scale-out: SAP HANA SPS 04 or newer is required to use
multitarget HSR with a Pacemaker cluster.
Both scale-up and scale-out: Maximum of one SAP HANA system replication
connected from outside the Linux cluster.
HANA scale-out only: SLES 15 SP1 or higher.
HANA scale-out only: Operating system (OS) package SAPHanaSR-ScaleOut
version 0.180 or higher.
HANA scale-out only: SAP HANA high-availability (HA) hook
SAPHanaSrMultiTarget in use. The previous HANA HA hook SAPHanaSR isn't
multitarget aware for scale-out.

Prerequisites for RHEL


Requirements for a third HSR site are different for HANA scale-up and HANA scale-out.

7 Note

Requirements in this article are only valid for a Pacemaker-enabled landscape.
Without Pacemaker, SAP HANA version requirements apply for the chosen
replication mode. Pacemaker and the HANA cluster resource agent manage only
two sites. The third HSR site isn't controlled by the Pacemaker cluster.

HANA scale-up only: See RedHat support policies for RHEL HA clusters for
details on the minimum OS, SAP HANA, and cluster resource agents version.
HANA scale-out only: HANA multitarget replication isn't supported on Azure with
a Pacemaker cluster.

HANA scale-up: Add HANA multitarget system replication for DR purposes
With SAP HANA HA hook SAPHanaSR for SLES and RHEL, you can add a third node for
DR purposes. The Pacemaker environment is aware of a HANA multitarget DR setup.

Failure of the third node won't trigger any cluster action. The cluster detects the
replication status of connected sites, and the monitored attribute for the third site can
change between the SOK and SFAIL states. Before you run any takeover tests to the
third/DR site or execute your DR exercise process, first place the cluster resources into
maintenance mode to prevent any undesired cluster action.
The following example shows a multitarget system replication system. For more
information, see SAP documentation .

1. Deploy Azure resources for the third node. Depending on your requirements, you
can use a different Azure region for DR purposes.

The steps required for the third site are similar to those for the virtual machines
(VMs) of the HANA scale-up cluster. The third site uses Azure infrastructure. The OS
and HANA version match the existing Pacemaker cluster, with the following exceptions:

No load balancer is deployed for the third site. There's no integration with
the existing cluster load balancer for the VM of the third site.
Don't install OS packages SAPHanaSR, SAPHanaSR-doc, and the OS package
pattern ha_sles on the third site VM.
No integration into the cluster for VM or HANA resources of the third site.
No HANA HA hook setup for the third site in global.ini.

2. Install SAP HANA on the third node.

The same HANA SID and HANA installation number must be used for the third site.

3. With SAP HANA on the third site installed and running, register the third site with
the primary site.

The following example uses SITE-DR as the name for the third site.

Bash

# Execute on the third site
su - hn1adm
# Register the HANA third site to the primary. The switch --online will
# shut down the HANA instance on the third site.
hdbnsutil -sr_register --name=SITE-DR --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=async --online

4. Verify that the HANA system replication shows the secondary site and the third
site.

Bash

# Verify HANA HSR is in sync, execute on primary
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

5. Check the SAPHanaSR attribute for the third site. SITE-DR should show up with the
status SOK in the Sites section.

Bash

# Check the SAPHanaSR attribute on any cluster-managed host (first or second site)
sudo SAPHanaSR-showAttr
# Example result
# Global cib-time                 maintenance
# --------------------------------------------
# global Tue Feb 21 19:28:21 2023 false
#
# Sites     srHook
# -----------------
# HN1-SITE1 PRIM
# HN1-SITE2 SOK
# SITE-DR   SOK

The cluster detects the replication status of connected sites. The monitored
attributes can change between SOK and SFAIL. There's no cluster action if the
replication to the DR site fails.

HANA scale-out: Add HANA multitarget system replication for DR purposes
With the SAP HANA HA provider SAPHanaSrMultiTarget, you can add a third HANA
scale-out site. This third site is often used for DR in another Azure region. The
Pacemaker environment is aware of a HANA multitarget DR setup. This section applies
to systems running Pacemaker on SUSE only. See the "Prerequisites" section in this
document for details.
Failure of the third node won't trigger any cluster action. The cluster detects the
replication status of connected sites, and the monitored attribute for the third site can
change between the SOK and SFAIL states. Before you run any takeover tests to the
third/DR site or execute your DR exercise process, first place the cluster resources into
maintenance mode to prevent any undesired cluster action.

The following example shows a multitarget system replication system. For more
information, see SAP documentation .

1. Deploy Azure resources for the third site. Depending on your requirements, you
can use a different Azure region for DR purposes.

Steps required for the HANA scale-out on the third site mirror the steps to deploy
the HANA scale-out cluster. The third site uses Azure infrastructure, OS, and HANA
installation steps for SITE1 of the scale-out cluster, with the following exceptions:

No load balancer is deployed for the third site. There's no integration with
the existing cluster load balancer for the VMs of the third site.
Don't install the OS packages SAPHanaSR-ScaleOut, SAPHanaSR-ScaleOut-
doc, and the OS package pattern ha_sles on the third site VMs.
No majority maker VM for the third site because there's no cluster
integration.
Create the NFS volume /hana/shared for the third site's exclusive use.
No integration into the cluster for the VMs or HANA resources of the third
site.
No HANA HA hook setup for the third site in global.ini.
You must use the same HANA SID and HANA installation number for the third site.

2. With SAP HANA scale-out on the third site installed and running, register the third
site with the primary site.

The following example uses SITE-DR as the name for the third site.

Bash

# Execute on the third site
su - hn1adm
# Register the HANA third site to the primary. The switch --online will
# shut down the HANA instance on the third site.
hdbnsutil -sr_register --name=SITE-DR --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=async --online

3. Verify that the HANA system replication shows the secondary site and the third
site.

Bash

# Verify HANA HSR is in sync, execute on primary
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"

4. Check the SAPHanaSR attribute for the third site. SITE-DR should show up with the
status SOK in the Sites section.

Bash

# Check the SAPHanaSR attribute on any cluster-managed host (first or second site)
sudo SAPHanaSR-showAttr
# Expected result
# Global  cib-time                 maintenance prim    sec sync_state upd
# ---------------------------------------------------------------------
# HN1     Fri Jan 27 10:38:46 2023 false       HANA_S1 -   SOK        ok
#
# Sites   lpt        lss mns         srHook srr
# ------------------------------------------------
# SITE-DR                            SOK
# HANA_S1 1674815869 4   hana-s1-db1 PRIM   P
# HANA_S2 30         4   hana-s2-db1 SOK    S

The cluster detects the replication status of connected sites. The monitored
attribute can change between SOK and SFAIL. There's no cluster action if the
replication to the DR site fails.
Autoregister the third site
During a planned or unplanned takeover event between the two Pacemaker cluster sites,
HSR to the third site is also interrupted. Pacemaker doesn't modify HANA replication to
the third site.

Since HANA 2.0 SPS 04, SAP provides the parameter register_secondaries_on_takeover.
With the parameter set to the value true, after an HSR takeover between cluster sites 1
and 2, HANA automatically registers the third site on the new primary to keep the HSR
multitarget setup. Configure register_secondaries_on_takeover = true in the
[system_replication] block of global.ini on both SAP HANA sites in the Linux cluster.
Both SITE1 and SITE2 need the parameter in their respective HANA global.ini
configuration files. The parameter can also be used outside a Pacemaker cluster.
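
As a sketch (not part of the original text; the SID and paths follow this article's HN1 examples), you can set the parameter online via SQL and verify it in the custom global.ini:

Bash

# Illustrative: set the parameter via SQL as the <sid>adm user (for example with hdbsql):
#   ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
#     SET ('system_replication', 'register_secondaries_on_takeover') = 'true' WITH RECONFIGURE;
# Then verify it landed in the custom global.ini on both cluster sites:
sudo su - hn1adm -c "grep -A3 '\[system_replication\]' /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini"
# Expected output includes:
# [system_replication]
# register_secondaries_on_takeover = true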

For HSR multitier, no automatic SAP HANA registration of the third site exists. You
need to manually register the third site to the current secondary to keep the HSR
replication chain for multitier, as shown in the sketch below.
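
The following is a sketch of such a manual re-registration after a takeover; <current-secondary-host> is a placeholder, and the SID and instance number follow this article's examples:

Bash

# Illustrative only: after a takeover, re-register the DR site to the current
# secondary to restore the multitier replication chain.
su - hn1adm
hdbnsutil -sr_register --name=SITE-DR --remoteHost=<current-secondary-host> \
  --remoteInstance=03 --replicationMode=async --online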

Next steps
Disaster recovery overview and infrastructure
Disaster recovery for SAP workloads
High-availability architecture and scenarios for SAP NetWeaver
Exchange Online Integration for Email-
Outbound from SAP NetWeaver
Article • 02/10/2023

Sending emails from your SAP backend is a standard feature widely distributed for use
cases such as alerting for batch jobs, SAP workflow state changes or invoice distribution.
Many customers established the setup using Exchange Server On-Premises. With a shift
to Microsoft 365 and Exchange Online comes a set of cloud-native approaches
impacting that setup.

This article describes the setup for outbound email-communication from NetWeaver-
based SAP systems to Exchange Online. That applies to SAP ECC, S/4HANA, SAP RISE
managed, and any other NetWeaver based system.

Overview
Existing implementations relied on SMTP Auth and elevated trust relationship because
the legacy Exchange Server on-premises could live close to the SAP system itself and
was governed by customers themselves. With Exchange Online there's a shift in
responsibilities and connectivity paradigm. Microsoft supplies Exchange Online as a
Software-as-a-Service offering built to be consumed securely and as effectively as
possible from anywhere in the world over the public Internet.

Follow our standard guide to understand the general configuration of a "device" that
wants to send email via Microsoft 365.

) Important

Microsoft disabled Basic Authentication for Exchange Online as of 2020 for newly
created Microsoft 365 tenants. In addition, the feature gets disabled for
existing tenants with no prior usage of Basic Authentication starting October 2020.
See our developer blog for reference.

) Important

SMTP Auth was exempted from the Basic Auth feature sunset process. However,
this is a security risk for your estate, so we advise against it. See the latest post by
our Exchange Team on the matter.
) Important

Current OAuth support for SMTP is described in our Exchange Server
documentation for legacy protocols.

Setup considerations
Given the sunset exemption of SMTP Auth, there are four different options supported by
SAP NetWeaver that we want to describe. The first three correlate with the scenarios
described in the Exchange Online documentation.

1. SMTP Authentication Client Submission
2. SMTP Direct Send
3. Using Exchange Online SMTP relay connector
4. Using SMTP relay server as intermediary to Exchange Online

For brevity, we refer to the SAP Connect administration tool used for the mail server
setup only by its transaction code SCOT.

Option 1: SMTP Authentication Client Submission
Choose this option when you want to send mail to people inside and outside your
organization.

Connect SAP applications directly to Microsoft 365 using SMTP Auth endpoint
smtp.office365.com in SCOT.

A valid email address will be required to authenticate with Microsoft 365. The email
address of the account that's used to authenticate with Microsoft 365 will appear as the
sender of messages from the SAP application.

Requirements for SMTP AUTH


SMTP AUTH: Needs to be enabled for the mailbox being used. SMTP AUTH is
disabled for organizations created after January 2020 but can be enabled per-
mailbox. For more information, see Enable or disable authenticated client SMTP
submission (SMTP AUTH) in Exchange Online.
Authentication: Use Basic Authentication (which is simply a username and
password) to send email from SAP application. If SMTP AUTH is intentionally
disabled for the organization, you must use Option 2, 3 or 4 below.
Mailbox: You must have a licensed Microsoft 365 mailbox to send email from.
Transport Layer Security (TLS): Your SAP Application must be able to use TLS
version 1.2 and above.
Port: Port 587 (recommended) or port 25 is required and must be unblocked on
your network. Some network firewalls or Internet Service Providers block ports,
especially port 25, because that's the port that email servers use to send mail.
DNS: Use the DNS name smtp.office365.com. Don't use an IP address for the
Microsoft 365 server, as IP Addresses aren't supported.

How to enable SMTP AUTH for mailboxes in Exchange Online

There are two ways to enable SMTP AUTH in Exchange Online:

1. For a single account (per mailbox), overriding the tenant-wide setting, or
2. At the organization level.

7 Note

If your authentication policy disables basic authentication for SMTP, clients can't
use the SMTP AUTH protocol even if you enable the settings outlined in this article.

The per-mailbox setting to enable SMTP AUTH is available in the Microsoft 365 Admin
Center or via Exchange Online PowerShell.

1. Open the Microsoft 365 admin center and go to Users -> Active users.

2. Select the user, follow the wizard, click Mail.

3. In the Email apps section, click Manage email apps.


4. Verify the Authenticated SMTP setting (unchecked = disabled, checked = enabled)
5. Save changes.

This enables SMTP AUTH in Exchange Online for the individual user that you require
for SCOT.

Configure SMTP Auth with SCOT

1. Ping or telnet smtp.office365.com on port 587 from your SAP application server to
make sure ports are open and accessible (see the connectivity sketch after these
steps).

2. Make sure the SAP Internet Communication Manager (ICM) parameter is set in your
instance profile. See the following example:

Parameter           Value
icm/server_port_1   PROT=SMTP,PORT=25000,TIMEOUT=180,TLS=1

3. Restart ICM service from SMICM transaction and make sure SMTP service is active.
4. Activate SAPConnect service in SICF transaction.

5. Go to SCOT and select the SMTP node (double-click) to proceed with the
configuration:

Add mail host smtp.office365.com with port 587. Check the Exchange Online docs
for reference.
Click the "Settings" button (next to the Security field) to add TLS settings and
basic authentication details as mentioned earlier, if required. Make sure your
ICM parameter is set accordingly.

Make sure to use a valid Microsoft 365 email ID and password. In addition, it
needs to be the same user that you enabled for SMTP AUTH at the beginning.
This email ID shows up as the sender.
Coming back to the previous screen: Click the "Set" button and check "Internet"
under "Supported Address Types". Using the wildcard "*" option allows sending
emails to all domains without restriction.
Next step: Set the default domain in SCOT.
6. Schedule a job to send emails in the submission queue. From SCOT, select "Send
Job":

Provide a job name and variant if appropriate.

Test mail submission using transaction code SBWP and check the status using
SOST transaction.
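
For the connectivity check in step 1, a quick sketch (not part of the original article) could be:

Bash

# Quick connectivity checks from the SAP application server (illustrative).
telnet smtp.office365.com 587
# Alternatively, verify that the endpoint is reachable and offers STARTTLS:
openssl s_client -starttls smtp -connect smtp.office365.com:587 -crlf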

Limitations of SMTP AUTH client submission

SCOT stores login credentials for only one user, so only one Microsoft 365 mailbox
can be configured this way. Sending mail via individual SAP users requires
implementing the "Send As" permission offered by Microsoft 365.
Microsoft 365 imposes some sending limits. See Exchange Online limits - Receiving
and sending limits for more information.

Option 2: SMTP Direct Send

Microsoft 365 offers the ability to configure direct send from the SAP application server.
This option is limited in that it only permits mail to be routed to addresses in your own
Microsoft 365 organization with a valid email address; it therefore can't be used for
external recipients (for example, vendors or customers).
Option 3: Using Microsoft 365 SMTP Relay
Connector
Only choose this option when:

Your Microsoft 365 environment has SMTP AUTH disabled.
SMTP client submission (Option 1) isn't compatible with your business needs or
with your SAP application.
You can't use direct send (Option 2) because you must send email to external
recipients.

SMTP relay lets Microsoft 365 relay emails on your behalf by using a connector that's
configured with your public IP address or a TLS certificate. Compared to the other
options, the connector setup increases complexity.

Requirements for SMTP Relay

SAP parameters: SAP instance parameters configured and the SMTP service
activated as explained in Option 1; follow steps 2 to 4 from the "Configure SMTP
Auth with SCOT" section.
Email address: Any email address in one of your Microsoft 365 verified domains.
This email address doesn't need a mailbox. For example,
noreply@*yourdomain*.com.
Transport Layer Security (TLS): The SAP application must be able to use TLS version
1.2 and above.
Port: Port 25 is required and must be unblocked on your network. Some network
firewalls or ISPs block ports, especially port 25, due to the risk of misuse for
spamming.
MX record: Your Mail Exchanger (MX) endpoint, for example,
yourdomain.mail.protection.outlook.com. Find more information in the next
section.
Relay access: A public IP address or SSL certificate is required to authenticate
against the relay connector. To avoid configuring direct access, it's recommended
to use Source Network Address Translation (SNAT) as described in Use Source
Network Address Translation (SNAT) for outbound connections.

Step-by-step configuration instructions for SMTP relay in Microsoft 365
1. Obtain the public (static) IP address of the endpoint which will be sending the mail
using one of the methods listed in the article above. A dynamic IP address isn't
supported or allowed. You can share your static IP address with other devices and
users, but don't share the IP address with anyone outside of your company. Make
a note of this IP address for later.

7 Note

You can find the above information in the Azure portal on the Virtual Machine
overview of the SAP application server.

2. Sign in to the Microsoft 365 Admin Center .

3. Go to Settings -> Domains, select your domain (for example, contoso.com), and
find the Mail Exchanger (MX) record.
The Mail Exchanger (MX) record will have data for Points to address or value that
looks similar to yourdomain.mail.protection.outlook.com .

4. Make a note of the data of Points to address or value for the Mail Exchanger (MX)
record, which we refer to as your MX endpoint.

5. In Microsoft 365, select Admin and then Exchange to go to the new Exchange
Admin Center.

6. The new Exchange Admin Center (EAC) portal opens.

7. In the Exchange Admin Center (EAC), go to Mail flow -> Connectors. If you're
working with the classic EAC, follow step 8 as described in our docs.

8. Click Add a connector.

Choose "Your organization's email server".


9. Click Next. The Connector name screen appears.

10. Provide a name for the connector and click Next. The Authenticating sent email
screen appears.

Choose By verifying that the IP address of the sending server matches one of these IP
addresses which belong exclusively to your organization, and add the IP address
from step 1 of the Step-by-step configuration instructions for SMTP relay in
Microsoft 365 section.
Review and click Create connector.

11. Now that you're done configuring your Microsoft 365 settings, go to your
domain registrar's website to update your DNS records. Edit your Sender Policy
Framework (SPF) record. Include the IP address that you noted in step 1. The
finished string should look similar to v=spf1 ip4:10.5.3.2
include:spf.protection.outlook.com ~all, where 10.5.3.2 is your public IP
address. Skipping this step may cause emails to be flagged as spam and end up in
the recipient's Junk Email folder.

Steps in the SAP application server

1. Make sure the SAP ICM parameter and SMTP service are activated as explained in
Option 1 (steps 2-4).
2. Go to the SCOT transaction and the SMTP node as shown in the previous steps of
Option 1.
3. Add the mail host as the Mail Exchanger (MX) record value noted in step 4 (that is,
yourdomain.mail.protection.outlook.com).

Mail host: yourdomain.mail.protection.outlook.com

Port: 25

4. Click "Settings" next to the Security field and make sure TLS is enabled if possible.
Also make sure no prior logon data regarding SMTP AUTH is present; otherwise,
delete existing records with the corresponding button underneath.
5. Test the configuration using a test email from your SAP application with
transaction SBWP and check the status in the SOST transaction.

Option 4: Using SMTP relay server as intermediary to Exchange Online
An intermediate relay server can be an alternative to a direct connection from the SAP
application server to Microsoft 365. This server can be based on any mail server that will
allow direct authentication and relay services.

The advantage of this solution is that it can be deployed in the hub of a hub-spoke
virtual network within your Azure environment or within a DMZ to protect your SAP
application hosts from direct access. It also allows for centralized outbound routing to
immediately offload all mail traffic to a central relay when sending from multiple
application servers.

The configuration steps are the same as for the Microsoft 365 SMTP relay connector
(Option 3), with the only difference being that the SCOT configuration should reference
the mail host that performs the relay rather than connecting directly to Microsoft 365.
Depending on the mail system used for the relay, it's also configured to connect
directly to Microsoft 365 using one of the supported methods and a valid user with
password. It's recommended to send a test mail from the relay directly, to ensure it
can communicate successfully with Microsoft 365, before completing the SAP SCOT
configuration and testing as normal.

The example architecture shown illustrates multiple SAP application servers with a
single mail relay host in the hub. Depending on the volume of mail to be sent, it's
recommended to follow a detailed sizing guide for the mail vendor used as the relay.
This may require multiple mail relay hosts operating behind an Azure Load Balancer.

Next Steps
Understand mass-mailing with Azure Twilio - SendGrid

Understand Exchange Online Service limitations (e.g., attachment size, message limits,
throttling etc.)
Scenario - Using Microsoft Entra ID to
secure access to SAP platforms and
applications
Article • 10/23/2023

This document provides advice on the technical design and configuration of SAP
platforms and applications when using Microsoft Entra ID as the primary user
authentication service for SAP Cloud Identity Services . SAP Cloud Identity Services
includes Identity Authentication, Identity Provisioning, Identity Directory, and
Authorization Management. Learn more about the initial setup for authentication in the
Microsoft Entra single sign-on (SSO) integration with SAP Cloud Identity Services
tutorial. For more information on provisioning and other scenarios, see plan deploying
Microsoft Entra for user provisioning with SAP source and target applications and
manage access to your SAP applications.

Terminology used in this guide



Abbreviation Description

BTP SAP Business Technology Platform is an innovation platform optimized for SAP
applications in the cloud. Most of the SAP technologies discussed here are part of
BTP. The products formerly known as SAP Cloud Platform are part of SAP BTP.
IAS SAP Cloud Identity Services - Identity Authentication, a component of SAP Cloud
Identity Services, is a cloud service for authentication, single sign-on and user
management in SAP cloud and on-premises applications. IAS helps users
authenticate to their own SAP BTP service instances, as a proxy that integrates
with Microsoft Entra single-sign on.

IPS SAP Cloud Identity Services - Identity Provisioning, a component of SAP Cloud
Identity Services, is a cloud service that helps you provision identities and their
authorization to SAP cloud and on-premises application.

XSUAA Extended Services for Cloud Foundry User Account and Authentication. Cloud
Foundry , a platform as a service (PaaS) that can be deployed on different
infrastructures, is the environment on which SAP built SAP Business Technology
Platform. XSUAA is a multitenant OAuth authorization server that is the central
infrastructure component of the Cloud Foundry environment. XSUAA provides for
business user authentication and authorization within the SAP BTP.

Fiori The web-based user experience of SAP (as opposed to the desktop-based
experience).

Overview
There are many services and components in the SAP and Microsoft technology stack
that play a role in user authentication and authorization scenarios. The main services are
listed in the diagram below.

Since there are many permutations of possible scenarios to be configured, we focus on
one scenario that is in line with a Microsoft Entra identity-first strategy. We make the
following assumptions:

You want to govern all your identities centrally and only from Microsoft Entra ID.
You want to reduce maintenance efforts as much as possible and automate
authentication and app access across Microsoft and SAP.
The general guidance for Microsoft Entra ID with IAS applies for apps deployed on
BTP and SAP SaaS apps configured in IAS. Specific recommendations will also be
provided where applicable to BTP (for example, using role mappings with
Microsoft Entra groups) and SAP SaaS apps (for example, using identity
provisioning service for role-based authorization).
We also assume that users are already provisioned in Microsoft Entra ID and
towards any SAP systems that require users to be provisioned to function.
Regardless of how that was achieved: provisioning could have been done
manually, from on-premises Active Directory through Microsoft Entra Connect, or
through HR systems like SAP SuccessFactors. In this document therefore,
SuccessFactors is considered to be an application like any other that (existing)
users will sign on to. We don't cover actual provisioning of users from
SuccessFactors into Microsoft Entra ID.

Based on these assumptions, we focus mostly on the products and services presented in
the diagram below. These are the various components that are most relevant to
authentication and authorization in a cloud-based environment.

7 Note

Most of the guidance here applies to Azure Active Directory B2C as well, but there
are some important differences. For more information, see Using Azure AD B2C as
the Identity Provider.

2 Warning

Be aware of the SAP SAML assertion limits, and the impact of the length of SAP Cloud
Foundry role collection names and the number of collections proxied by groups in SAP
Cloud Identity Services. For more information, see SAP note 2732890 in SAP for
Me. Exceeded limits result in authorization issues.

Recommendations

Summary
1 - Use Federated Authentication in SAP Business Technology Platform and SAP
SaaS applications through SAP Identity Authentication Service
2 - Use Microsoft Entra ID for Authentication and IAS/BTP for Authorization
3 - Use Microsoft Entra groups for Authorization through Role Collections in
IAS/BTP
4 - Use a single BTP Subaccount only for applications that have similar Identity
requirements
5 - Use the Production IAS tenant for all end user Authentication and Authorization
6 - Define a Process for Rollover of SAML Signing Certificates

1 - Use Federated Authentication in SAP Business Technology Platform and SAP SaaS applications through SAP Identity Authentication Service

Context

Your applications in BTP can use identity providers through Trust Configurations to
authenticate users by using the SAML 2.0 protocol between BTP/XSUAA and the identity
provider. Note that only SAML 2.0 is supported, even though the OpenID Connect
protocol is used between the application itself and BTP/XSUAA (not relevant in this
context).

In BTP, you can choose to set up a trust configuration towards SAP Cloud Identity
Services - Identity Authentication (which is the default) but when your authoritative user
directory is Microsoft Entra ID, you can set up federation so that users can sign in with
their existing Microsoft Entra accounts.

On top of federation, you can optionally also set up user provisioning so that Microsoft
Entra users are provisioned upfront in BTP. However, there's no native support for this
(only for Microsoft Entra ID -> SAP Identity Authentication Service); an integrated
solution with native support would be the BTP Identity Provisioning Service. Provisioning
user accounts upfront could be useful for authorization purposes (for example, to add
users to roles). Depending on requirements however, you can also achieve this with
Microsoft Entra groups (see below) which could mean you don't need user provisioning
at all.

When setting up the federation relationship, there are multiple options:

You can choose to federate towards Microsoft Entra ID directly from BTP/XSUAA.
You can choose to federate with IAS that in turn is set up to federate with
Microsoft Entra ID as a Corporate Identity Provider (also known as "SAML
Proxying").

For SAP SaaS applications IAS is provisioned and pre-configured for easy onboarding of
end users. (Examples of this include SuccessFactors, Marketing Cloud, Cloud for
Customer, Sales Cloud, and others.) This scenario is less complex, because IAS is directly
connected with the target app and not proxied to XSUAA. In any case, the same rules
apply for this setup as for Microsoft Entra ID with IAS in general.

What are we recommending?


When your authoritative user directory is Microsoft Entra ID, we recommend setting up
a trust configuration in BTP towards IAS. IAS in turn is set up to federate with Microsoft
Entra ID as a Corporate Identity Provider.

On the trust configuration in BTP, we recommend that "Create Shadow Users During
Logon" is enabled. This way, users who haven't yet been created in BTP, automatically
get an account when they sign in through IAS / Microsoft Entra ID for the first time. If
this setting would be disabled, only pre-provisioned users would be allowed to sign in.

Why this recommendation?


When using federation, you can choose to define the trust configuration at the BTP
Subaccount level. In that case, you must repeat the configuration for each other
Subaccount you're using. By using IAS as an intermediate trust configuration, you
benefit from centralized configuration across multiple Subaccounts and you can use IAS
features such as risk-based authentication and centralized enrichment of assertion
attributes . To safeguard the user experience, these advanced security features should
only be enforced at a single location. This could either be IAS or when keeping
Microsoft Entra ID as the single authoritative user store (as is the premise of this paper),
this would centrally be handled by Microsoft Entra Conditional Access Management.
Note: to IAS, every Subaccount is considered to be an "application", even though within
that Subaccount one or more applications could be deployed. Within IAS, every such
application can be set up for federation with the same corporate identity provider
(Microsoft Entra ID in this case).

Summary of implementation

In Microsoft Entra ID:

Optionally configure Microsoft Entra ID for seamless single sign-on (Seamless
SSO), which automatically signs users in when they're on their corporate devices
connected to your corporate network. When enabled, users don't need to type in
their passwords to sign in to Microsoft Entra ID, and usually don't even need to
type in their usernames.

In Microsoft Entra ID and IAS:

Follow the documentation to connect Microsoft Entra ID to IAS in federation
(proxy) mode (SAP doc, Microsoft doc). Watch out for the NameID setting on
your SSO config in Microsoft Entra ID, because UPNs aren't necessarily email
addresses.
Configure the "Bundled Application" to use Microsoft Entra ID by going to the
"Conditional Authentication " page and setting the "Default Authenticating
Identity Provider" to the Corporate Identity Provider representing your Microsoft
Entra directory.

In BTP:

Set up a trust configuration towards IAS (SAP doc) and ensure that "Available for
User Logon" and "Create Shadow Users During Logon" are both enabled.
Optionally, disable "Available for User Logon" on the default "SAP ID Service" trust
configuration so that users always authenticate via Microsoft Entra ID and aren't
presented with a screen to choose their identity provider.

2 - Use Microsoft Entra ID for Authentication and IAS/BTP for Authorization

Context

When BTP and IAS have been configured for user authentication via federation towards
Microsoft Entra ID, there are multiple options for configuring authorization:
In Microsoft Entra ID, you can assign Microsoft Entra users and groups to the
Enterprise Application representing your SAP IAS instance in Microsoft Entra ID.
In IAS, you can use Risk-based Authentication to allow or block sign-ins and by
doing that preventing access to the application in BTP.
In BTP, you can use Role Collections to define which users and groups can access
the application and get certain roles.

What are we recommending?


We recommend that you don't put any authorization directly in Microsoft Entra itself
and explicitly turn off "User assignment required" on the Enterprise Application in
Microsoft Entra ID. Note that for SAML applications, this setting is enabled by default, so
you must take explicit action to disable it.

Why this recommendation?

When the application is federated through IAS, from the point of view of Microsoft Entra
ID the user is essentially "authenticating to IAS" during the sign-in flow. This means that
Microsoft Entra ID has no information about which final BTP application the user is
trying to sign in to. That also implies that authorization in Microsoft Entra ID can only be
used to do very coarse-grained authorization, for example allowing the user to sign in to
any application in BTP, or to none. This also emphasizes SAP's strategy to isolate apps
and authentication mechanisms on the BTP Subaccount level.

While that could be a valid reason for using "User assignment required", it does mean
there are now potentially two different places where authorization information needs to
be maintained: both in Microsoft Entra ID on the Enterprise Application (where it applies
to all BTP applications), as well as in each BTP Subaccount. This could lead to confusion
and misconfigurations where authorization settings are updated in one place but not
the other. For example: a user was allowed in BTP but not assigned to the application in
Microsoft Entra ID resulting in a failed authentication.

Summary of implementation

On the Microsoft Entra Enterprise Application representing the federation relation with
IAS, disable "User assignment required". This also means you can safely skip assignment
of users.
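
As a sketch (not part of the original guidance), disabling this setting can also be scripted against the Microsoft Graph API; <sp-object-id> is a placeholder for the object ID of the Enterprise Application's service principal:

Bash

# Illustrative: turn off "User assignment required" (appRoleAssignmentRequired)
# on the service principal that represents the federation relation with IAS.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>" \
  --headers "Content-Type=application/json" \
  --body '{"appRoleAssignmentRequired": false}'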

3 - Use Microsoft Entra groups for Authorization through Role Collections in IAS/BTP
Context
When you want to configure authorization for your BTP applications, there are multiple
options:

You can configure fine-grained access control inside the application itself, based
on the signed-in user.
You can specify access through Roles and Role Collections in BTP, based on user
assignments or group assignments.

The final implementation can use a combination of both strategies. However, for the
assignment through Role Collections, this can be done on a user-by-user basis, or one
can use groups of the configured identity provider.

What are we recommending?


If you want to use Microsoft Entra ID as the authoritative source for fine-grained
authorization, we recommend using Microsoft Entra groups and assigning them to Role
Collections in BTP. Granting users access to certain applications then simply means
adding them to the relevant Microsoft Entra group(s) without any further configuration
required in IAS/BTP.

With this configuration, we recommend using the Microsoft Entra group's Group ID
(Object ID) as the unique identifier of the group, not the display name
("sAMAccountName"). This means you must use the Group ID as the "Groups" assertion
in the SAML token issued by Microsoft Entra ID. In addition the Group ID is used for the
assignment to the Role Collection in BTP.
Why this recommendation?
If you would assign users directly to Role Collections in BTP, you aren't centralizing
authorization decisions in Microsoft Entra ID. It also means the user must already exist in
IAS before they can be assigned to a Role Collection in BTP - and given that we
recommend federation instead of user provisioning this means the user's shadow
account may not exist yet in IAS at the time you want to do the user assignment. Using
Microsoft Entra groups and assigning them to Role Collections eliminates these issues.

Assigning groups to Role Collections may seem to contradict the prior recommendation
to not use Microsoft Entra ID for authorization. Even in this case however, the
authorization decision is still being taken in BTP, it's just that the decision is now based
on group membership maintained in Microsoft Entra ID.

We recommend using the Microsoft Entra group's Group ID rather than its name
because the Group ID is globally unique, immutable and can never be reused for
another group later on; whereas using the group name could lead to issues when the
name is changed, and there's a security risk in having a group being deleted and
another one getting created with the same name but with users in it that should have
no access to the application.

Summary of implementation
In Microsoft Entra ID:
Create groups to which users can be added that need access to applications in BTP
(for example, create a Microsoft Entra group for each Role Collection in BTP).
On the Microsoft Entra Enterprise Application representing the federation relation
with IAS, configure the SAML User Attributes & Claims to add a group claim for
security groups:

Set the Source attribute to "Group ID" and the Name to Groups (spelled exactly
like this, with upper case 'G').

Further, in order to keep claims payloads small and to avoid running into the
limitation whereby Microsoft Entra ID limits the number of group claims to
150 in SAML assertions, we highly recommend limiting the groups returned in
the claims to only those groups that were explicitly assigned:
Under "Which groups associated with the user should be returned in the
claim?", answer with "Groups assigned to the application". Then, for the groups
you want to include as claims, assign them to the Enterprise Application
using the "Users and Groups" section and selecting "Add user/group".

In IAS:

On the Corporate Identity Provider configuration, under the Identity Federation
options, ensure that you disable "Use Identity Authentication user store";
otherwise, the group information from Microsoft Entra ID would not be preserved
in the SAML token towards BTP and authorization would fail.

7 Note

If you need to use the Identity Authentication user store (for example, to include
claims which cannot be sourced from Microsoft Entra ID but that are available in
the IAS user store), you can keep this setting enabled. In that case however, you will
need to configure the Default Attributes sent to the application to include the
relevant claims coming from Microsoft Entra ID (for example with the
${corporateIdP.Groups} format).

In BTP:

On the Role Collections that are used by the applications in that Subaccount, map
the Role Collections to User Groups by adding a configuration for the IAS
Identity Provider and setting the Name to the Group ID (Object ID) of the
Microsoft Entra group.

7 Note

In case you would have another claim in Microsoft Entra ID to contain the
authorization information to be used in BTP, you don't have to use the Groups
claim name. This is what BTP uses when you map the Role Collections to user
groups as above, but you can also map the Role Collections to User Attributes
which gives you a bit more flexibility.

4 - Use a single BTP Subaccount only for applications that have similar Identity requirements

Context
Within BTP, each Subaccount can contain multiple applications. However, from the IAS
point of view a "Bundled Application" is a complete BTP Subaccount, not the more
granular applications within it. This means that all Trust settings, Authentication, and
Access configuration as well as Branding and Layout options in IAS applies to all
applications within that Subaccount. Similarly, all Trust Configurations and Role
Collections in BTP also apply to all applications within that Subaccount.

What are we recommending?


We recommend that you combine multiple applications in a single BTP Subaccount only
if they have similar requirements on the identity level (users, groups, identity providers,
roles, trust configuration, branding, ...).

Why this recommendation?


By combining multiple applications that have very different identity requirements into a
single Subaccount in BTP, you could end up with a configuration that is insecure or
easily misconfigured. For example, when a configuration change to a shared resource
like an identity provider is made for a single application in BTP, it affects all
applications relying on that shared resource.

Summary of implementation
Carefully consider how you want to group multiple applications across Subaccounts in
BTP. For more information, see the SAP Account Model documentation .

5 - Use the Production IAS tenant for all end user Authentication and Authorization

Context

When working with IAS, you typically have a Production and a Dev/Test tenant. For
different Subaccounts or applications in BTP, you can choose which identity provider
(IAS tenant) to use.

What are we recommending?


We recommend always using the Production IAS tenant for any interaction with end
users, even in the context of a dev/test version or environment of the application they
have to sign in to.

We recommend using other IAS tenants only for testing identity-related configuration,
which must be done in isolation from the Production tenant.

Why this recommendation?

Because IAS is the centralized component which has been set up to federate with
Microsoft Entra ID, there's only a single place where the federation and identity
configuration must be set up and maintained. Duplicating this in other IAS tenants can
lead to misconfigurations or inconsistencies in end user access between environments.

6 - Define a Process for Rollover of SAML Signing Certificates
Context
When configuring federation between Microsoft Entra ID and IAS, as well as between
IAS and BTP, SAML metadata is exchanged which contains X.509 certificates used for
encryption and cryptographic signatures of the SAML tokens being sent between both
parties. These certificates have expiration dates and must be updated periodically (even
in emergency situations when a certificate was compromised for example).

Note: the default validity period of the initial Microsoft Entra certificate used to sign
SAML assertions is 3 years (and note that the certificate is specific to the Enterprise
Application, unlike OpenID Connect and OAuth 2.0 tokens which are signed by a global
certificate in Microsoft Entra ID). You can choose to generate a new certificate with a
different expiration date, or create and import your own certificate.

When certificates expire, they can no longer be used, and new certificates must be
configured. Therefore, a process must be established to keep the certificate
configuration inside the relying party (which needs to validate the signatures) up to date
with the actual certificates being used to sign the SAML tokens.

In some cases, the relying party can do this automatically by providing it with a
metadata endpoint which returns the latest metadata information dynamically - i.e.,
typically a publicly accessible URL from which the relying party can periodically retrieve
the metadata and update its internal configuration store.

However, IAS only allows Corporate Identity Providers to be set up through an import of
the metadata XML file; it doesn't support providing a metadata endpoint for dynamic
retrieval of the Microsoft Entra metadata (for example,
https://login.microsoftonline.com/my-azuread-tenant/federationmetadata/2007-06/federationmetadata.xml?appid=my-app-id).
Similarly, BTP doesn't allow a new Trust Configuration to be set up from the IAS
metadata endpoint (for example, https://my-ias-tenant.accounts.ondemand.com/saml2/metadata);
it also needs a one-time upload of a metadata XML file.

What are we recommending?


When setting up identity federation between any two systems (for example, Microsoft
Entra ID and IAS as well as IAS and BTP), ensure that you capture the expiration date of
the certificates being used. Ensure that these certificates can be replaced well in
advance, and that there is a documented process to update the new metadata in all
relying parties that depend on these certificates.
As discussed before, we recommend setting up a trust configuration in BTP towards IAS,
which in turn is set up to federate with Microsoft Entra ID as a Corporate Identity
Provider. In this case, the following certificates (which are used for SAML signing and
encryption) are important:

The Subaccount certificate in BTP: when this changes, the Application's SAML 2.0
Configuration in IAS must be updated.
The tenant certificate in IAS: when this changes, both the Enterprise Application's
SAML 2.0 Configuration in Microsoft Entra ID and the Trust Configuration in BTP
must be updated.
The Enterprise Application certificate in Microsoft Entra ID: when this changes, the
Corporate Identity Provider's SAML 2.0 Configuration in IAS must be updated.

SAP has example implementations for client certificate notifications with SAP Cloud
Integration and near-expiry handling. These could be adapted with Azure Integration
Services or Power Automate, but they would need to be modified to work with server
certificates. Such an approach requires a custom implementation.

Why this recommendation?


If the certificates are allowed to expire, or when they are replaced in time but the relying
parties that depend on them are not updated with the new certificate information, users
will no longer be able to sign in to any application through federation. This can mean
significant downtime for all users while you restore the service by reconfiguring the
metadata.

Summary of implementation
Add an email notification address for certificate expiration in Microsoft Entra ID and set
it to a group mailbox so that it isn't sent to a single individual (who may even no longer
have an account by the time the certificate is about to expire). By default, only the user
who created the Enterprise Application will receive a notification.

Consider building automation to execute the entire certificate rollover process. For
example, one can periodically check for expiring certificates and replace them while
updating all relying parties with the new metadata.
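As a starting point for such automation, here's a minimal sketch with the Azure CLI;
<sp-object-id> is a placeholder for the Enterprise Application's service principal
object ID, and the exact flags and output property names can vary by CLI version:

Azure CLI

# List the certificate credentials (with expiry dates) of the Enterprise
# Application's service principal, to feed a periodic expiry check.
az ad sp credential list --id <sp-object-id> --cert \
    --query "[].{thumbprint:customKeyIdentifier, expires:endDateTime}" \
    --output table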

Using Azure AD B2C as the Identity Provider


Azure Active Directory B2C provides business-to-customer identity as a service. Given
that the integration with Azure AD B2C is similar to how you would allow enterprise
users to sign in with Microsoft Entra ID, the recommendations above still mostly apply
when you want to use Azure AD B2C for your customers, consumers or citizens and
allow them to use their preferred social, enterprise, or local account identities.

There are a few important differences, however. Setting up Azure AD B2C as a corporate
identity provider in IAS and configuring federation between both tenants is described in
more detail in this blog post .

Registering a SAML application in Azure AD B2C


Azure AD B2C doesn't have a gallery of enterprise applications that you can use to easily
configure the trust relationship towards the Corporate Identity Provider in IAS. Instead,
you will have to use custom policies to register a SAML application in Azure AD B2C.
This SAML application plays the same logical role as the enterprise application in
Microsoft Entra ID, so the same guidance applies, for example around rollover of SAML
certificates.

Authorization with Azure AD B2C


Azure AD B2C doesn't natively support the use of groups to create collections of users
that you can assign access to, which means that the guidance to use Microsoft Entra
groups for authorization through Role Collections in BTP has to be implemented
differently.

Fortunately, Azure AD B2C is highly customizable, so you can configure the SAML
tokens it sends to IAS to include any custom information. For various options on
supporting authorization claims, see the documentation accompanying the Azure AD
B2C App Roles sample , but in summary: through its API Connector extensibility
mechanism you can optionally still use groups, app roles, or even a custom database to
determine what the user is allowed to access.

Regardless of where the authorization information comes from, it can then be emitted
as the Groups attribute inside the SAML token by configuring that attribute name as the
default partner claim type on the claims schema or by overriding the partner claim type
on the output claims. Note however that BTP allows you to map Role Collections to User
Attributes , which means that any attribute name can be used for authorization
decisions, even if you don't use the Groups attribute name.

Next Steps
Learn more about the initial setup in this tutorial
Plan deploying Microsoft Entra for user provisioning with SAP source and target applications
Manage access to your SAP applications
Discover additional SAP integration scenarios with Microsoft Entra ID and beyond
Expose SAP legacy middleware securely
with Azure PaaS
Article • 02/10/2023

Enabling internal systems and external partners to interact with SAP back ends is a
common requirement. Existing SAP landscapes often rely on the legacy middleware SAP
Process Orchestration (PO) or Process Integration (PI) for their integration and
transformation needs. For simplicity, this article uses the term SAP Process Orchestration
to refer to both offerings.

This article describes configuration options on Azure, with emphasis on internet-facing
implementations.

7 Note

SAP mentions SAP Integration Suite (specifically, SAP Cloud Integration) running on
Business Technology Platform (BTP) as the successor for SAP PO and PI. Both the BTP
platform and the services are available on Azure. For more information, see SAP
Discovery Center. For more information about the maintenance support timeline for
the legacy components, see SAP OSS note 1648480.

Overview
Existing implementations based on SAP middleware have often relied on SAP's
proprietary dispatching technology called SAP Web Dispatcher . This technology
operates on layer 7 of the OSI model . It acts as a reverse proxy and addresses load-
balancing needs for downstream SAP application workloads like SAP Enterprise
Resource Planning (ERP), SAP Gateway, or SAP Process Orchestration.

Dispatching approaches include traditional reverse proxies like Apache, platform as a
service (PaaS) options like Azure Load Balancer, and the opinionated SAP Web
Dispatcher. The overall concepts described in this article apply to the options
mentioned. For guidance on using non-SAP load balancers, see SAP's wiki.

7 Note

All described setups in this article assume a hub-and-spoke network topology, where
shared services are deployed into the hub. Based on the criticality of SAP, you might
need even more isolation. For more information, see the SAP design guide for
perimeter networks.

Primary Azure services


Azure Application Gateway handles public internet-based and internal private HTTP
routing, along with encrypted tunneling across Azure subscriptions. Examples include
security and autoscaling.

Azure Application Gateway is focused on exposing web applications, so it offers a web
application firewall (WAF). Workloads in other virtual networks that will communicate
with SAP through Azure Application Gateway can be connected via private links, even
across tenants.

Azure Firewall handles public internet-based and internal private routing for traffic types
on layers 4 to 7 of the OSI model. It offers filtering and threat intelligence that feed
directly from Microsoft Security.

Azure API Management handles public internet-based and internal private routing
specifically for APIs. It offers request throttling, usage quota and limits, governance
features like policies, and API keys to break down services per client.

Azure VPN Gateway and Azure ExpressRoute serve as entry points to on-premises
networks. They're abbreviated in the diagrams as VPN and XR.

Setup considerations
Integration architecture needs differ, depending on the interface that an organization
uses. SAP-proprietary technologies like the Intermediate Document (IDoc) framework,
Business Application Programming Interface (BAPI), transactional Remote Function
Calls (tRFCs), or plain RFCs require a specific runtime environment. They operate on
layers 4 to 7 of the OSI model, unlike modern APIs that typically rely on HTTP-based
communication (layer 7 of the OSI model). Because of that, the interfaces can't be
treated the same way.

This article focuses on modern APIs and HTTP, including integration scenarios like
Applicability Statement 2 (AS2) . File Transfer Protocol (FTP) serves as an example to
handle non-HTTP integration needs. For more information about Microsoft load-
balancing solutions, see Load-balancing options.

7 Note

SAP publishes dedicated connectors for its proprietary interfaces. Check SAP's
documentation for Java and .NET , for example. They're supported by
Microsoft gateways too. Be aware that IDocs can also be posted via HTTP .

Security concerns require the use of firewalls for lower-level protocols and WAFs to
address HTTP-based traffic with Transport Layer Security (TLS). To be effective, TLS
sessions need to be terminated at the WAF level. To support zero-trust approaches, we
recommend that you re-encrypt afterward to provide end-to-end encryption.

Integration protocols such as AS2 can raise alerts by using standard WAF rules. We
recommend using the Application Gateway WAF triage workbook to identify and
better understand why the rule is triggered, so you can remediate effectively and
securely. Open Web Application Security Project (OWASP) provides the standard rules.
For a detailed video session on this topic with emphasis on SAP Fiori exposure, see the
SAP on Azure webcast .
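To illustrate, a minimal sketch with the Azure CLI that creates a WAF policy and
attaches the OWASP managed rule set; the policy and resource group names are
placeholders, and individual rules should be tuned after triage rather than disabled
broadly:

Azure CLI

# Create a WAF policy for Azure Application Gateway.
az network application-gateway waf-policy create \
    --name sap-waf-policy \
    --resource-group my-rg

# Attach the OWASP managed rule set (version 3.2) to the policy.
az network application-gateway waf-policy managed-rule rule-set add \
    --policy-name sap-waf-policy \
    --resource-group my-rg \
    --type OWASP \
    --version 3.2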

You can further enhance security by using mutual TLS (mTLS), which is also called
mutual authentication. Unlike normal TLS, it verifies the client identity.

7 Note

Virtual machine (VM) pools require a load balancer. For better readability, the
diagrams in this article don't show a load balancer.

7 Note

If you don't need SAP-specific balancing features that SAP Web Dispatcher
provides, you can replace them with Azure Load Balancer. This replacement gives
the benefit of a managed PaaS offering instead of an infrastructure as a service
(IaaS) setup.

Scenario: Inbound HTTP connectivity focused


SAP Web Dispatcher doesn't offer a WAF. Because of that, we recommend Azure
Application Gateway for a more secure setup. SAP Web Dispatcher and SAP Process
Orchestration remain responsible for helping protect the SAP back end from request
overload, through sizing guidance and concurrent request limits. No throttling
capability is available in the SAP workloads themselves.

You can avoid unintentional access through access control lists on SAP Web
Dispatcher.

One of the scenarios for SAP Process Orchestration communication is inbound flow.
Traffic might originate from on-premises, external apps or users, or an internal system.
The following example focuses on HTTPS.

Scenario: Outbound HTTP/FTP connectivity focused
For the reverse communication direction, SAP Process Orchestration can use virtual
network routing to reach on-premises workloads or internet-based targets via the
internet breakout. Azure Application Gateway acts as a reverse proxy in such scenarios.
For non-HTTP communication, consider adding Azure Firewall. For more information,
see Scenario: File-based and Comparison of gateway setups later in this article.

The following outbound scenario shows two possible methods. One uses HTTPS via
Azure Application Gateway calling a web service (for example, SOAP adapter). The other
uses FTP over SSH (SFTP) via Azure Firewall transferring files to a business partner's SFTP
server.
Scenario: API Management focused
Compared to the scenarios for inbound and outbound connectivity, the introduction of
Azure API Management in internal mode (private IP only and virtual network
integration) adds built-in capabilities like:

Throttling.
API governance.
Additional security options like modern authentication flows.
Microsoft Entra ID integration.
The opportunity to add SAP APIs to a central API solution across the company.

When you don't need a WAF, you can deploy Azure API Management in external mode
by using a public IP address. That deployment simplifies the setup while keeping the
throttling and API governance capabilities. Basic protection is implemented for all Azure
PaaS offerings.
Scenario: Global reach
Azure Application Gateway is a region-bound service. Compared to the preceding
scenarios, Azure Front Door ensures cross-region global routing, including a web
application firewall. For details about the differences, see this comparison.

The following diagram condenses SAP Web Dispatcher, SAP Process Orchestration, and
the back end into a single image for better readability.

Scenario: File-based
Non-HTTP protocols like FTP can't be addressed with Azure API Management,
Application Gateway, or Azure Front Door as shown in the preceding scenarios. Instead,
the managed Azure Firewall instance or the equivalent network virtual appliance (NVA)
takes over the role of securing inbound requests.

Files need to be stored before SAP can process them. We recommend that you use
SFTP. Azure Blob Storage supports SFTP natively.
Alternative SFTP options are available in Azure Marketplace if necessary.
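As a minimal sketch with the Azure CLI (account and resource group names are
placeholders; the native SFTP endpoint requires a storage account with hierarchical
namespace enabled):

Azure CLI

# Enable the native SFTP endpoint on an existing storage account.
az storage account update \
    --name <storageaccount> \
    --resource-group <resource-group> \
    --enable-sftp true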

The following diagram shows a variation of this scenario with integration targets
externally and on-premises. Different types of secure FTP illustrate the communication
path.

For insights into Network File System (NFS) file shares as an alternative to Blob Storage,
see NFS file shares in Azure Files.

Scenario: SAP RISE specific


SAP RISE deployments are technically identical to the scenarios described earlier, with
the exception that SAP itself manages the target SAP workload. The described concepts
can be applied here.

The following diagrams show two setups as examples. For more information, see the
SAP RISE reference guide.
) Important

Contact SAP to ensure that communication ports for your scenario are allowed and
opened in NSGs.

HTTP inbound
In the first setup, the customer governs the integration layer, including SAP Process
Orchestration and the complete inbound path. Only the final SAP target runs on the
RISE subscription. Communication to the RISE-hosted workload is configured through
virtual network peering, typically over the hub. A potential integration could be IDocs
posted to the SAP ERP web service /sap/bc/idoc_xml by an external party.

This second example shows a setup where SAP RISE runs the whole integration chain,
except for the API Management layer.

File outbound
In this scenario, the SAP-managed Process Orchestration instance writes files to the
customer-managed file share on Azure or to a workload sitting on-premises. The
customer handles the breakout.
Comparison of gateway setups

7 Note

Performance and cost metrics assume production-grade tiers. For more information,
see the Azure pricing calculator. Also see the following articles: Azure Firewall
performance, Application Gateway high-traffic support, and Capacity of an Azure API
Management instance.

Depending on the integration protocols you're using, you might need multiple
components. For more information about the benefits of the various combinations of
chaining Azure Application Gateway with Azure Firewall, see Azure Firewall and
Application Gateway for virtual networks.

Integration rule of thumb


To determine which integration scenarios described in this article best fit your
requirements, evaluate them on a case-by-case basis. Consider enabling the following
capabilities:

Request throttling by using API Management

Concurrent request limits on SAP Web Dispatcher

Mutual TLS to verify the client and the receiver

WAF and re-encryption after TLS termination

Azure Firewall for non-HTTP integrations

High availability and disaster recovery for VM-based SAP integration workloads

Modern authentication mechanisms like OAuth2, where applicable

A managed key store like Azure Key Vault for all involved credentials, certificates,
and keys

Alternatives to SAP Process Orchestration with Azure Integration Services
With the Azure Integration Services portfolio , you can natively address the integration
scenarios that SAP Process Orchestration covers. For insights on how to design SAP
IFlow patterns through cloud-native means, see this blog series . The connector guide
contains more details about AS2 and EDIFACT.

For more information, view the Azure Logic Apps connectors for your desired SAP
interfaces.

Next steps
Protect APIs with Application Gateway and API Management

Integrate API Management in an internal virtual network with Application Gateway


Deploy the Application Gateway WAF triage workbook to better understand SAP-related
WAF alerts

Understand the Application Gateway WAF for SAP

Understand implications of combining Azure Firewall and Azure Application Gateway

Work with SAP OData APIs in Azure API Management


Enable SAP Principal Propagation for
live OData feeds with Power Query
Article • 10/12/2023

Working with SAP datasets in Microsoft Excel or Power BI is a common requirement for
customers.

This article describes the required configurations and components to enable SAP
dataset consumption via OData with Power Query. The SAP data integration is
considered "live" because it can be refreshed from clients such as Microsoft Excel or
Power BI on demand, unlike data exports (such as SAP List Viewer (ALV) CSV exports),
which are static by nature and have no continuous relationship with the data origin.

The article puts emphasis on end-to-end user mapping between the known Microsoft
Entra identity in Power Query and the SAP backend user. This mechanism is often
referred to as SAP Principal Propagation.

The focus of the described configuration is on Azure API Management, SAP Gateway,
SAP OAuth 2.0 Server with AS ABAP, and OData sources, but the concepts used apply
to any web-based resource.

) Important

SAP Principal Propagation ensures user mapping to the licensed named SAP user. For
any SAP license-related questions, contact your SAP representative.

Overview of Microsoft products with SAP integration
Integrations between SAP products and the Microsoft 365 portfolio range from custom
code and partner add-ons to fully customized Office products. Here are a couple of
examples:

SAP Analysis for Microsoft Office Excel and PowerPoint

SAP Analytics Cloud, add-in for Microsoft Office

Access SAP Data Warehouse Cloud with Microsoft Excel


SAP HANA Connector for Power Query

Custom Excel Macros to interact with SAP back ends

Export from SAP List Viewer (ALV) to Microsoft Excel

The mechanism described in this article uses the standard built-in OData capabilities of
Power Query and puts emphasis on SAP landscapes deployed on Azure. Address on-
premises landscapes with the Azure API Management self-hosted gateway.

For more information on which Microsoft products support Power Query in general, see
the Power Query documentation.

Setup considerations
End users have a choice between local desktop or web-based clients (for instance Excel
or Power BI). The client execution environment needs to be considered for the network
path between the client application and the target SAP workload. Network access
solutions such as VPN aren't in scope for apps like Excel for the web.

Azure API Management reflects local and web-based environment needs with different
deployment modes that can be applied to Azure landscapes (internal or external).
Internal refers to instances that are fully restricted to a private virtual network,
whereas external retains public access to Azure API Management. On-premises
installations require a hybrid deployment to apply the approach as-is, using the Azure
API Management self-hosted gateway.

Power Query requires matching API service URL and Microsoft Entra application ID URL.
Configure a custom domain for Azure API Management to meet the requirement.

SAP Gateway needs to be configured to expose the desired target OData services.
Discover and activate available services via SAP transaction code /IWFND/MAINT_SERVICE .
For more information, see SAP's OData configuration .

Azure API Management custom domain configuration
An example configuration in API Management uses a custom domain called
api.custom-apim.domain.com with a managed certificate and an Azure App Service
Domain. For more domain certificate options, see the Azure API Management
documentation.

Complete the setup of your custom domain as per the domain requirements. For more
information, see the custom domain documentation. To prove domain name ownership
and grant access to the certificate, add the required DNS records to your Azure App
Service Domain custom-apim.domain.com.

The respective Microsoft Entra application registration for the Azure API Management
tenant uses this custom domain as its Application ID URL.
7 Note

If a custom domain for Azure API Management isn't an option for you, you need to
use a custom Power Query connector instead.

Azure API Management policy design for Power Query
Use this Azure API Management policy for your target OData API to support Power
Query's authentication flow. See below a snippet from that policy highlighting the
authentication mechanism. Find the used client ID for Power Query here.

XML

<!-- if empty Bearer token supplied assume Power Query sign-in request as
described [here:](/power-query/connectorauthentication#supported-workflow) -
->
<when
condition="@(context.Request.Headers.GetValueOrDefault("Authorization","").T
rim().Equals("Bearer"))">
<return-response>
<set-status code="401" reason="Unauthorized" />
<set-header name="WWW-Authenticate" exists-action="override">
<!-- Check the client ID for Power Query [here:](/power-
query/connectorauthentication#supported-workflow) -->
<value>Bearer
authorization_uri=https://login.microsoftonline.com/{{AADTenantId}}/oauth2/a
uthorize?response_type=code%26client_id=a672d62c-fc7b-4e81-a576-
e60dc46e951d</value>
</set-header>
</return-response>
</when>

In addition to the support of the Organizational Account login flow, the policy supports
OData URL response rewriting because the target server replies with original URLs. See
below a snippet from the mentioned policy:

XML

<!-- URL rewrite in body only required for GET operations -->
<when condition="@(context.Request.Method == "GET")">
<!-- ensure downstream API metadata matches Azure API Management caller
domain in Power Query -->
<find-and-replace from="@(context.Api.ServiceUrl.Host +":"+
context.Api.ServiceUrl.Port + context.Api.ServiceUrl.Path)"
to="@(context.Request.OriginalUrl.Host + ":" +
context.Request.OriginalUrl.Port + context.Api.Path)" />
</when>

7 Note

For more information about secure SAP access from the Internet and SAP perimeter
network design, see this guide. Regarding securing SAP APIs with Azure, see this
article.

SAP OData authentication via Power Query on Excel Desktop
With the given configuration, the built-in authentication mechanism of Power Query
becomes available to the exposed OData APIs. Add a new OData source to the Excel
sheet via the Data ribbon (Get Data -> From Other Sources -> From OData Feed).
Maintain your target service URL. The example below uses the SAP Gateway demo
service GWSAMPLE_BASIC. Discover or activate it using SAP transaction
/IWFND/MAINT_SERVICE . Finally, add it to Azure API Management using the official
OData import guide. Retrieve the Base URL and insert it in your target application. The
example shows the integration experience with Excel Desktop.

Switch the login method to Organizational account and click Sign in. Supply the
Microsoft Entra account that is mapped to the named SAP user on the SAP Gateway
using SAP Principal Propagation. For more information about the configuration, see this
Microsoft tutorial. Learn more about SAP Principal Propagation from this SAP
community post and this video series .
Continue to choose at which level the authentication settings should be applied by
Power Query in Excel. The example below shows a setting that would apply to all OData
services hosted on the target SAP system (not only to the sample service
GWSAMPLE_BASIC).

7 Note

The authorization scope setting at the URL level is independent of the actual
authorizations on the SAP backend. SAP Gateway remains the final validator of each
request and of the associated authorizations of a mapped named SAP user.

) Important

The above guidance focuses on the process of obtaining a valid authentication token
from Microsoft Entra ID via Power Query. This token needs to be further processed for
SAP Principal Propagation.

Configure SAP Principal Propagation with Azure API Management
Use this second Azure API Management policy for SAP to complete the configuration
for SAP Principal Propagation on the middle layer. For more information about the
configuration of the SAP Gateway backend, see this Microsoft tutorial.
7 Note

Learn more about SAP Principal Propagation from this SAP community post and
this video series .

The policy relies on an established SSO setup between Microsoft Entra ID and SAP
Gateway (use SAP NetWeaver from the Microsoft Entra gallery). See below an example
with the demo user Adele Vance. User mapping between Microsoft Entra ID and the SAP
system happens based on the user principal name (UPN) as the unique user identifier.
The UPN mapping is maintained on the SAP back end using transaction SAML2. With
this configuration, named SAP users are mapped to the respective Microsoft Entra user.
An example configuration can be viewed on the SAP back end using transaction code
SU01.
For more information about the required SAP OAuth 2.0 Server with AS ABAP
configuration, see this Microsoft tutorial about SSO with SAP NetWeaver using OAuth.

Using the described Azure API Management policies, any Power Query-enabled
Microsoft product can call SAP-hosted OData services while honoring the SAP named-
user mapping.
SAP OData access via other Power Query
enabled applications and services
The above example shows the flow for Excel Desktop, but the approach is applicable to
any Power Query OData-enabled Microsoft product. For more information on the
OData connector of Power Query and which products support it, see the Power Query
Connectors documentation. For more information about which products support Power
Query in general, see the Power Query documentation.

Popular consumers are Power BI, Excel for the web , Power Apps (Dataflows) and
Analysis Service.

Tackle SAP write-back scenarios with Power Automate
The described approach is also applicable to write-back scenarios. For example, you can
use Power Automate to update a business partner in SAP using OData with the HTTP-
enabled connectors (alternatively, use RFCs or BAPIs). A Power BI service dashboard
can, for instance, be connected to Power Automate through value-based alerts and a
button. Learn more about triggering flows from Power BI reports in the Power
Automate documentation.
Such a button triggers a flow that forwards the OData PATCH request to the SAP
Gateway to change the business partner role.

7 Note

Use the Azure API Management policy for SAP to handle the authentication,
refresh tokens, CSRF tokens and overall caching of tokens outside of the flow.
Next steps
Learn where you can use OData with Power Query

Work with SAP OData APIs in Azure API Management

Configure Azure API Management for SAP APIs

Tutorial: Analyze sales data from Excel and an OData feed

Protect APIs with Application Gateway and API Management

Integrate API Management in an internal virtual network with Application Gateway

Understand Azure Application Gateway and Web Application Firewall for SAP

Automate API deployments with APIOps


SAP front-end printing with Universal
Print
Article • 02/01/2024

Printing from your SAP landscape is a requirement for many customers. Depending on
your business, printing needs arise in different areas and SAP applications; examples
include data list printing, mass printing, and label printing. Such production and batch
print scenarios are often solved with specialized hardware, drivers, and printing
solutions. This article addresses options to use Universal Print for SAP front-end
printing by SAP users.

Universal Print is a cloud-based print solution that enables organizations to manage
printers and printer drivers in a centralized manner. It removes the need for dedicated
print servers and is available for use by company employees and applications. While
Universal Print runs entirely on Microsoft Azure, there's no such requirement for use
with SAP systems: your SAP landscape can run on Azure, be located on-premises, or
operate in any other cloud environment. You can use SAP systems deployed by SAP
RISE. Similarly, browser-based SAP cloud services can be used with Universal Print in
most front-end printing scenarios.

Prerequisites
SAP front-end printing sends output to a printer available to the user on their front-end
device; in other words, a printer accessible by the operating system of the same client
computer that runs SAP GUI or the browser. To use Universal Print, you need access to
such printers:

Client OS with support for Universal Print
A Universal Print printer added to your Windows client
The ability to print on the Universal Print printer from the OS

See the Universal Print documentation for details on these prerequisites. As a result, one
or more Universal Print printers are visible in your device’s printer list. For SAP front-end
printing, it's not necessary to make it your default printer.

SAP web applications


A web application such as SAP Fiori or SAP Web GUI is used to access SAP data and
display it. It doesn't matter whether you access the SAP system through an internal
network or a public URL, or whether your SAP system is an ABAP or Java system or an
SAP application running within SAP Business Technology Platform. All SAP application
data displayed within a browser can be printed. The print job creation in Universal Print
is done by the operating system and doesn't require any SAP configuration at all; there's
no direct SAP integration or communication with Universal Print.

SAP GUI printing


For SAP front-end printing, Universal Print relies on SAP GUI and SAP printer access
method G. Your SAP system likely has one or more SAP printers defined already for this
purpose; an example is the SAP printer LOCL, defined in SAP transaction code SPAD.
For Universal Print use, it's important that the access method is set to 'G', as this uses
SAP GUI's integration into the operating system. For the host printer field, the value
__DEFAULT selects the relevant default printer. If you leave the option "No device
selection at front end" unchecked, you're prompted to select the printer from your OS
printer list. With the option checked, print output goes directly to the OS default printer
without extra user input.

With such an SAP printer definition, SAP GUI uses the operating system printer details.
The operating system already knows your added Universal Print printers. As with SAP
web applications, there's no direct communication between the SAP system and
Universal Print APIs. There are no settings to configure in your SAP system beyond the
available output device for front-end printing.

When using SAP GUI for HTML and front-end printing, you can print to an SAP-defined
printer, too. In the SAP system, you need a front-end printer with access method 'G' and
a device type of PDF or a derivative. For more information, see SAP's documentation.
Such print output is displayed in the browser as a PDF from the SAP system. You open
the common OS printing dialog and select a Universal Print printer installed on your
computer.

Limitations
SAP defines front-end printing with several constraints. It can't be used for background
printing, nor should it be relied upon for production or mass printing. Verify that your
SAP printer definition is correct, as printers with access method 'F' don't work correctly
with current SAP releases. More details can be found in SAP note 2028598 - Technical
changes for front-end printing with access method F.

Next steps
Check out the documentation:

SAP’s print queue API


Universal Print API
az hanainstance
Reference

7 Note

This reference is part of the sap-hana extension for the Azure CLI (version 2.0.46 or
higher). The extension will automatically install the first time you run an az
hanainstance command. Learn more about extensions.

(PREVIEW) Manage Azure SAP HANA Instance.

Commands

Name Description Type Status

az hanainstance create Create a new SAP HANA Instance. Extension GA

az hanainstance delete Delete a SAP HANA Instance. Extension GA

az hanainstance list List SAP HANA Instances. Extension GA

az hanainstance restart Restart a SAP HANA Instance. Extension GA

az hanainstance show Get the details of a SAP HANA Instance. Extension GA

az hanainstance shutdown Shutdown a SAP HANA Instance. Extension GA

az hanainstance start Start a SAP HANA Instance. Extension GA

az hanainstance update Update the Tags field of a SAP HANA Instance. Extension GA

az hanainstance create

Create a new SAP HANA Instance.

Azure CLI

az hanainstance create --instance-name
                       --ip-address
                       --location
                       --os-computer-name
                       --partner-node-id
                       --resource-group
                       --ssh-public-key

Required Parameters

--instance-name -n

The name of the SAP HANA instance.

--ip-address

IP address to connect to the SAP HANA instance.

--location -l

Location of the SAP HANA instance. Default is the location of target resource group.

--os-computer-name

OS computer name of the SAP HANA instance.

--partner-node-id

ARM ID of a HANA Instance on the network to connect the SAP HANA instance.

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--ssh-public-key

SSH public key to connect to the SAP HANA instance.
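For illustration, a hypothetical invocation with placeholder values (the resource group,
instance details, and partner node ID are assumptions, not values from a real
deployment):

Azure CLI

# Create a SAP HANA instance using placeholder values.
az hanainstance create \
    --resource-group my-rg \
    --instance-name myhanainstance \
    --location westus2 \
    --ip-address 10.0.0.4 \
    --os-computer-name myhanahost \
    --partner-node-id /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.HanaOnAzure/hanaInstances/partnernode \
    --ssh-public-key "$(cat ~/.ssh/id_rsa.pub)"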

Global Parameters

--debug

Increase logging verbosity to show all debug logs.


--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance delete

Delete a SAP HANA Instance.

Azure CLI

az hanainstance delete [--ids]
                       [--instance-name]
                       [--resource-group]
                       [--subscription]

Optional Parameters
--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.

--instance-name -n

The name of the SAP HANA instance.

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query
JMESPath query string. See http://jmespath.org/ for more information and
examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance list

List SAP HANA Instances.

Azure CLI

az hanainstance list [--resource-group]

Optional Parameters

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
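For example, to list the instances in a resource group in table form (the resource group
name is a placeholder):

Azure CLI

# List SAP HANA instances in a resource group as a table.
az hanainstance list --resource-group my-rg --output table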

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors
Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance restart

Restart a SAP HANA Instance.

Azure CLI

az hanainstance restart [--ids]
                        [--instance-name]
                        [--resource-group]
                        [--subscription]

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.
--instance-name -n

The name of the SAP HANA instance.

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription
Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID .

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance show

Get the details of a SAP HANA Instance.

Azure CLI

az hanainstance show [--ids]
                     [--instance-name]
                     [--resource-group]
                     [--subscription]

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.

--instance-name -n

The name of the SAP HANA instance.

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.
Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance shutdown

Shutdown a SAP HANA Instance.

Azure CLI

az hanainstance shutdown [--ids]
                         [--instance-name]
                         [--resource-group]
                         [--subscription]

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.

--instance-name -n

The name of the SAP HANA instance.

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hanainstance start

Start a SAP HANA Instance.

Azure CLI

az hanainstance start [--ids]
                      [--instance-name]
                      [--resource-group]
                      [--subscription]

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.

--instance-name -n

The name of the SAP HANA instance.


--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.


az hanainstance update

Update the Tags field of a SAP HANA Instance.

Azure CLI

az hanainstance update [--add]
                       [--force-string]
                       [--ids]
                       [--instance-name]
                       [--no-wait]
                       [--remove]
                       [--resource-group]
                       [--set]
                       [--subscription]

Optional Parameters

--add

Add an object to a list of objects by specifying a path and key value pairs. Example:
--add property.listProperty <key=value, string or JSON string>.
default value: []

--force-string

When using 'set' or 'add', preserve string literals instead of attempting to convert to
JSON.
default value: False

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID
containing all information of 'Resource Id' arguments. You should provide either --ids
or other 'Resource Id' arguments.

--instance-name -n

The name of the SAP HANA instance.

--no-wait
Do not wait for the long-running operation to finish.
default value: False

--remove

Remove a property or an element from a list. Example: --remove property.list
<indexToRemove> OR --remove propertyToRemove.
default value: []

--resource-group -g

Name of resource group. You can configure the default group using az configure --
defaults group=<name> .

--set

Update an object by specifying a property path and value to set. Example: --set
property1.property2=<value> .
default value: []

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.
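For illustration, a hypothetical invocation that sets a tag through the generic --set
argument (all names and values are placeholders):

Azure CLI

# Update a tag on a SAP HANA instance using the generic --set argument.
az hanainstance update \
    --resource-group my-rg \
    --instance-name myhanainstance \
    --set tags.environment=production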

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.


Az.HanaOnAzure
Reference

Microsoft Azure PowerShell: HanaOn cmdlets

SAP HANA on Azure


Get-AzSapMonitor Gets properties of a SAP monitor for the specified subscription, resource group, and resource name.

Get-AzSapMonitorProviderInstance Gets properties of a provider instance for the specified subscription, resource group, SapMonitor name, and resource name.

New-AzSapMonitor Creates a SAP monitor for the specified subscription, resource group, and resource name.

New-AzSapMonitorProviderInstance Creates a provider instance for the specified subscription, resource group, SapMonitor name, and resource name.

Remove-AzSapMonitor Deletes a SAP monitor with the specified subscription, resource group, and monitor name.

Remove-AzSapMonitorProviderInstance Deletes a provider instance for the specified subscription, resource group, SapMonitor name, and resource name.

Update-AzSapMonitor Patches the Tags field of a SAP monitor for the specified subscription, resource group, and monitor name.

SAP HANA database
Article • 01/24/2024

Summary
Item Description

Release State: General Availability

Products: Excel, Power BI (Semantic models), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Analysis Services

Authentication Types Supported: Basic, Database, Windows

Function Reference Documentation: SapHana.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You'll need an SAP account to sign in to the website and download the drivers. If you're
unsure, contact the SAP administrator in your organization.

To use SAP HANA in Power BI Desktop or Excel, you must have the SAP HANA ODBC
driver installed on the local client computer for the SAP HANA data connection to work
properly. You can download the SAP HANA Client tools from SAP Development Tools ,
which contains the necessary ODBC driver. Or you can get it from the SAP Software
Download Center . In the Software portal, search for the SAP HANA CLIENT for
Windows computers. Since the SAP Software Download Center changes its structure
frequently, more specific guidance for navigating that site isn't available. For instructions
about installing the SAP HANA ODBC driver, go to Installing SAP HANA ODBC Driver on
Windows 64 Bits .

To use SAP HANA in Excel, you must have either the 32-bit or 64-bit SAP HANA ODBC
driver (depending on whether you're using the 32-bit or 64-bit version of Excel) installed
on the local client computer.

This feature is only available in Excel for Windows if you have Office 2019 or a Microsoft
365 subscription . If you're a Microsoft 365 subscriber, make sure you have the latest
version of Office .

HANA 1.0 SPS 12 rev 122.09, 2.0 SPS 3 rev 30, and BW/4HANA 2.0 are supported.

Capabilities Supported
Import
Direct Query (Power BI semantic models)
Advanced
SQL Statement

Connect to an SAP HANA database from Power Query Desktop
To connect to an SAP HANA database from Power Query Desktop:

1. Select Get Data > SAP HANA database in Power BI Desktop or From Database >
From SAP HANA Database in the Data ribbon in Excel.

2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .
By default, the port number is set to support a single container database. If your
SAP HANA database can contain more than one multitenant database container,
select Multi-container system database (30013). If you want to connect to a
tenant database or a database with a non-default instance number, select Custom
from the Port drop-down menu.

If you're connecting to an SAP HANA database from Power BI Desktop, you're also
given the option of selecting either Import or DirectQuery. The example in this
article uses Import, which is the default (and the only mode for Excel). For more
information about connecting to the database using DirectQuery in Power BI
Desktop, go to Connect to SAP HANA data sources by using DirectQuery in Power
BI.

You can also enter an SQL statement or enable column binding from Advanced
options. For more information, see Connect using advanced options.

Once you've entered all of your options, select OK.

3. If you're accessing a database for the first time, you'll be asked to enter your
credentials for authentication. In this example, the SAP HANA server requires
database user credentials, so select Database and enter your user name and
password. If necessary, enter your server certificate information.

Also, you may need to validate the server certificate. For more information about
using validate server certificate selections, see Using SAP HANA encryption. In
Power BI Desktop and Excel, the validate server certificate selection is enabled by
default. If you've already set up these selections in ODBC Data Source
Administrator, clear the Validate server certificate check box. To learn more about
using ODBC Data Source Administrator to set up these selections, go to Configure
SSL for ODBC client access to SAP HANA.

For more information about authentication, go to Authentication with a data


source.

Once you've filled in all required information, select Connect.

4. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.

Connect to an SAP HANA database from Power Query Online
To connect to SAP HANA data from Power Query Online:

1. From the Data sources page, select SAP HANA database.

2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .

3. Optionally, enter an SQL statement from Advanced options. For more information,
see Connect using advanced options.

4. Select the name of the on-premises data gateway to use for accessing the
database.

7 Note

You must use an on-premises data gateway with this connector, whether your
data is local or online.

5. Choose the authentication kind you want to use to access your data. You'll also
need to enter a username and password.

7 Note

Currently, Power Query Online only supports Basic authentication.


6. Select Use Encrypted Connection if you're using any encrypted connection, then
choose the SSL crypto provider. If you're not using an encrypted connection, clear
Use Encrypted Connection. More information: Enable encryption for SAP HANA

7. Select Next to continue.

8. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table describes all of the advanced options you can set in Power Query.
Advanced option Description

SQL Statement: For more information, see Import data from a database using native database query.

Enable column binding: Binds variables to the columns of a SAP HANA result set when fetching data. May potentially improve performance at the cost of slightly higher memory utilization. This option is only available in Power Query Desktop. More information: Enable column binding.

ConnectionTimeout: A duration that controls how long to wait before abandoning an attempt to make a connection to the server. The default value is 15 seconds.

CommandTimeout: A duration that controls how long the server-side query is allowed to run before it is canceled. The default value is ten minutes.

Supported features for SAP HANA


The following list shows the supported features for SAP HANA. Not all features listed
here are supported in all implementations of the SAP HANA database connector.

Both the Power BI Desktop and Excel connector for an SAP HANA database use the
SAP ODBC driver to provide the best user experience.

In Power BI Desktop, SAP HANA supports both DirectQuery and Import options.

Power BI Desktop supports HANA information models, such as Analytic and
Calculation Views, and has optimized navigation.

With SAP HANA, you can also use SQL commands in the native database query
SQL statement to connect to Row and Column Tables in HANA Catalog tables,
which aren't included in the Analytic/Calculation Views provided by the Navigator
experience. You can also use the ODBC connector to query these tables.

Power BI Desktop includes Optimized Navigation for HANA Models.

Power BI Desktop supports SAP HANA Variables and Input parameters.

Power BI Desktop supports HDI-container-based Calculation Views.

The SapHana.Database function now supports connection and command timeouts.
More information: Connect using advanced options.

To access your HDI-container-based Calculation Views in Power BI, ensure that
the HANA database users you use with Power BI have permission to access the
HDI runtime container that stores the views you want to access. To grant this
access, create a Role that allows access to your HDI container. Then assign the
role to the HANA database user you'll use with Power BI. (This user must also
have permission to read from the system tables in the _SYS_BI schema, as
usual.) Consult the official SAP documentation for detailed instructions on how
to create and assign database roles. This SAP blog post may be a good place
to start.

There are currently some limitations for HANA variables attached to HDI-based
Calculation Views. These limitations are caused by errors on the HANA side.
First, it isn't possible to apply a HANA variable to a shared column of an
HDI-container-based Calculation View. To fix this limitation, upgrade to HANA 2
version 37.02 or later, or to HANA 2 version 42 or later. Second, multi-entry
default values for variables and parameters currently don't show up in the
Power BI UI. An error in SAP HANA causes this limitation, but SAP hasn't
announced a fix yet.

Enable column binding


Data fetched from the data source is returned to the application in variables that the
application has allocated for this purpose. Before this can be done, the application must
associate, or bind, these variables to the columns of the result set; conceptually, this
process is the same as binding application variables to statement parameters. When the
application binds a variable to a result set column, it describes that variable - address,
data type, and so on - to the driver. The driver stores this information in the structure it
maintains for that statement and uses the information to return the value from the
column when the row is fetched.

Currently, when you use Power Query Desktop to connect to an SAP HANA database,
you can select the Enable column binding advanced option to enable column binding.

You can also enable column binding in existing queries or in queries used in Power
Query Online by manually adding the EnableColumnBinding option to the connection in
the Power Query formula bar or advanced editor. For example:

Power Query M

SapHana.Database("myserver:30015", [Implementation = "2.0",


EnableColumnBinding = true]),

There are limitations associated with manually adding the EnableColumnBinding option:

Enable column binding works in both Import and DirectQuery mode. However,
retrofitting an existing DirectQuery query to use this advanced option isn't
possible. Instead, a new query must be created for this feature to work correctly.

In SAP HANA Server version 2.0 or later, column binding is all or nothing. If some
columns can't be bound, none are bound, and the user receives an exception, for
example, DataSource.Error: Column MEASURE_UNIQUE_NAME of type VARCHAR cannot be
bound (20002 > 16384).

SAP HANA version 1.0 servers don't always report correct column lengths. In this
context, EnableColumnBinding allows for partial column binding. For some queries,
this could mean that no columns are bound. When no columns are bound, no
performance benefits are gained.

Native query support in the SAP HANA database connector
The Power Query SAP HANA database connector supports native queries. For
information about how to use native queries in Power Query, go to Import data from a
database using native database query.
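
As a minimal sketch (the server address and the system table queried are illustrative
assumptions), a native SQL statement can be run against SAP HANA through
Value.NativeQuery:

Power Query M

let
    Source = SapHana.Database("myserver:30015", [Implementation = "2.0"]),
    // Run a native SQL statement against the connected SAP HANA system
    Result = Value.NativeQuery(Source, "select * from ""_SYS_BI"".""M_TIME_DIMENSION""")
in
    Result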

Query folding on native queries


The Power Query SAP HANA database connector now supports query folding on native
queries. More information: Query folding on native queries

7 Note

In the Power Query SAP HANA database connector, native queries don't support
duplicate column names when EnableFolding is set to true.

Parameters in native queries


The Power Query SAP HANA database connector now supports parameters in native
queries. You can specify parameters in native queries by using the Value.NativeQuery
syntax.

Unlike other connectors, the SAP HANA database connector supports EnableFolding =
True and specifying parameters at the same time.
To use parameters in a query, you place question marks (?) in your code as placeholders.
To specify a parameter, you use the SqlType text value and a value for that SqlType in
Value. Value can be any M value, but it must be compatible with the specified SqlType.

There are multiple ways of specifying parameters:

Providing just the values as a list:

Power Query M

{ "Seattle", 1, #datetime(2022, 5, 27, 17, 43, 7) }

Providing the values and the type as a list:

Power Query M

{ [ SqlType = "CHAR", Value = "M" ],


[ SqlType = "BINARY", Value = Binary.FromText("AKvN",
BinaryEncoding.Base64) ],
[ SqlType = "DATE", Value = #date(2022, 5, 27) ] }

Mix and match the two:

Power Query M

{ "Seattle", 1, [ SqlType = "SECONDDATE", Value = #datetime(2022, 5, 27, 17, 43, 7) ] }

SqlType follows the standard type names defined by SAP HANA. For example, the
following list contains the most common types used:

BIGINT
BINARY
BOOLEAN
CHAR
DATE
DECIMAL
DOUBLE
INTEGER
NVARCHAR
SECONDDATE
SHORTTEXT
SMALLDECIMAL
SMALLINT
TIME
TIMESTAMP
VARBINARY
VARCHAR

The following example demonstrates how to provide a list of parameter values.

Power Query M

let
Source = Value.NativeQuery(
SapHana.Database(
"myhanaserver:30015",
[Implementation = "2.0"]
),
"select ""VARCHAR_VAL"" as ""VARCHAR_VAL""
from ""_SYS_BIC"".""DEMO/CV_ALL_TYPES""
where ""VARCHAR_VAL"" = ? and ""DATE_VAL"" = ?
group by ""VARCHAR_VAL""
",
{"Seattle", #date(1957, 6, 13)},
[EnableFolding = true]
)
in
Source

The following example demonstrates how to provide a list of records (or mix values and
records):

Power Query M

let
    Source = Value.NativeQuery(
        SapHana.Database(Server, [Implementation="2.0"]),
        "select
            ""COL_VARCHAR"" as ""COL_VARCHAR"",
            ""ID"" as ""ID"",
            sum(""DECIMAL_MEASURE"") as ""DECIMAL_MEASURE""
        from ""_SYS_BIC"".""DEMO/CV_ALLTYPES""
        where
            ""COL_ALPHANUM"" = ? or
            ""COL_BIGINT"" = ? or
            ""COL_BINARY"" = ? or
            ""COL_BOOLEAN"" = ? or
            ""COL_DATE"" = ?
        group by
            ""COL_ALPHANUM"",
            ""COL_BIGINT"",
            ""COL_BINARY"",
            ""COL_BOOLEAN"",
            ""COL_DATE""
        ",
        {
            [ SqlType = "CHAR", Value = "M" ],                                              // COL_ALPHANUM - CHAR
            [ SqlType = "BIGINT", Value = 4 ],                                              // COL_BIGINT - BIGINT
            [ SqlType = "BINARY", Value = Binary.FromText("AKvN", BinaryEncoding.Base64) ], // COL_BINARY - BINARY
            [ SqlType = "BOOLEAN", Value = true ],                                          // COL_BOOLEAN - BOOLEAN
            [ SqlType = "DATE", Value = #date(2022, 5, 27) ]                                // COL_DATE - TYPE_DATE
        },
        [EnableFolding=false]
    )
in
    Source

Support for dynamic attributes


The way in which the SAP HANA database connector treats calculated columns has been
improved. The SAP HANA database connector is a "cube" connector, and there are
some sets of operations (add items, collapse columns, and so on) that happen in "cube"
space. This cube space is exhibited in the Power Query Desktop and Power Query Online
user interface by the "cube" icon that replaces the more common "table" icon.

Before, when you added a table column (or another transformation that internally adds
a column), the query would "drop out of cube space", and all operations would be done
at a table level. At some point, this drop out could cause the query to stop folding.
Performing cube operations after adding a column was no longer possible.

With this change, the added columns are treated as dynamic attributes within the cube.
Having the query remain in cube space for this operation has the advantage of letting
you continue using cube operations even after adding columns.
7 Note

This new functionality is only available when you connect to Calculation Views in
SAP HANA Server version 2.0 or higher.

The following sample query takes advantage of this new capability. In the past, you
would get a "the value is not a cube" exception when applying
Cube.CollapseAndRemoveColumns.

Power Query M

let
    Source = SapHana.Database("someserver:someport", [Implementation="2.0"]),
    Contents = Source{[Name="Contents"]}[Data],
    SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models = Contents{[Name="SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models"]}[Data],
    PURCHASE_ORDERS1 = SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models{[Name="PURCHASE_ORDERS"]}[Data],
    #"Added Items" = Cube.Transform(PURCHASE_ORDERS1,
        {
            {Cube.AddAndExpandDimensionColumn, "[PURCHASE_ORDERS]", {"[HISTORY_CREATEDAT].[HISTORY_CREATEDAT].Attribute", "[Product_TypeCode].[Product_TypeCode].Attribute", "[Supplier_Country].[Supplier_Country].Attribute"}, {"HISTORY_CREATEDAT", "Product_TypeCode", "Supplier_Country"}},
            {Cube.AddMeasureColumn, "Product_Price", "[Measures].[Product_Price]"}
        }),
    #"Inserted Year" = Table.AddColumn(#"Added Items", "Year", each Date.Year([HISTORY_CREATEDAT]), Int64.Type),
    #"Filtered Rows" = Table.SelectRows(#"Inserted Year", each ([Product_TypeCode] = "PR")),
    #"Added Conditional Column" = Table.AddColumn(#"Filtered Rows", "Region", each if [Supplier_Country] = "US" then "North America" else if [Supplier_Country] = "CA" then "North America" else if [Supplier_Country] = "MX" then "North America" else "Rest of world"),
    #"Filtered Rows1" = Table.SelectRows(#"Added Conditional Column", each ([Region] = "North America")),
    #"Collapsed and Removed Columns" = Cube.CollapseAndRemoveColumns(#"Filtered Rows1", {"HISTORY_CREATEDAT", "Product_TypeCode"})
in
    #"Collapsed and Removed Columns"

Next steps
Enable encryption for SAP HANA

The following articles contain more information that you might find useful when
connecting to an SAP HANA database.

Manage your data source - SAP HANA


Use Kerberos for single sign-on (SSO) to SAP HANA



Connect to SAP HANA data sources by
using DirectQuery in Power BI
Article • 07/05/2023

You can connect to SAP HANA data sources directly using DirectQuery. There are two
options when connecting to SAP HANA:

Treat SAP HANA as a multi-dimensional source (default): In this case, the
behavior is similar to when Power BI connects to other multi-dimensional sources
like SAP Business Warehouse, or Analysis Services. When you connect to SAP
HANA using this setting, a single analytic or calculation view is selected and all the
measures, hierarchies and attributes of that view are available in the field list. As
visuals are created, the aggregate data is always retrieved from SAP HANA. This
technique is the recommended approach, and is the default for new DirectQuery
reports over SAP HANA.

Treat SAP HANA as a relational source: In this case, Power BI treats SAP HANA as
a relational source. This approach offers greater flexibility. Care must be taken with
this approach to ensure that measures are aggregated as expected, and to avoid
performance issues.

The connection approach is determined by a global tool option, which is set by selecting
File > Options and settings and then Options > DirectQuery, then selecting the option
Treat SAP HANA as a relational source, as shown in the following image.
The option to treat SAP HANA as a relational source controls the approach used for any
new report using DirectQuery over SAP HANA. It has no effect on any existing SAP
HANA connections in the current report, nor on connections in any other reports that
are opened. So if the option is currently unchecked, then upon adding a new connection
to SAP HANA using Get Data, that connection is made treating SAP HANA as a multi-
dimensional source. However, if a different report is opened that also connects to SAP
HANA, then that report continues to behave according to the option that was set at the
time it was created. This fact means that any reports connecting to SAP HANA that were
created prior to February 2018 continue to treat SAP HANA as a relational source.

The two approaches constitute different behavior, and it's not possible to switch an
existing report from one approach to the other.

Treat SAP HANA as a multi-dimensional source (default)
All new connections to SAP HANA use this connection method by default, treating SAP
HANA as a multi-dimensional source. In order to treat a connection to SAP HANA as a
relational source, you must select File > Options and settings > Options, then check the
box under Direct Query > Treat SAP HANA as a relational source.

When connecting to SAP HANA as a multi-dimensional source, the following
considerations apply:

In the Get Data Navigator, a single SAP HANA view can be selected. It isn't
possible to select individual measures or attributes. There's no query defined at the
time of connecting, which is different from importing data or when using
DirectQuery while treating SAP HANA as a relational source. This consideration
also means that it's not possible to directly use an SAP HANA SQL query when
selecting this connection method.

All the measures, hierarchies, and attributes of the selected view are displayed in
the field list.

As a measure is used in a visual, SAP HANA is queried to retrieve the measure
value at the level of aggregation necessary for the visual. When dealing with
non-additive measures, such as counters and ratios, all aggregations are
performed by SAP HANA, and no further aggregation is performed by Power BI.

To ensure the correct aggregate values can always be obtained from SAP HANA,
certain restrictions must be imposed. For example, it's not possible to add
calculated columns, or to combine data from multiple SAP HANA views within the
same report.

Treating SAP HANA as a multi-dimensional source doesn't offer the greater flexibility
provided by the alternative relational approach, but it's simpler. The approach also
ensures correct aggregate values when dealing with more complex SAP HANA
measures, and generally results in higher performance.

The Field list includes all measures, attributes, and hierarchies from the SAP HANA view.
Note the following behaviors that apply when using this connection method:

Any attribute that is included in at least one hierarchy is hidden by default.
However, such attributes can be seen if required by selecting View hidden from
the context menu on the field list. From the same context menu, they can be
made visible, if necessary.

In SAP HANA, an attribute can be defined to use another attribute as its label. For
example, Product, with values 1 , 2 , 3 , and so on, could use ProductName, with
values Bike , Shirt , Gloves , and so on, as its label. In this case, a single field
Product is shown in the field list, whose values are the labels Bike , Shirt , Gloves ,
and so on, but which is sorted by, and with uniqueness determined by, the key
values 1 , 2 , 3 . A hidden column Product.Key is also created, allowing access to
the underlying key values if necessary.

Any variables defined in the underlying SAP HANA view are displayed at the time of
connecting, and the necessary values can be entered. Those values can later be changed
by selecting Transform data from the ribbon, and then Edit parameters from the
dropdown menu displayed.

The modeling operations allowed are more restrictive than in the general case when
using DirectQuery, given the need to ensure that correct aggregate data can always be
obtained from SAP HANA. However, it's still possible to make many additions and
changes, including defining measures, renaming and hiding fields, and defining display
formats. All such changes are preserved on refresh, and any non-conflicting changes
made to the SAP HANA view are applied.

Additional modeling restrictions


The other primary modeling restrictions when connecting to SAP HANA using
DirectQuery (treat as multi-dimensional source) are the following restrictions:

No support for calculated columns: The ability to create calculated columns is
disabled. This fact also means that Grouping and Clustering, which create
calculated columns, aren't available.
Additional limitations for measures: There are other limitations imposed on the
DAX expressions that can be used in measures, to reflect the level of support
offered by SAP HANA.
No support for defining relationships: Only a single view can be queried within a
report, and as such, there's no support for defining relationships.
No Data View: The Data View normally displays the detail level data in the tables.
Given the nature of OLAP sources such as SAP HANA, this view isn't available over
SAP HANA.
Column and measure details are fixed: The list of columns and measures seen in
the field list are fixed by the underlying source, and can't be modified. For
example, it's not possible to delete a column, nor change its datatype. It can,
however, be renamed.
Additional limitations in DAX: There are other limitations on the DAX that can be
used in measure definitions, to reflect limitations in the source. For example, it's
not possible to use an aggregate function over a table.
Additional visualization restrictions
There are restrictions in visuals when connecting to SAP HANA using DirectQuery (treat
as multi-dimensional source):

No aggregation of columns: It's not possible to change the aggregation for a
column on a visual, and it's always Do Not Summarize.

Treat SAP HANA as a relational source


When choosing to connect to SAP HANA as a relational source, some extra flexibility
becomes available. For example, you can create calculated columns, include data from
multiple SAP HANA views, and create relationships between the resulting tables.
However, there are differences from the behavior when treating SAP HANA as a
multidimensional source, particularly when the SAP HANA view contains non-additive
measures, for example, distinct counts, or averages, rather than simple sums, and related
to the efficiency of the queries that are run against SAP HANA.

It's useful to start by clarifying the behavior of a relational source such as SQL Server,
when the query defined in Get Data or Power Query Editor performs an aggregation. In
the example that follows, a query defined in Power Query Editor returns the average
price by ProductID.
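
The original article shows this query as a screenshot. The following Power Query M
sketch is an illustrative reconstruction; the server, database, table, and column
names are assumptions, not the article's actual example:

Power Query M

let
    // Hypothetical relational source; only the shape of the aggregation matters here
    Source = Sql.Database("myserver", "SalesDb"),
    ProductSales = Source{[Schema = "dbo", Item = "ProductSales"]}[Data],
    // Group to one row per ProductID with the average of Price
    AveragePriceByProduct = Table.Group(
        ProductSales,
        {"ProductID"},
        {{"AveragePrice", each List.Average([Price]), type nullable number}}
    )
in
    AveragePriceByProduct
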
If the data is being imported into Power BI versus using DirectQuery, the following
situation would result:

The data is imported at the level of aggregation defined by the query created in
Power Query Editor. For example, average price by product. This fact results in a
table with the two columns ProductID and AveragePrice that can be used in visuals.
In a visual, any subsequent aggregation, such as Sum, Average, Min, and others, is
performed over that imported data. For example, including AveragePrice on a
visual uses the Sum aggregate by default, and would return the sum over the
AveragePrice for each ProductID, in this example, 13.67. The same applies to any
alternative aggregate function, such as Min or Average, used on the visual. For
example, Average of AveragePrice returns the average of 6.66, 4 and 3, which
equates to 4.56, and not the average of Price on the six records in the underlying
table, which is 5.17.

If DirectQuery over that same relational source is being used instead of Import, the
same semantics apply and the results would be exactly the same:

Given the same query, logically exactly the same data is presented to the reporting
layer – even though the data isn't actually imported.
In a visual, any subsequent aggregation, such as Sum, Average, and Min, is again
performed over that logical table from the query. And again, a visual containing
Average of AveragePrice returns the same 4.56.

Consider SAP HANA when the connection is treated as a relational source. Power BI can
work with both Analytic Views and Calculation Views in SAP HANA, both of which can
contain measures. Yet today the approach for SAP HANA follows the same principles as
described previously in this section: the query defined in Get Data or Power Query
Editor determines the data available, and then any subsequent aggregation in a visual is
over that data, and the same applies for both Import and DirectQuery. However, given
the nature of SAP HANA, the query defined in the initial Get Data dialog or Power
Query Editor is always an aggregate query, and generally includes measures for which
the actual aggregation used is defined by the SAP HANA view.

The equivalent of the previous SQL Server example is that there's an SAP HANA view
containing ID, ProductID, DepotID, and measures including AveragePrice, defined in the
view as Average of Price.

If in the Get Data experience, the selections made were for ProductID and the
AveragePrice measure, then that is defining a query over the view, requesting that
aggregate data. In the earlier example, for simplicity pseudo-SQL is used that doesn’t
match the exact syntax of SAP HANA SQL. Then any further aggregations defined in a
visual are further aggregating the results of such a query. Again, as described previously
for SQL Server, this result applies both for the Import and DirectQuery case. In the
DirectQuery case, the query from Get Data or Power Query Editor are used in a
subselect within a single query sent to SAP HANA, and thus it isn't actually the case that
all the data would be read in, prior to aggregating further.

All of these behaviors necessitate the following important considerations when
using DirectQuery over SAP HANA:

Attention must be paid to any further aggregation performed in visuals whenever
the measure in SAP HANA is non-additive, for example, not a simple Sum, Min, or
Max.

In Get Data or Power Query Editor, only the required columns should be included
to retrieve the necessary data, reflecting the fact that the result is a query that
must be a reasonable query that can be sent to SAP HANA. For example, if dozens
of columns were selected, with the thought that they might be needed on
subsequent visuals, then even for DirectQuery a simple visual means the aggregate
query used in the subselect contains those dozens of columns, which generally
perform poorly.
For example, selecting five columns (CalendarQuarter, Color, LastName,
ProductLine, SalesOrderNumber) in the Get Data dialog, along with the measure
OrderQuantity, means that later creating a simple visual containing the Min
OrderQuantity results in a SQL query to SAP HANA whose subselect contains the
query from Get Data / Power Query Editor. If this subselect gives a
high-cardinality result, the resulting SAP HANA performance is likely to be
poor.

Because of this behavior, we recommend the items selected in Get Data or Power Query
Editor be limited to those items that are needed, while still resulting in a reasonable
query for SAP HANA.

Best practices
For both approaches to connecting to SAP HANA, recommendations for using
DirectQuery also apply to SAP HANA, particularly recommendations related to ensuring
good performance. For more information, see using DirectQuery in Power BI.

Considerations and limitations


The following list describes all SAP HANA features that aren't fully supported, or
features that behave differently when using Power BI.

Parent Child Hierarchies: Parent child hierarchies aren't visible in Power BI,
because Power BI accesses SAP HANA using the SQL interface, and parent child
hierarchies can't be fully accessed by using SQL.
Other hierarchy metadata: The basic structure of hierarchies is displayed in Power
BI; however, some hierarchy metadata, such as controlling the behavior of ragged
hierarchies, has no effect. Again, this is due to the limitations imposed by the
SQL interface.
Connection using SSL: You can connect using Import and multi-dimensional with
TLS, but can't connect to SAP HANA instances configured to use TLS for the
relational connector.
Support for Attribute views: Power BI can connect to Analytic and Calculation
views, but can't connect directly to Attribute views.
Support for Catalog objects: Power BI can't connect to Catalog objects.
Change to Variables after publish: You can't change the values for any SAP HANA
variables directly in the Power BI service, after the report is published.

Known issues
The following list describes all known issues when connecting to SAP HANA
(DirectQuery) using Power BI.

SAP HANA issue when querying Counters and other measures: Incorrect data is
returned from SAP HANA when connecting to an Analytic View and including a
Counter measure and some other ratio measure in the same visual. This issue
is covered by SAP Note 2128928 (Unexpected results when query a Calculated
Column and a Counter). The ratio measure is incorrect in this case.

Multiple Power BI columns from single SAP HANA column: For some calculation
views, where an SAP HANA column is used in more than one hierarchy, SAP HANA
exposes the column as two separate attributes. This approach results in two
columns being created in Power BI. Those columns are hidden by default, however,
and all queries involving the hierarchies, or the columns directly, behave correctly.

Related content
For more information about DirectQuery, check out the following resources:

DirectQuery in Power BI
Data sources supported by DirectQuery
DirectQuery and SAP BW
On-premises data gateway
Use the SAP Business Warehouse
connector in Power BI Desktop
Article • 03/26/2024

You can use Power BI Desktop to access SAP Business Warehouse (SAP BW) data. The
SAP BW Connector Implementation 2.0 has significant improvements in performance
and capabilities from version 1.0.

For information about how SAP customers can benefit from connecting Power BI to their
SAP BW systems, see the Power BI and SAP BW whitepaper . For details about using
DirectQuery with SAP BW, see DirectQuery and SAP Business Warehouse (BW).

) Important

Version 1.0 of the SAP BW connector is deprecated. New connections use
Implementation 2.0 of the SAP BW connector. All support for version 1.0 will be
removed from the connector in the near future. Use the information in this article
to update existing version 1.0 reports to use Implementation 2.0 of the connector.

Use the SAP BW Connector


Follow these steps to install and connect to data with the SAP BW Connector.

Prerequisite
Implementation 2.0 of the SAP Connector requires the SAP .NET Connector 3.0 or 3.1.
You can download the SAP .NET Connector 3.0 or 3.1 from SAP. Access to the
download requires a valid S-user sign-in.

The .NET Framework connector comes in 32-bit and 64-bit versions. Choose the version
that matches your Power BI Desktop installation version.

When you install, in Optional setup steps, make sure you select Install assemblies to
GAC.
7 Note

The first version of the SAP BW Connector required the NetWeaver DLLs. The
current version doesn't require NetWeaver DLLs.

Connect to SAP BW data in Power BI Desktop


To connect to SAP BW data by using the SAP BW Connector, follow these steps:

1. In Power BI Desktop, select Get data.

2. On the Get Data screen, select Database, and then select either SAP Business
Warehouse Application Server or SAP Business Warehouse Message Server.
3. Select Connect.

4. On the next screen, enter server, system, and client information, and whether to
use Import or DirectQuery connectivity method. For detailed instructions, see:

Connect to an SAP BW Application Server from Power Query Desktop


Connect to an SAP BW Message Server from Power Query Desktop

7 Note

You can use the SAP BW Connector to import data from your SAP BW Server
cubes, which is the default, or you can use DirectQuery to connect to the data.
For more information about using the SAP BW Connector with DirectQuery,
see DirectQuery and SAP Business Warehouse (BW).
You can also select Advanced options, and select a Language code, a custom
MDX statement to run against the specified server, and other options. For more
information, see Use advanced options.

5. Select OK to establish the connection.

6. Provide any necessary authentication data and select Connect. For more
information about authentication, see Authentication with a data source.

7. If you didn't specify a custom MDX statement, the Navigator screen shows a list of
all cubes available on the server. You can drill down and select items from the
available cubes, including dimensions and measures. Power BI shows queries and
cubes that the Open Analysis Interfaces expose.

When you select one or more items from the server, the Navigator shows a
preview of the output table.

The Navigator dialog also provides the following display options:

Only selected items. By default, Navigator displays all items. This option is
useful to verify the final set of items you select. Alternatively, you can select
the column names in the preview area to view the selected items.
Enable data previews. This value is the default, and displays data previews.
Deselect this option to reduce the number of server calls by no longer
requesting preview data.
Technical names. SAP BW supports user-defined technical names for objects
within a cube. Cube owners can expose these friendly names for cube
objects, instead of exposing only the physical names for the objects.

8. After you select all the objects you want, choose one of the following options:

Load to load the entire set of rows for the output table into the Power BI
Desktop data model. The Report view opens. You can begin visualizing the
data, or make further modifications by using the Data or Model views.
Transform Data to open Power Query Editor with the data. You can specify
more data transformation and filtering steps before you bring the entire set
of rows into the Power BI Desktop data model.

Along with data from SAP BW cubes, you can also import data from a wide range of
other data sources in Power BI Desktop, and combine them into a single report. This
ability presents many interesting scenarios for reporting and analytics on top of SAP BW
data.

New options in SAP BW Implementation 2.0


This section lists some SAP BW Connector Implementation 2.0 features and
improvements. For more information, see Implementation details.

Advanced options
You can set the following options under Advanced options on the SAP BW connection
screen:

Execution mode specifies how the MDX interface executes queries on the server.
The following options are valid:
BasXml
BasXmlGzip
DataStream

The default value is BasXmlGzip. This mode can improve performance for low
latency or high volume queries.

Batch size specifies the maximum number of rows to retrieve at a time when
executing an MDX statement. A small number means more calls to the server while
retrieving a large semantic model. A large value might improve performance, but
could cause memory issues on the SAP BW server. The default value is 50000.

Enable characteristic structures changes the way the Navigator displays
characteristic structures. The default value for this option is false, or unchecked.
This option affects the list of objects available for selection, and isn't supported in
native query mode.
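
As a hedged sketch (the server, system number, and client values are illustrative,
and the option names are assumptions based on the connector's M options rather than
settings confirmed by this article), the execution mode and batch size can also be
set on the SapBusinessWarehouse.Cubes call:

Power Query M

SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837", [
    Implementation = "2.0",
    // Use the compressed BasXmlGzip execution mode
    ExecutionMode = SapBusinessWarehouseExecutionMode.BasXmlGzip,
    // Retrieve up to 50000 rows per server call when executing an MDX statement
    BatchSize = 50000
])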

Other improvements
The following list describes other Implementation 2.0 improvements:

Better performance.
Ability to retrieve several million rows of data, and fine-tuning through the batch
size parameter.
Ability to switch execution modes.
Support for compressed mode, especially beneficial for high-latency connections
or large semantic models.
Improved detection of Date variables.
Date (ABAP type DATS) and Time (ABAP type TIMS) dimensions exposed as dates
and times, instead of text values. For more information, see Support for typed
dates in SAP BW.
Better exception handling. Errors that occur in BAPI calls are now surfaced.
Column folding in BasXml and BasXmlGzip modes. For example, if the generated
MDX query retrieves 40 columns but the current selection only needs 10, this
request passes on to the server to retrieve a smaller semantic model.

Update existing Implementation 1.0 reports


You can change existing reports to use Implementation 2.0 only in Import mode.

1. From the existing report in Power BI Desktop, select Transform data in the ribbon,
and then select the SAP Business Warehouse query to update.

2. Right-click the query and select Advanced Editor.

3. In the Advanced Editor, change the SapBusinessWarehouse.Cubes calls as follows.

4. Determine whether the query already contains an options record. If so, add the
Implementation 2.0 option to it and remove any ScaleMeasures option, as shown in
the first sketch after this list.

7 Note

The ScaleMeasures option is deprecated in this implementation. The
connector now always shows unscaled values.

5. If the query doesn't already include an options record, add one that contains
the Implementation 2.0 option, as shown in the second sketch after this list.
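
The original article shows these code changes as screenshots. The following Power
Query M sketches are illustrative reconstructions; the server name, system number,
and client ID are assumptions, not values taken from this article.

If the call already has an options record:

Power Query M

// Before: options record that includes the deprecated ScaleMeasures option
SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837", [LanguageCode = "EN", ScaleMeasures = false])

// After: Implementation 2.0 added, ScaleMeasures removed
SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837", [LanguageCode = "EN", Implementation = "2.0"])

If the call has no options record:

Power Query M

// Before: no options record
SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837")

// After: options record with Implementation 2.0
SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837", [Implementation = "2.0"])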

7 Note

Implementation 2.0 of the SAP BW Connector should be compatible with version 1.
However, there might be some differences because of the different SAP BW MDX
execution modes. To resolve any discrepancies, try switching between execution
modes.

Troubleshooting
This section provides some troubleshooting situations and solutions for the SAP BW
connector. For more information, see SAP Business Warehouse connector
troubleshooting.

Numeric data from SAP BW returns misformatted numeric data
In this issue, SAP BW returns numeric data with decimal points instead of commas. For
example, 1,000,000 returns as 1.000.000.

SAP BW returns decimal data with either a comma or a period as the decimal separator.
To specify which of these characters SAP BW should use for the decimal separator, the
Power BI Desktop driver makes a call to BAPI_USER_GET_DETAIL . This call returns a
structure called DEFAULTS , which has a field called DCPFM that stores Decimal Format
Notation as one of the following values:

' ' (space) = Decimal point is comma: N.NNN,NN

'X' = Decimal point is period: N,NNN.NN

'Y' = Decimal point is comma, with spaces as grouping separators: N NNN NNN,NN

With this issue, the call to BAPI_USER_GET_DETAIL fails for a particular user, who gets the
misformatted data, with an error message similar to the following message:

XML

You are not authorized to display users in group TI:


<item>
<TYPE>E</TYPE>
<ID>01</ID>
<NUMBER>512</NUMBER>
<MESSAGE>You are not authorized to display users in group
TI</MESSAGE>
<LOG_NO/>
<LOG_MSG_NO>000000</LOG_MSG_NO>
<MESSAGE_V1>TI</MESSAGE_V1>
<MESSAGE_V2/>
<MESSAGE_V3/>
<MESSAGE_V4/>
<PARAMETER/>
<ROW>0</ROW>
<FIELD>BNAME</FIELD>
<SYSTEM>CLNTPW1400</SYSTEM>
</item>

To solve this error, the SAP admin must grant the Power BI SAP BW user the right to
execute BAPI_USER_GET_DETAIL . Also, verify that the user's data has the correct DCPFM
value.

Need connectivity for SAP BEx queries


You can run BEx queries in Power BI Desktop by enabling the Release for External
Access property on the BEx query.

Navigator doesn't display a data preview


In this issue, Navigator doesn't display a data preview and instead shows an Object
reference not set to an instance of an object error message.

SAP users need access to the following specific BAPI function modules to get metadata
and retrieve data from SAP BW's InfoProviders:

BAPI_MDPROVIDER_GET_CATALOGS
BAPI_MDPROVIDER_GET_CUBES
BAPI_MDPROVIDER_GET_DIMENSIONS
BAPI_MDPROVIDER_GET_HIERARCHYS
BAPI_MDPROVIDER_GET_LEVELS
BAPI_MDPROVIDER_GET_MEASURES
BAPI_MDPROVIDER_GET_MEMBERS
BAPI_MDPROVIDER_GET_VARIABLES
BAPI_IOBJ_GETDETAIL

To solve this issue, verify that the user has access to the MDPROVIDER modules and
BAPI_IOBJ_GETDETAIL .

Enable tracing
To further troubleshoot these or similar issues, you can enable tracing:

1. In Power BI Desktop, select File > Options and settings > Options.
2. In Options, select Diagnostics, and then select Enable tracing under Diagnostic
Options.
3. Try to get data from SAP BW while tracing is active, and examine the trace file for
more detail.

SAP BW Connection support


The following table describes current Power BI support for SAP BW.

| Product | Mode | Authentication | Connector | SNC library | Supported |
| --- | --- | --- | --- | --- | --- |
| Power BI Desktop | Any | User / password | Application Server | N/A | Yes |
| Power BI Desktop | Any | Windows | Application Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Desktop | Any | Windows via impersonation | Application Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Desktop | Any | User / password | Message Server | N/A | Yes |
| Power BI Desktop | Any | Windows | Message Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Desktop | Any | Windows via impersonation | Message Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Gateway | Import | Same as Power BI Desktop | | | |
| Power BI Gateway | DirectQuery | User / password | Application Server | N/A | Yes |
| Power BI Gateway | DirectQuery | Windows via impersonation (fixed user, no SSO) | Application Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Gateway | DirectQuery | Use SSO via Kerberos for DirectQuery queries option | Application Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Gateway | DirectQuery | User / password | Message Server | N/A | Yes |
| Power BI Gateway | DirectQuery | Windows via impersonation (fixed user, no SSO) | Message Server | sapcrypto + gsskrb5/gx64krb5 | Yes |
| Power BI Gateway | DirectQuery | Use SSO via Kerberos for DirectQuery queries option | Message Server | gsskrb5/gx64krb5 | No |
| Power BI Gateway | DirectQuery | Use SSO via Kerberos for DirectQuery queries option | Message Server | sapcrypto | Yes |

Related content
SAP BW fundamentals
DirectQuery and SAP HANA
DirectQuery and SAP Business Warehouse (BW)
Use DirectQuery in Power BI
Power BI data sources
Power BI and SAP BW whitepaper
What is Azure Center for SAP solutions?
Article • 05/15/2023

Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload
on Azure. Azure Center for SAP solutions is an end-to-end solution that enables you to
create and run SAP systems as a unified workload on Azure and provides a more
seamless foundation for innovation. You can take advantage of the management
capabilities for both new and existing Azure-based SAP systems.

The guided deployment experience takes care of creating the necessary compute,
storage and networking components needed to run your SAP system. Azure Center for
SAP solutions then helps automate the installation of the SAP software according to
Microsoft best practices.

In Azure Center for SAP solutions, you either create a new SAP system or register an
existing one, which then creates a Virtual Instance for SAP solutions (VIS). The VIS brings
SAP awareness to Azure by providing management capabilities, such as being able to
see the status and health of your SAP systems. Another example is quality checks and
insights, which allow you to know when your system isn't following documented best
practices and standards.

You can use Azure Center for SAP solutions to deploy the following types of SAP
systems:

Single server
Distributed
Distributed with High Availability (HA)

For existing SAP systems that run on Azure, there's a simple registration experience. You
can register the following types of existing SAP systems that run on Azure:

An SAP system that runs on SAP NetWeaver or ABAP stack


SAP systems that run on Windows, SUSE and RHEL Linux operating systems
SAP systems that run on HANA, DB2, SQL Server, Oracle, Max DB, or SAP ASE
databases

Azure Center for SAP solutions brings services, tools and frameworks together to
provide an end-to-end unified experience for deployment and management of SAP
workloads on Azure, creating the foundation for you to build innovative solutions for
your unique requirements.
What is a Virtual Instance for SAP solutions?
When you use Azure Center for SAP solutions, you'll create a Virtual Instance for SAP
solutions (VIS) resource. The VIS is a logical representation of an SAP system on Azure.

Every time that you create a new SAP system through Azure Center for SAP solutions, or
register an existing SAP system to Azure Center for SAP solutions, Azure creates a VIS. A
VIS contains the metadata for the entire SAP system.

Each VIS consists of:

The SAP system itself, referred to by the SAP System Identifier (SID)
An ABAP Central Services (ASCS) instance
A database instance
One or more SAP Application Server instances

Inside the VIS, the SID is the parent resource. Your VIS resource is named after the SID of
your SAP system. Any ASCS, Application Server, or database instances are child
resources of the SID. The child resources are associated with one or more VM resources
outside of the VIS. A standalone system has all three instances mapped to a single VM.
A distributed system has one ASCS and one Database instance, with each mapped to a
VM. High Availability (HA) deployments have the ASCS and Database instances mapped
to multiple VMs to enable HA. A distributed or HA type SAP system can have multiple
Application Server instances linked to their respective VMs.

What can you do with Azure Center for SAP solutions?
After you create a VIS, you can:

See an overview of the entire SAP system, including the different parts of the VIS.
View the SAP system metadata. For example, properties of ASCS, database, and
Application Server instances; properties of SAP environment details; and properties
of associated VM resources.
Get the latest status and health check for your SAP system.
Start and stop the SAP application tier.
Get quality checks and insights about your SAP system.
Monitor your Azure infrastructure metrics for your SAP system resources. For
example, the CPU percentage used for ASCS and Application Server VMs, or disk
input/output operations per second (IOPS).
Analyze the cost of running your SAP system on Azure [VMs, disks, load balancers]

Next steps
Create a network for a new VIS deployment
Register an existing SAP system in Azure Center for SAP solutions
Common questions about Azure
Center for SAP solutions
FAQ

This article answers commonly asked questions about Azure Center for SAP solutions.

General
What capabilities do you gain with Azure Center
for SAP solutions?
Before the availability of Azure Center for SAP solutions, customers relied on
documentation and frameworks to help them set up system architecture. Then they had
to figure out how to access the VMs to install SAP. With Azure Center for SAP solutions,
customers can deploy SAP by using a guided experience that streamlines their ability to
select and configure resources. When a customer deploys or registers an existing SAP
system, Azure Center for SAP solutions creates a logical representation of the system, or
a Virtual Instance for SAP solutions (VIS). The VIS unlocks management capabilities, such
as the ability to run quality checks and to manage and monitor the system at the SAP
layer as well as the virtual machine (VM) layer.

Will Azure Center for SAP solutions replace other services such as Azure Monitor for SAP solutions?
No. Azure Center for SAP solutions brings Azure services together into a unified
experience for deploying and managing SAP workloads on Azure. Customers can
choose to use other Azure services, such as Azure Monitor for SAP solutions,
independently or in an integrated manner with Azure Center for SAP solutions. Azure
Monitor for SAP solutions is an Azure-native monitoring product for customers running
SAP landscapes on Azure. This product helps you collect data from Azure infrastructure
and databases in one central location and visually correlate the data for faster
troubleshooting.

What are the pricing and licensing implications?


There's no extra licensing cost for deploying or registering an SAP system and
generating a virtual instance. You pay only for the compute, storage, networking,
virtual machine OS image license (Red Hat/SUSE), and other Azure resources you
deploy or enable.

Which scenarios does Azure Center for SAP solutions support?
Azure Center for SAP solutions supports the following capabilities at this time.

Deployment of infrastructure for SAP S/4HANA systems

Automated software installation of SAP S/4HANA 1909 SPS 03, S/4HANA 2020
SPS 03, and S/4HANA 2021 ISS 00
Installation of S/4HANA manually or through other tools on an infrastructure
deployed by ACSS
Registration of an existing Windows- or Linux-based ABAP/NetWeaver SAP system
on Azure
Monitoring the health and status of an SAP system
Creating a new Azure Monitor for SAP solutions provider or integrating an existing
one
Start and Stop of an SAP system, individual SAP instances, and HANA database.
Quality checks and recommendations for an SAP system with Azure Advisor
Analyze the cost of running the SAP System in Azure [non-shared resources only]

Azure Center for SAP solutions also offers the ability to customize the names of the
Azure resources deployed through it via API, CLI, and PowerShell. Consider referring to
sample API payload templates to deploy a system with custom resource naming
conventions.

What is Azure Virtual Instance for SAP solutions?


The Virtual Instance for SAP solutions (VIS) resource is an essential component of Azure
Center for SAP solutions that forms the foundation for the new experience. The VIS is a
logical representation of your SAP system in Azure that provides SAP awareness in
Azure and unlocks new management capabilities. When you deploy a new SAP system
or register an existing SAP system, Azure Center for SAP solutions automatically creates
a VIS for you.

Is Azure Center for SAP solutions a fully managed offering?
No. Azure Center for SAP solutions makes it easier for you to deploy and manage SAP
systems on Azure while still retaining full control of and responsibility for the underlying
Azure resources, such as virtual machines.

With Azure Center for SAP solutions in General Availability, are there any features in preview?
The following feature is still in preview:

Architecture Visualization - This feature helps customers visualize the architecture
of the SAP S/4HANA system being deployed; the diagram can be downloaded for
documentation purposes.

Which regions support Azure Virtual Instance for SAP solutions as part of Azure Center for SAP solutions?
Azure Virtual Instance for SAP solutions is available in the following regions:

West Europe, North Europe, East US, East US 2, West US, West US 2, West US 3,
Central US, South Central US, North Central US, India Central, East Asia, Southeast
Asia, Korea Central, Japan East, Australia East, Australia Central, Canada Central,
Brazil South, UK South, Germany West Central, Sweden Central, France Central,
Switzerland North, Norway East, South Africa North and UAE North.
You can also see Products available by region page for information about
availability of Azure Center for SAP solutions in different Azure regions.

Registering Existing SAP System Questions
What existing SAP systems can be registered as
a VIS?
You can register any of the following existing systems on Azure to create a VIS in Azure
Center for SAP solutions:

SAP systems running on SAP NetWeaver or ABAP stack


SAP systems running on Windows, SUSE, and RHEL Linux Operating Systems
SAP systems running on HANA, DB2, SQL Server, Oracle, Max DB, SAP ASE
databases

Refer to the product documentation to review the complete list of supported and
unsupported scenarios.

7 Note

If you are registering a system based on Oracle Linux OS and/or Oracle Database,
there's limited support for the quality checks, health and status, and start-stop
capabilities.

What are the prerequisites to register an existing SAP system as an Azure Virtual Instance for SAP solutions?
Here are the prerequisites for registering existing SAP systems as a Virtual Instance for
SAP solutions (VIS) resource. More details can be found in the product documentation.

Appropriate role access on the Azure subscription or resource groups where you
have the SAP system resources.
Azure Center for SAP solutions administrator and Managed Identity Operator or
equivalent role access.
A User-assigned managed identity that has Azure Center for SAP solutions service
role access on the Compute and Storage resource groups and Reader role
access on the Virtual Network resource group of the SAP system. Azure Center
for SAP solutions service uses this identity to discover your SAP system
resources and register the system as a VIS resource.
Make sure ASCS, Application Server, and Database virtual machines of the SAP
system are in Running state.
The sapcontrol and saphostctrl executables must be present on the ASCS, Application
Server, and Database VMs.
Confirm that the sapstartsrv process is running for all SAP instances, and that the
SAP host control agent is running on all VMs in the SAP system.

What resources are created in a customer's subscription when you register an existing SAP system with Azure Center for SAP solutions?
When you register an existing SAP system with ACSS, ACSS creates the following logical
Azure resources:

Virtual Instance for SAP solutions


Central service instance for SAP solutions
App server instance for SAP solutions
Database for SAP solutions

Along with these logical resources, ACSS also creates a Managed resource group with a
storage account. ACSS uses this resource group to enable the SAP management
capabilities.

Deploying New SAP System Questions

What are the prerequisites to deploy a new SAP system with Azure Center for SAP solutions?
Here are prerequisites to deploy a new SAP system with Azure Center for SAP solutions
and create a VIS. For more information, see the deployment guide.

Appropriate role access. You need an Azure account with Azure Center for SAP
solutions administrator and Managed Identity Operator or equivalent role access
to the subscription in which you'll deploy the new system.
A User-assigned managed identity that has Azure Center for SAP solutions service
role access on the subscription or on all resource groups (Compute, Network,
Storage). If you wish to install SAP software through Azure Center for SAP
solutions, also provide the Reader and Data Access role to the identity on the
SAP software storage account where you store the SAP media.
An existing virtual network, where you will deploy your new SAP system, that
allows outbound internet connectivity from the virtual machines even when they
are behind a standard Azure load balancer.
Registration of the appropriate resource providers (Compute, Network, Storage,
Workloads, Capacity).
Sufficient virtual machine quota in the region(s) used for the deployment.
Additionally, a minimum of four cores of the Standard_D4ds_v4 or Standard_E4s_v3
SKU is required for the deployment to succeed.

What resources are created in a customer's subscription when you deploy an SAP S/4HANA system through Azure Center for SAP solutions?
When you deploy and install an SAP S/4HANA system with ACSS, ACSS creates the
following logical Azure resources:

Virtual Instance for SAP solutions


Central service instance for SAP solutions
App server instance for SAP solutions
Database for SAP solutions

ACSS also creates Azure resources that are required to run the SAP S/4HANA system in
the application resource group. These include virtual machines and storage. Customers
can create a separate transport resource group if required.

Along with these resources, ACSS creates a Managed resource group containing a
storage account and key vault. ACSS uses the Managed resource group to enable the
SAP management capabilities.

How do I log on to an SAP system after deploying it from ACSS?
Once you have deployed and installed an SAP S/4HANA system through ACSS, you can
use the SAP master password and the HANA database username and password, available
in the Managed resource group key vault, to log on to the SAP GUI and connect to
the SAP HANA database.

How do I find the instance number(s) for SAP instances deployed through ACSS?
Refer to the documentation to understand the instance numbers configured through
ACSS. Note that instance numbers are currently not configurable.

Management and Miscellaneous Questions

Do Start and Stop operations on the Virtual Instance for SAP solutions resource also start or stop the underlying virtual machines?
No. Start and Stop operations on the Virtual Instance for SAP solutions resource do not
impact the virtual machine running state. The same is true for Start and Stop on the
underlying child resources, such as the Central service instance and Application server
instance. To start SAP, the underlying VMs for that SAP system must be running.
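
As a hedged sketch (the resource group and VIS names are illustrative assumptions),
the Az.Workloads PowerShell module provides cmdlets to start and stop the SAP
application tier of a VIS:

PowerShell

# Start the SAP application tier of a Virtual Instance for SAP solutions (VIS)
Start-AzWorkloadsSapVirtualInstance -ResourceGroupName "my-sap-rg" -Name "S4P"

# Stop the SAP application tier; the underlying VMs keep running
Stop-AzWorkloadsSapVirtualInstance -ResourceGroupName "my-sap-rg" -Name "S4P"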

What resources are available to learn more?

Customers and partners can find more information at Microsoft Learn
(https://learn.microsoft.com) and on the SAP on Azure Resources page.

What is the SLA for this service?


Because ACSS is available at no additional cost, it is not eligible for an SLA. Standard
SLAs apply to any other services that are deployed or enabled through Azure Center for
SAP solutions.
Quickstart: Create infrastructure for a
distributed non-high-availability SAP
system with Azure Center for SAP
solutions
Article • 05/15/2023

The Azure PowerShell Az module is used to create and manage Azure resources from
the command line or in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to deploy infrastructure for an SAP system with a
non-highly-available (non-HA) distributed architecture on Azure with Azure Center
for SAP solutions using the Az PowerShell module. Alternatively, you can deploy SAP
systems using the Azure CLI, or in the Azure portal.

After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Azure portal. For example, you can:

View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, operating system
configurations, and so on, based on quality checks that evaluate best practices
for SAP on Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.

Prerequisites
An Azure subscription.

If you are using Azure Center for SAP solutions for the first time, register the
Microsoft.Workloads resource provider on the subscription in which you are
deploying the SAP system. Use Register-AzResourceProvider, as follows:

PowerShell
Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"

An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.

A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription or at least on all resource groups
(Compute, Network, Storage). If you wish to install SAP software through Azure
Center for SAP solutions, also provide the Reader and Data Access role to the
identity on the SAP software storage account where you store the SAP media.

A network set up for your infrastructure deployment.

Availability of a minimum of four cores of either the Standard_D4ds_v4 or
Standard_E4s_v3 SKU, which are used during infrastructure deployment and
software installation.

Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.

Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. There are:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.

Azure Cloud Shell or Azure PowerShell.

The steps in this quickstart run the Azure PowerShell cmdlets interactively in Azure
Cloud Shell. To run the commands in the Cloud Shell, select Open Cloud Shell at
the upper-right corner of a code block. Select Copy to copy the code and then
paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the
Azure portal.
You can also install Azure PowerShell locally to run the cmdlets. The steps in this
article require Azure PowerShell module version 5.4.1 or later. Run Get-Module
-ListAvailable Az to find your installed version. If you need to upgrade, see Update
the Azure PowerShell module.

If you run PowerShell locally, run Connect-AzAccount to connect to Azure.

Right-size the SAP system you want to deploy

Use Invoke-AzWorkloadsSapSizingRecommendation to get SAP system sizing
recommendations by providing the SAPS input for the application tier and the memory
required for the database tier.

PowerShell

Invoke-AzWorkloadsSapSizingRecommendation -Location eastus -AppLocation eastus -DatabaseType HANA -DbMemory 256 -DeploymentType ThreeTier -Environment NonProd -SapProduct S4HANA -Sap 10000 -DbScaleMethod ScaleUp
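
If you want to reuse the recommendation in a script, you can capture the result and
inspect its properties. A minimal sketch reusing the parameters from the example above;
the exact property names on the returned object depend on the Az.Workloads module
version, so list them with Format-List first:

PowerShell

# Capture the sizing recommendation and list every property it returns
$sizing = Invoke-AzWorkloadsSapSizingRecommendation -Location eastus -AppLocation eastus `
    -DatabaseType HANA -DbMemory 256 -DeploymentType ThreeTier -Environment NonProd `
    -SapProduct S4HANA -Sap 10000 -DbScaleMethod ScaleUp
$sizing | Format-List *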

Create json configuration file


Prepare a json file with the payload to use for the deployment of the SAP system
infrastructure. You can make edits in this sample payload or use the examples listed in
the REST API documentation for Azure Center for SAP solutions.
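
Because a malformed payload only surfaces as an error once the deployment starts, it can
help to confirm the file parses as valid JSON first. A minimal sketch, assuming the file
is saved as CreatePayload.json, as in the deployment example below:

PowerShell

# Fail fast on malformed JSON before starting a long-running deployment
Get-Content .\CreatePayload.json -Raw | ConvertFrom-Json | Out-Null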

Deploy infrastructure for your SAP system


Use New-AzWorkloadsSapVirtualInstance to deploy infrastructure for your SAP system
with a three-tier non-HA architecture.

PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' `
    -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' `
    -Configuration .\CreatePayload.json -Tag @{k1 = "v1"; k2 = "v2"} `
    -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" `
    -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}
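
The deployment is a long-running operation. Once the command returns, one way to inspect
the resulting VIS resource is with Get-AzWorkloadsSapVirtualInstance, reusing the resource
group and name from the example above:

PowerShell

# Inspect the VIS resource created by the deployment
Get-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46
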
Next steps
In this quickstart, you deployed infrastructure in Azure for an SAP system using Azure
Center for SAP solutions. Continue to the next article to learn how to install SAP
software on the infrastructure deployed.

Install SAP software


Quickstart: Install software for a
distributed non-high-availability (HA)
SAP system with Azure Center for SAP
solutions using Azure PowerShell
Article • 09/07/2023

The Azure PowerShell AZ module is used to create and manage Azure resources from
the command line or in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to install SAP software on infrastructure deployed for
an SAP system. In the previous step, you created infrastructure for an SAP system with a
non-highly-available (non-HA) distributed architecture on Azure with Azure Center for SAP
solutions using the Az PowerShell module.

After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions. For example, you can:

View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, operating system
configurations, and more, based on quality checks that evaluate best practices for
SAP on Azure.
Get health and status information about your SAP system.
Start and stop the SAP application tier.
Start and stop individual instances of ASCS, App server, and HANA database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.

Prerequisites
An Azure subscription.
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups (compute,
network, storage).
A storage account where you store the SAP media.
Reader and Data Access role for the User-assigned managed identity on the
storage account where you store the SAP media.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
You should have the SAP installation media available in a storage account. For
more information, see how to download the SAP installation media.
The json configuration file that you used to create infrastructure in the previous
step for SAP system using PowerShell or Azure CLI.

Create json configuration file


The json file for installation of SAP software is similar to the one used to deploy
infrastructure for SAP, with an added section for the SAP software configuration.
The software configuration section requires the following inputs:
Software installation type: Keep this as "SAPInstallWithoutOSConfig"
BOM URL: The BOM file path. Example: https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S42022SPS00_v0001ms.yaml
Software version: Azure Center for SAP solutions supports the following SAP
software versions: SAP S/4HANA 1909 SPS03, SAP S/4HANA 2020 SPS 03,
SAP S/4HANA 2021 ISS 00, and SAP S/4HANA 2022
Storage account ID: The resource ID for the storage account where the
BOM file is stored (see the sketch below for one way to look this up)
You can use the sample software installation payload file
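
If you need the storage account resource ID for the payload, one way to look it up with
Az PowerShell; the resource group and account names here are placeholders for your own:

PowerShell

# Resource ID of the storage account that holds the BOM file and SAP media (placeholder names)
(Get-AzStorageAccount -ResourceGroupName 'sap-media-rg' -Name 'sapbitsaccount').Id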

Install SAP software


Use New-AzWorkloadsSapVirtualInstance to install SAP software

PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' `
    -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' `
    -Configuration .\InstallPayload.json -Tag @{k1 = "v1"; k2 = "v2"} `
    -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" `
    -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Next steps
In this quickstart, you installed SAP software on the deployed infrastructure in Azure for
an SAP system using Azure Center for SAP solutions. Continue to the next article to
learn how to manage your SAP system on Azure using the Virtual Instance for SAP solutions.

Manage a Virtual Instance for SAP solutions


Quickstart: Use Azure CLI to create
infrastructure for a distributed highly
available (HA) SAP system with Azure
Center for SAP solutions with
customized resource names
Article • 05/15/2023

The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to use the Azure CLI to deploy infrastructure for an SAP
system with a highly available (HA) three-tier distributed architecture. You also see how
to customize resource names for the Azure infrastructure that gets deployed.
Alternatively, you can deploy SAP systems with customized resource names using the
Azure PowerShell module.

After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Azure portal. For example, you can:

View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, operating system
configurations, and more, based on quality checks that evaluate best practices for
SAP on Azure.
Get health and status information about your SAP system.
Start and stop the SAP application tier.
Start and stop individual instances of ASCS, App server, and HANA database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.

Prerequisites
An Azure subscription.

If you're using Azure Center for SAP solutions for the first time, register the
Microsoft.Workloads resource provider on the subscription in which you're
deploying the SAP system:

Azure CLI

az provider register --namespace 'Microsoft.Workloads'

An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you create the Virtual Instance for SAP solutions (VIS) resource.

A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups (compute,
network, storage). If you want to install SAP software through Azure Center for
SAP solutions, also grant the Reader and Data Access role to the identity on the
storage account where you store the SAP media.

A network set up for your infrastructure deployment.

Availability of a minimum of 4 cores of either the Standard_D4ds_v4 or
Standard_E4s_v3 SKU, which is used during infrastructure deployment and software
installation.

Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.

Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. There are:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.

Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use
through your browser. You can use either Bash or PowerShell with Cloud Shell to work
with Azure services. You can use the Cloud Shell preinstalled commands to run the code
in this article, without having to install anything on your local environment.

To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code or command block. Selecting Try It
doesn't automatically copy the code or command to Cloud Shell.
Go to https://shell.azure.com , or select the Launch Cloud Shell button to open
Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure
portal .

To use Azure Cloud Shell:

1. Start Cloud Shell.

2. Select the Copy button on a code block (or command block) to copy the code or
command.

3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V
on Windows and Linux, or by selecting Cmd+Shift+V on macOS.

4. Select Enter to run the code or command.

Right-size the SAP system you want to deploy

Use az workloads sap-sizing-recommendation to get SAP system sizing
recommendations by providing the SAPS input for the application tier and the memory
required for the database tier.

Azure CLI

az workloads sap-sizing-recommendation --app-location "eastus" --database-type "HANA" --db-memory 1024 --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --saps 75000 --location "eastus2" --db-scale-method ScaleUp

Create json configuration file with custom resource names

Prepare a json file with the configuration (payload) to use for the deployment of
SAP system infrastructure. You can make edits in this sample payload
(https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayloadDistributedNon-HA.json)
or use the examples listed in the REST API documentation for Azure Center for SAP
solutions.
In this json file, provide the custom resource names for the infrastructure that is
deployed for your SAP system.

Deploy infrastructure for your SAP system


Use az workloads sap-virtual-instance create to deploy infrastructure for your SAP
system with a three-tier HA architecture.

Azure CLI

az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"
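
The create command is long-running. Once it completes, one way to confirm the resulting
VIS resource is the show command, reusing the same placeholders:

Azure CLI

az workloads sap-virtual-instance show -g <Resource Group Name> -n <VIS Name>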

Next steps
In this quickstart, you deployed infrastructure in Azure for an SAP system using Azure
Center for SAP solutions. You used custom resource names for the infrastructure.
Continue to the next article to learn how to install SAP software on the infrastructure
deployed.

Install SAP software


Quickstart: Install software for a
Distributed High-Availability (HA) SAP
system and customized resource names
with Azure Center for SAP solutions
using Azure CLI
Article • 05/15/2023

The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to install SAP software on infrastructure deployed for
an SAP system. In the previous step, you created infrastructure for an SAP system with a
highly available (HA) distributed architecture on Azure with Azure Center for SAP
solutions using the Azure CLI. You also provided customized resource names for the
deployed Azure resources.

After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions. For example, you can:

View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, operating system
configurations, and more, based on quality checks that evaluate best practices for
SAP on Azure.
Get health and status information about your SAP system.
Start and stop the SAP application tier.
Start and stop individual instances of ASCS, App server, and HANA database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.

Prerequisites
An Azure subscription.
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups (compute,
network, storage).
A storage account where you store the SAP media.
Reader and Data Access role for the User-assigned managed identity on the
storage account where you store the SAP media.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
You should have the SAP installation media available in a storage account. For
more information, see how to download the SAP installation media.
The json configuration file that you used to create infrastructure in the previous
step for SAP system using PowerShell or Azure CLI.
As you're installing a Highly Available (HA) SAP system, get the Service Principal
identifier (SPN ID) and password to authorize the Azure fence agent (fencing
device) against Azure resources. For more information, see Use Azure CLI to create
an Azure AD app and configure it to access Media Services API.
For an example, see the Red Hat documentation for Creating an Azure Active
Directory Application .
To avoid frequent password expiry, use the Azure Command-Line Interface
(Azure CLI) to create the Service Principal identifier and password instead of the
Azure portal.

Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use
through your browser. You can use either Bash or PowerShell with Cloud Shell to work
with Azure services. You can use the Cloud Shell preinstalled commands to run the code
in this article, without having to install anything on your local environment.

To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code or command block. Selecting Try It
doesn't automatically copy the code or command to Cloud Shell.
Go to https://shell.azure.com , or select the Launch Cloud Shell button to open
Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure
portal .

To use Azure Cloud Shell:

1. Start Cloud Shell.

2. Select the Copy button on a code block (or command block) to copy the code or
command.

3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V
on Windows and Linux, or by selecting Cmd+Shift+V on macOS.

4. Select Enter to run the code or command.

Create json configuration file


The json file for installation of SAP software is similar to the one used to deploy
infrastructure for SAP, with an added section for the SAP software configuration.
The software configuration section requires the following inputs:
Software installation type: Keep this as "SAPInstallWithoutOSConfig"
BOM URL: The BOM file path. Example: https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml
Software version: Azure Center for SAP solutions supports three SAP software
versions: SAP S/4HANA 1909 SPS03, SAP S/4HANA 2020 SPS 03, and SAP
S/4HANA 2021 ISS 00
Storage account ID: The resource ID for the storage account where the
BOM file is stored
As you are deploying an HA system, you need to provide the High Availability
software configuration with the following two inputs (see the sketch after this
list for one way to create the service principal):
Fencing Client ID: The client identifier for the STONITH fencing agent service
principal
Fencing Client Password: The password for the fencing agent service
principal
You can use the sample software installation payload file
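
As a sketch of the service principal creation mentioned in the prerequisites, the
following command creates a service principal; the appId and password in its output map
to the Fencing Client ID and Fencing Client Password. The display name is a placeholder,
and you should scope its role assignments to the minimum your fencing setup requires:

Azure CLI

# Create a service principal for the fencing agent (placeholder display name)
az ad sp create-for-rbac --name "sap-fencing-agent"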

Install SAP software


Use az workloads sap-virtual-instance create to install SAP software
Azure CLI

az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"

Note: The commands for infrastructure deployment and installation are the same, but
the payload files for the two need to be different.

Next steps
In this quickstart, you installed SAP software on the deployed infrastructure in Azure for
an SAP system with a highly available architecture type using Azure Center for SAP
solutions. You also noted that the resource names were customized for the system while
deploying infrastructure. Continue to the next article to learn how to manage your SAP
system on Azure using the Virtual Instance for SAP solutions.

Manage a Virtual Instance for SAP solutions


Quickstart: Register an existing SAP
system with Azure Center for SAP
solutions with PowerShell
Article • 04/10/2024

The Azure PowerShell AZ module is used to create and manage Azure resources from
the command line or in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to register an existing SAP system running on Azure
with Azure Center for SAP solutions using Az PowerShell module. Alternatively, you can
register systems using the Azure CLI, or in the Azure portal.
After you register an SAP system with Azure Center for SAP solutions, you can use its
visualization, management and monitoring capabilities through the Azure portal.

This quickstart requires the Az PowerShell module version 1.0.0 or later. Run Get-Module
-ListAvailable Az to find the version. If you need to install or upgrade, see Install Azure
PowerShell module.

Prerequisites for Registering a system


Check that you're trying to register a supported SAP system configuration

Grant access to Azure Storage accounts from the virtual network where the SAP
system exists. Use one of these options:
Allow outbound internet connectivity for the VMs.
Use a Storage service tag to allow connectivity to any Azure storage account
from the VMs.
Use a Storage service tag with regional scope to allow storage account
connectivity to the Azure storage accounts in the same region as the VMs.
Allowlist the region-specific IP addresses for Azure Storage.

The first time you use Azure Center for SAP solutions, you must register the
Microsoft.Workloads Resource Provider in the subscription where you have the
SAP system with Register-AzResourceProvider, as follows:

PowerShell

Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"


Check that your Azure account has Azure Center for SAP solutions administrator
and Managed Identity Operator or equivalent role access on the subscription or
resource groups where you have the SAP system resources.

A User-assigned managed identity which has Azure Center for SAP solutions
service role access on the Compute resource group and Reader role access on the
Virtual Network resource group of the SAP system. Azure Center for SAP solutions
service uses this identity to discover your SAP system resources and register the
system as a VIS resource.

Make sure ASCS, Application Server and Database virtual machines of the SAP
system are in Running state.

sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database.
File path on Linux VMs: /usr/sap/hostctrl/exe
File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\

Make sure the sapstartsrv process is running on all SAP instances and for the SAP
hostctrl agent on all the VMs in the SAP system.
To start hostctrl sapstartsrv, use this command for Linux VMs: 'hostexecstart -start'
To start instance sapstartsrv, use the command: 'sapcontrol -nr <instanceNr> -function StartService <SID>'
To check the status of hostctrl sapstartsrv, use this command for Windows VMs:
C:\Program Files\SAP\hostctrl\exe\saphostexec -status

For successful discovery and registration of the SAP system, ensure there's network
connectivity between the ASCS, App, and DB VMs. The 'ping' command for the App instance
hostname must succeed from the ASCS VM. A 'ping' for the Database hostname must
succeed from the App server VM.

In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the
right values configured for the discovery and registration of the Database instance details.

Register SAP system


To register an existing SAP system in Azure Center for SAP solutions:

1. Use the New-AzWorkloadsSapVirtualInstance command to register an existing SAP system
as a Virtual Instance for SAP solutions resource:

PowerShell
New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName 'TestRG' `
    -Name L46 `
    -Location eastus `
    -Environment 'NonProd' `
    -SapProduct 'S4HANA' `
    -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
    -Tag @{k1 = "v1"; k2 = "v2"} `
    -ManagedResourceGroupName "acss-L46-rg" `
    -ManagedRgStorageAccountName 'acssstoragel46' `
    -IdentityType 'UserAssigned' `
    -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI'= @{}}

ResourceGroupName is used to specify the name of the existing resource group into
which you want the Virtual Instance for SAP solutions resource to be deployed. It
could be the same resource group in which you have the compute and storage
resources of your SAP system, or a different one.
Name attribute is used to specify the SAP System ID (SID) that you're
registering with Azure Center for SAP solutions.
Location attribute is used to specify the Azure Center for SAP solutions
service location. The following table has the mapping that enables you to choose
the right service location based on where your SAP system infrastructure is
located on Azure.


SAP application location Azure Center for SAP solutions service location

East US East US

East US 2 East US 2

North Central US South Central US

South Central US South Central US

Central US South Central US

West US West US 3

West US 2 West US 2

West US 3 West US 3

West Europe West Europe

North Europe North Europe

Australia East Australia East

Australia Central Australia East

East Asia East Asia

Southeast Asia East Asia

Korea Central Korea Central

Japan East Japan East

Central India Central India

Canada Central Canada Central

Brazil South Brazil South

UK South UK South

Germany West Central Germany West Central

Sweden Central Sweden Central

France Central France Central

Switzerland North Switzerland North

Norway East Norway East

South Africa North South Africa North

UAE North UAE North

Environment is used to specify the type of SAP environment you're registering.
Valid values are NonProd and Prod.
SapProduct is used to specify the type of SAP product you're registering.
Valid values are S4HANA, ECC, Other.
ManagedResourceGroupName is used to specify the name of the managed
resource group that is deployed by the ACSS service in your subscription. This
resource group is unique for each SAP system (SID) you register. If you don't
specify the name, the ACSS service sets a name with the naming convention
'mrg-{SID}-{random string}'.
ManagedRgStorageAccountName is used to specify the name of the storage
account that is deployed into the managed resource group. This storage
account is unique for each SAP system (SID) you register. The ACSS service sets a
default name using the '{SID}{random string}' naming convention.

2. Once you trigger the registration process, you can view its status by getting the
status of the Virtual Instance for SAP solutions resource that gets deployed as part
of the registration process.

PowerShell

Get-AzWorkloadsSapVirtualInstance -ResourceGroupName TestRG -Name L46

Next steps
Monitor SAP system from Azure portal
Manage a VIS
Quickstart: Register an existing SAP
system with Azure Center for SAP
solutions with CLI
Article • 07/21/2023

The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to register an existing SAP system running on Azure
with Azure Center for SAP solutions using Az CLI. Alternatively, you can register systems
using the Azure PowerShell module or in the Azure portal. After you register an SAP
system with Azure Center for SAP solutions, you can use its visualization, management,
and monitoring capabilities through the Azure portal.

This quickstart enables you to register an existing SAP system with Azure Center for SAP
solutions.

Prerequisites for registering a system


Check that you're trying to register a supported SAP system configuration

Grant access to Azure Storage accounts from the virtual network where the SAP
system exists. Use one of these options:
Allow outbound internet connectivity for the VMs.
Use a Storage service tag to allow connectivity to any Azure storage account
from the VMs.
Use a Storage service tag with regional scope to allow storage account
connectivity to the Azure storage accounts in the same region as the VMs.
Allowlist the region-specific IP addresses for Azure Storage.

The first time you use Azure Center for SAP solutions, you must register the
Microsoft.Workloads resource provider in the subscription where you have the
SAP system, as follows:

Azure CLI

az provider register --namespace 'Microsoft.Workloads'


Check that your Azure account has Azure Center for SAP solutions administrator
and Managed Identity Operator or equivalent role access on the subscription or
resource groups where you have the SAP system resources.

A User-assigned managed identity which has Azure Center for SAP solutions
service role access on the Compute resource group and Reader role access on the
Virtual Network resource group of the SAP system. Azure Center for SAP solutions
service uses this identity to discover your SAP system resources and register the
system as a VIS resource.

Make sure ASCS, Application Server and Database virtual machines of the SAP
system are in Running state.

sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database.
File path on Linux VMs: /usr/sap/hostctrl/exe
File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\

Make sure the sapstartsrv process is running on all SAP instances and for the SAP
hostctrl agent on all the VMs in the SAP system.
To start hostctrl sapstartsrv, use this command for Linux VMs: 'hostexecstart -start'
To start instance sapstartsrv, use the command: 'sapcontrol -nr <instanceNr> -function StartService <SID>'
To check the status of hostctrl sapstartsrv, use this command for Windows VMs:
C:\Program Files\SAP\hostctrl\exe\saphostexec -status

For successful discovery and registration of the SAP system, ensure there's network
connectivity between the ASCS, App, and DB VMs. The 'ping' command for the App
instance hostname must succeed from the ASCS VM. A 'ping' for the Database
hostname must succeed from the App server VM.

In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the
right values configured for the discovery and registration of the Database instance details.

Register SAP system


To register an existing SAP system in Azure Center for SAP solutions:

1. Use the az workloads sap-virtual-instance create command to register an existing SAP
system as a Virtual Instance for SAP solutions resource:

Azure CLI
az workloads sap-virtual-instance create -g <Resource Group Name> \
-n C36 \
--environment NonProd \
--sap-product s4hana \
--central-server-vm <Virtual Machine resource ID> \
--identity "{type:UserAssigned,userAssignedIdentities:{<Managed
Identity resource ID>:{}}}" \
    --managed-rg-name "acss-C36"

g is used to specify the name of the existing Resource Group into which you
want the Virtual Instance for SAP solutions resource to be deployed. It could
be the same RG in which you have Compute, Storage resources of your SAP
system or a different one.
n parameter is used to specify the SAP System ID (SID) that you are
registering with Azure Center for SAP solutions.
environment parameter is used to specify the type of SAP environment you
are registering. Valid values are NonProd and Prod.
sap-product parameter is used to specify the type of SAP product you are
registering. Valid values are S4HANA, ECC, Other.
managed-rg-name parameter is used to specify the name of the managed
resource group which is deployed by ACSS service in your Subscription. This
RG is unique for each SAP system (SID) you register. If you do not specify the
name, ACSS service sets a name with this naming convention 'mrg-{SID}-
{random string}'.

2. Once you trigger the registration process, you can view its status by getting the
status of the Virtual Instance for SAP solutions resource that gets deployed as part
of the registration process.

Azure CLI

az workloads sap-virtual-instance show -g <Resource-group-name> -n C36
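
If you only need the registration status rather than the full resource, a JMESPath query
can trim the output. A sketch; the property names here are assumptions, so run the
command without --query first to see the actual shape:

Azure CLI

az workloads sap-virtual-instance show -g <Resource-group-name> -n C36 --query "{name:name, provisioningState:provisioningState}" -o table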

Next steps
Monitor SAP system from Azure portal
Manage a VIS
Quickstart: Start and stop SAP systems
from Azure Center for SAP solutions
with PowerShell
Article • 05/23/2023

The Azure PowerShell AZ module is used to create and manage Azure resources from
the command line or in scripts.

In this how-to guide, you'll learn to start and stop your SAP systems through the Virtual
Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions using
PowerShell.

Through the Azure PowerShell module, you can start and stop:

The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS)
and Application Server instances.
Individual SAP instances, which include Central Services and Application server
instances.
HANA Database
You can start and stop instances in the following types of deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, and on RHEL and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.

Prerequisites
The following are prerequisites that you need to ensure before using the Start or Stop
capability on the Virtual Instance for SAP solutions resource.

An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions as a Virtual Instance for SAP solutions resource.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
The Stop operation for the HANA database can only be initiated when the
cluster maintenance mode is in Disabled status. Similarly, the Start operation can only
be initiated when the cluster maintenance mode is in Enabled status.

Start SAP system


To Start an SAP system represented as a Virtual Instance for SAP solutions resource:

Use the Start-AzWorkloadsSapVirtualInstance command:

Option 1:

Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to start.

PowerShell

Start-AzWorkloadsSapVirtualInstance -Name DB0 -ResourceGroupName db0-vis-rg

Option 2:

Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP
solutions resource you intend to start.

PowerShell

Start-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
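
If you don't have the resource ID handy, one way to read it off the VIS resource itself,
reusing the names from Option 1:

PowerShell

# Look up the VIS resource, then start the system by its resource ID
$vis = Get-AzWorkloadsSapVirtualInstance -ResourceGroupName db0-vis-rg -Name DB0
Start-AzWorkloadsSapVirtualInstance -InputObject $vis.Id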

Stop SAP system


To stop an SAP system represented as a Virtual Instance for SAP solutions resource:
Use the Stop-AzWorkloadsSapVirtualInstance command:

Option 1:

Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to stop.

PowerShell

Stop-AzWorkloadsSapVirtualInstance -Name DB0 -ResourceGroupName db0-vis-rg

Option 2:

Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP
solutions resource you intend to stop.

PowerShell

Stop-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0

Next steps
Monitor SAP system from the Azure portal
Quickstart: Start and stop SAP systems
from Azure Center for SAP solutions
with CLI
Article • 05/23/2023

The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.

In this how-to guide, you'll learn how to start and stop your SAP systems through the
Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions using
the Azure CLI.

Through the Azure CLI, you can start and stop:

The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS)
and Application Server instances.
Individual SAP instances, which include Central Services and Application server
instances.
HANA Database
You can start and stop instances in the following types of deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, and on RHEL and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.

Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions as a Virtual Instance for SAP solutions resource.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
The Stop operation function for the HANA Database can only be initiated when the
cluster maintenance mode is in Disabled status. Similarly, the Start operation
function can only be initiated when the cluster maintenance mode is in Enabled
status.

Start SAP system


To Start an SAP system represented as a Virtual Instance for SAP solutions resource:

Use the az workloads sap-virtual-instance start command:

Option 1:

Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to start.

Azure CLI

az workloads sap-virtual-instance start -g <Resource-group-name> -n <ResourceName>

Option 2:

Use the id parameter and pass the resource ID of the Virtual Instance for SAP solutions
resource you intend to start.

Azure CLI

az workloads sap-virtual-instance start --id <ResourceID>
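
One way to fetch that resource ID without leaving the CLI, reusing the placeholders from
Option 1:

Azure CLI

# Read the VIS resource ID, then start the system by ID
VIS_ID=$(az workloads sap-virtual-instance show -g <Resource-group-name> -n <ResourceName> --query id -o tsv)
az workloads sap-virtual-instance start --id $VIS_ID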

Stop SAP system


To stop an SAP system represented as a Virtual Instance for SAP solutions resource:

Use the az workloads sap-virtual-instance stop command:

Option 1:
Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to stop.

Azure CLI

az workloads sap-virtual-instance stop -g <Resource-group-name> -n <ResourceName>

Option 2:

Use the id parameter and pass the resource ID of the Virtual Instance for SAP solutions
resource you intend to stop.

Azure CLI

az workloads sap-virtual-instance stop --id <ResourceID>

Next steps
Monitor SAP system from the Azure portal
Tutorial: Use Azure CLI to create
infrastructure for a distributed highly
available (HA) SAP system with Azure
Center for SAP solutions with
customized resource names
Article • 05/15/2023

Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions

Introduction
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.

This tutorial shows you how to use Azure CLI to deploy infrastructure for an SAP system
with highly available (HA) Three-tier Distributed architecture. You also see how to
customize resource names for the Azure infrastructure that gets deployed. See the
following steps:

" Complete the pre-requisites


" Understand the SAP SKUs available for your deployment type
" Check for recommended SKUs for SAPS and Memory requirements for your SAP
system
" Create json configuration file with custom resource names
" Deploy infrastructure for your SAP system

Prerequisites
An Azure subscription.

If you're using Azure Center for SAP solutions for the first time, register the
Microsoft.Workloads resource provider on the subscription in which you're
deploying the SAP system:
Azure CLI

az provider register --namespace 'Microsoft.Workloads'

An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you create the Virtual Instance for SAP solutions (VIS) resource.

A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups (compute,
network, storage). If you want to install SAP software through Azure Center for
SAP solutions, also grant the Reader and Data Access role to the identity on the
storage account where you store the SAP media.

A network set up for your infrastructure deployment.

Availability of a minimum of 4 cores of either the Standard_D4ds_v4 or
Standard_E4s_v3 SKU, which is used during infrastructure deployment and software
installation.

Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.

Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. There are:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.

Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use
through your browser. You can use either Bash or PowerShell with Cloud Shell to work
with Azure services. You can use the Cloud Shell preinstalled commands to run the code
in this article, without having to install anything on your local environment.
To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code or command block. Selecting Try It
doesn't automatically copy the code or command to Cloud Shell.
Go to https://shell.azure.com , or select the Launch Cloud Shell button to open
Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure
portal .

To use Azure Cloud Shell:

1. Start Cloud Shell.

2. Select the Copy button on a code block (or command block) to copy the code or
command.

3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V
on Windows and Linux, or by selecting Cmd+Shift+V on macOS.

4. Select Enter to run the code or command.

Understand the SAP certified Azure SKUs available for your deployment type

Use az workloads sap-supported-sku to get a list of SKUs supported for your SAP
system deployment type from Azure Center for SAP solutions.

Azure CLI

az workloads sap-supported-sku --app-location "eastus" --database-type "HANA" --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --location "eastus"

You can use any of these SKUs recommended for App tier and Database tier when
deploying infrastructure in the later steps. Or you can use the recommended SKUs by
Azure Center for SAP solutions in the next step.
Check for recommended SKUs for SAPS and memory requirements for your SAP system

Use az workloads sap-sizing-recommendation to get SAP system sizing
recommendations by providing the SAPS input for the application tier and the memory
required for the database tier.

Azure CLI

az workloads sap-sizing-recommendation --app-location "eastus" --database-type "HANA" --db-memory 1024 --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --saps 75000 --location "eastus2" --db-scale-method ScaleUp

Create json configuration file with custom resource names
Prepare a json file with the configuration (payload) to use for the deployment of
SAP system infrastructure. You can make edits in this sample payload or use the
examples listed in the Rest API documentation for Azure Center for SAP solutions
In this json file, provide the custom resource names for the infrastructure that is
deployed for your SAP system
The parameters available for customization are:
VM Name
Host Name
Network interface name
OS Disk Name
Load Balancer Name
Frontend IP Configuration Names
Backend Pool Names
Health Probe Names
Data Disk Names: default, hanaData or hana/data, hanaLog or hana/log, usrSap
or usr/sap, hanaShared or hana/shared, backup
Shared Storage Account Name
Shared Storage Account Private End Point Name

You can download the sample payload and replace the resource names and any other
parameters as needed.

Deploy infrastructure for your SAP system


Use az workloads sap-virtual-instance create to deploy infrastructure for your SAP
system with a three-tier HA architecture.

Azure CLI

az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"

This will deploy your SAP system and the Virtual instance for SAP solutions (VIS)
resource representing your SAP system in Azure.

Cleanup
If you no longer wish to use the VIS resource, you can delete it by using az workloads
sap-virtual-instance delete

Azure CLI

az workloads sap-virtual-instance delete -g <Resource_Group_Name> -n <VIS Name>

This command only deletes the VIS and other resources created by Azure Center for
SAP solutions. It doesn't delete the deployed infrastructure, such as VMs and disks.

Next steps
In this tutorial, you deployed infrastructure in Azure for an SAP system using Azure
Center for SAP solutions. You used custom resource names for the infrastructure.
Continue to the next article to learn how to install SAP software on the infrastructure
deployed.

Install SAP software


Management of Azure Center for SAP
solutions resources with Azure RBAC
Article • 05/23/2023

Azure role-based access control (Azure RBAC) enables granular access management for
Azure. You can use Azure RBAC to manage Virtual Instance for SAP solutions resources
within Azure Center for SAP solutions. For example, you can separate duties within your
team and grant only the amount of access that users need to perform their jobs.

Users or user-assigned managed identities require minimum roles or permissions to use
the different capabilities in Azure Center for SAP solutions.

There are Azure built-in roles for Azure Center for SAP solutions, or you can create
Azure custom roles for more control. Azure Center for SAP solutions provides the
following built-in roles to deploy and manage SAP systems on Azure:

The Azure Center for SAP solutions administrator role has the required
permissions for a user to deploy infrastructure, install SAP, and manage SAP
systems from Azure Center for SAP solutions. The role allows users to:
Deploy infrastructure for a new SAP system
Install SAP software
Register existing SAP systems as a Virtual Instance for SAP solutions (VIS)
resource.
View the health and status of SAP systems.
Perform operations such as Start and Stop on the VIS resource.
Do all possible actions with Azure Center for SAP solutions, including the
deletion of the VIS resource.
The Azure Center for SAP solutions service role is intended for use by the user-
assigned managed identity. The Azure Center for SAP solutions service uses this
identity to deploy and manage SAP systems. This role has permissions to support
the deployment and management capabilities in Azure Center for SAP solutions.
The Azure Center for SAP solutions reader role has permissions to view all VIS
resources.

7 Note

To use an existing user-assigned managed identity for deploying a new SAP system
or registering an existing system, the user must also have the Managed Identity
Operator role. This role is required to assign a user-assigned managed identity to
the Virtual Instance for SAP solutions resource.
7 Note

If you're creating a new user-assigned managed identity when you deploy a new
SAP system or register an existing system, the user must also have the Managed
Identity Contributor and Managed Identity Operator roles. These roles are
required to create a user-assigned identity, make necessary role assignments to it
and assign it to the VIS resource.
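
For illustration, granting the service role to an existing user-assigned managed identity
at resource group scope might look like the following Az PowerShell sketch; the identity
and resource group names are placeholders:

PowerShell

# Grant the ACSS service role to a user-assigned managed identity (placeholder names)
$identity = Get-AzUserAssignedIdentity -ResourceGroupName 'sap-identity-rg' -Name 'acss-msi'
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName 'Azure Center for SAP solutions service role' `
    -ResourceGroupName 'sap-compute-rg'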

Deploy infrastructure for new SAP system


To deploy infrastructure for a new SAP system, a user and a user-assigned managed
identity require the following roles or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Managed Identity Operator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/write

Microsoft.Workloads/Operations/read

Microsoft.Workloads/Locations/OperationStatuses/read

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action

Microsoft.Resources/subscriptions/resourcegroups/deployments/read

Microsoft.Resources/subscriptions/resourcegroups/deployments/write

Microsoft.Network/virtualNetworks/read

Microsoft.Network/virtualNetworks/subnets/read

Microsoft.Network/virtualNetworks/subnets/write

Microsoft.Compute/sshPublicKeys/write

Microsoft.Compute/sshPublicKeys/read

Microsoft.Compute/sshPublicKeys /*/generateKeyPair/action

Microsoft.Storage/storageAccounts/read

Microsoft.Storage/storageAccounts/blobServices/read

Microsoft.Storage/storageAccounts/blobServices/containers/read

Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read

Microsoft.Storage/storageAccounts/fileServices/read

Microsoft.Storage/storageAccounts/fileServices/shares/read

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/disks/read

Microsoft.Compute/disks/write

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/write

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/extensions/delete

Microsoft.Compute/virtualMachines/instanceView/read

Microsoft.Compute/availabilitySets/read

Microsoft.Compute/availabilitySets/write

Microsoft.Network/loadBalancers/read

Microsoft.Network/loadBalancers/write

Microsoft.Network/loadBalancers/backendAddressPools/read

Microsoft.Network/loadBalancers/backendAddressPools/write

Microsoft.Network/loadBalancers/backendAddressPools/join/action

Microsoft.Network/loadBalancers/frontendIPConfigurations/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action

Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/write

Microsoft.Network/networkInterfaces/read

Microsoft.Network/networkInterfaces/write

Microsoft.Network/networkInterfaces/join/action

Microsoft.Network/networkInterfaces/ipconfigurations/read

Microsoft.Network/networkInterfaces/ipconfigurations/join/action

Microsoft.Network/privateEndpoints/read

Microsoft.Network/privateEndpoints/write

Microsoft.Network/virtualNetworks/read

Microsoft.Network/virtualNetworks/subnets/read

Microsoft.Network/virtualNetworks/subnets/joinLoadBalancer/action

Microsoft.Network/virtualNetworks/subnets/join/action

Microsoft.Storage/storageAccounts/read

Microsoft.Storage/storageAccounts/write

Microsoft.Storage/storageAccounts/listAccountSas/action

Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action

Microsoft.Storage/storageAccounts/blobServices/read

Microsoft.Storage/storageAccounts/blobServices/containers/read

Microsoft.Storage/storageAccounts/fileServices/read

Microsoft.Storage/storageAccounts/fileServices/write

Microsoft.Storage/storageAccounts/fileServices/shares/read

Microsoft.Storage/storageAccounts/fileServices/shares/write

Install SAP software


To install SAP software, a user and a user-assigned managed identity require the
following roles or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/write

Microsoft.Workloads/sapVirtualInstances/applicationInstances/read

Microsoft.Workloads/sapVirtualInstances/centralInstances/read

Microsoft.Workloads/sapVirtualInstances/databaseInstances/read

Microsoft.Workloads/sapVirtualInstances/read

Microsoft.Workloads/Operations/read

Microsoft.Workloads/Locations/OperationStatuses/read

Microsoft.Storage/storageAccounts/read

Microsoft.Storage/storageAccounts/blobServices/read

Microsoft.Storage/storageAccounts/blobServices/containers/read

Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read

Microsoft.Storage/storageAccounts/fileServices/read

Microsoft.Storage/storageAccounts/fileServices/shares/read

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Reader and Data Access


Minimum permissions for user-assigned managed identities

Microsoft.Compute/disks/read

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/disks/write

Microsoft.Compute/virtualMachines/write

Microsoft.Compute/virtualMachines/extensions/delete

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Microsoft.Network/loadBalancers/read

Microsoft.Network/loadBalancers/backendAddressPools/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read

Microsoft.Network/networkInterfaces/read

Microsoft.Network/networkInterfaces/ipconfigurations/read

Microsoft.Network/privateEndpoints/read

Microsoft.Network/virtualNetworks/read

Microsoft.Network/virtualNetworks/subnets/read

Microsoft.Storage/storageAccounts/read

Microsoft.Storage/storageAccounts/listAccountSas/action

Microsoft.Storage/storageAccounts/blobServices/containers/read

Microsoft.Storage/storageAccounts/fileServices/read

Microsoft.Storage/storageAccounts/fileServices/shares/read

Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read

Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action

Microsoft.Storage/storageAccounts/write

Microsoft.Storage/storageAccounts/listAccountSas/action

Microsoft.Storage/storageAccounts/fileServices/write

Microsoft.Storage/storageAccounts/fileServices/shares/write

Register and manage existing SAP system


To register an existing SAP system and manage that system with Azure Center for SAP
solutions, a user or user-assigned managed identity requires the following role or
permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Managed Identity Operator

Minimum permissions for users

Microsoft.Workloads/sapvirtualInstances/*/read

Microsoft.Workloads/sapVirtualInstances/*/write

Microsoft.Workloads/Locations/*/read

Microsoft.Resources/subscriptions/resourceGroups/read

Microsoft.Resources/subscriptions/read

Microsoft.Compute/virtualMachines/read

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/disks/read

Microsoft.Compute/disks/write

Microsoft.Compute/virtualMachines/write

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Microsoft.Network/loadBalancers/read

Microsoft.Network/loadBalancers/backendAddressPools/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/read

Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read

Microsoft.Network/networkInterfaces/read

Microsoft.Network/networkInterfaces/ipconfigurations/read

Microsoft.Network/virtualNetworks/read

Microsoft.Network/virtualNetworks/subnets/read

Microsoft.Resources/subscriptions/resourceGroups/write

Microsoft.Resources/subscriptions/resourceGroups/read

Microsoft.Resources/subscriptions/read

Microsoft.Resources/subscriptions/resourcegroups/deployments/*

Microsoft.Resources/tags/*
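
If the built-in roles are broader than you want, you can wrap the minimum user permissions listed above in a custom role. The following Azure CLI sketch is illustrative only: the role name, file name, subscription ID, and user are placeholders, and the Managed Identity Operator role still needs to be assigned separately.

Azure CLI

# Save the minimum user permissions for registering an SAP system as a custom
# role definition (register-sap-role.json is a hypothetical file name).
cat > register-sap-role.json <<'EOF'
{
  "Name": "SAP registration operator (custom)",
  "Description": "Minimum permissions to register and manage an existing SAP system.",
  "Actions": [
    "Microsoft.Workloads/sapVirtualInstances/*/read",
    "Microsoft.Workloads/sapVirtualInstances/*/write",
    "Microsoft.Workloads/Locations/*/read",
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Resources/subscriptions/read",
    "Microsoft.Compute/virtualMachines/read"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

# Create the role definition, then assign it to a user at subscription scope.
az role definition create --role-definition @register-sap-role.json
az role assignment create \
    --assignee "user@contoso.com" \
    --role "SAP registration operator (custom)" \
    --scope "/subscriptions/<subscription-id>"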

View VIS resources


To view VIS resources, a user or user-assigned managed identity requires the following
role or permissions.

Built-in roles for users

Azure Center for SAP solutions reader

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/applicationInstances/read

Microsoft.Workloads/sapVirtualInstances/centralInstances/read

Microsoft.Workloads/sapVirtualInstances/databaseInstances/read

Microsoft.Workloads/sapVirtualInstances/read

Microsoft.Workloads/Operations/read

Microsoft.Workloads/Locations/OperationStatuses/read

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action

Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action

Microsoft.Insights/Metrics/Read

Microsoft.ResourceHealth/AvailabilityStatuses/read

Microsoft.Advisor/configurations/read

Microsoft.Advisor/recommendations/read

Built-in roles for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Minimum permissions for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Start SAP system


To start the SAP system from a VIS resource, a user or user-assigned managed identity
requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/start/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Stop SAP system


To stop the SAP system from a VIS resource, a user or user-assigned managed identity
requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/stop/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read
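
Once a user or identity holds these permissions, the system can be started and stopped from the VIS resource by script as well as from the portal. As a minimal sketch, assuming the workloads Azure CLI extension is installed and that X00 and TestRG are placeholder names:

Azure CLI

# Start the SAP system represented by the Virtual Instance for SAP solutions (VIS).
az workloads sap-virtual-instance start --sap-virtual-instance-name X00 --resource-group TestRG

# Stop the SAP system from the same VIS resource.
az workloads sap-virtual-instance stop --sap-virtual-instance-name X00 --resource-group TestRG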

Start SAP Central services instance


To start the SAP Central services instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/centralInstances/start/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Stop SAP Central services instance


To stop the SAP Central services instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/centralInstances/stop/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Start SAP Application server instance


To start the SAP Application server instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/applicationInstances/start/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Stop SAP Application server instance


To stop the SAP Application server instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/applicationInstances/stop/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Start SAP HANA Database instance


To start the SAP HANA Database instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/databaseInstances/start/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read

Stop SAP HANA Database instance


To stop the SAP HANA Database instance from a VIS resource, a user or user-assigned
managed identity requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/databaseInstances/stop/action

Built-in roles for user-assigned managed identities

Azure Center for SAP solutions service role

Minimum permissions for user-assigned managed identities

Microsoft.Compute/virtualMachines/read

Microsoft.Compute/virtualMachines/extensions/read

Microsoft.Compute/virtualMachines/extensions/write

Microsoft.Compute/virtualMachines/instanceView/read
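
The instance-level start and stop scenarios above can also be scripted. The sketch below assumes the workloads Azure CLI extension; the instance names shown (X00-ascs0 and so on) are placeholders, and the exact command and parameter names should be verified with az workloads --help for your CLI version:

Azure CLI

# Start or stop the Central services instance of the VIS named X00.
az workloads sap-central-instance start --name X00-ascs0 --sap-virtual-instance-name X00 --resource-group TestRG
az workloads sap-central-instance stop --name X00-ascs0 --sap-virtual-instance-name X00 --resource-group TestRG

# Start or stop an Application server instance.
az workloads sap-application-server-instance start --name X00-app0 --sap-virtual-instance-name X00 --resource-group TestRG

# Start or stop the HANA Database instance.
az workloads sap-database-instance stop --name X00-hdb0 --sap-virtual-instance-name X00 --resource-group TestRG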

View cost analysis


To view the cost analysis, a user requires the following role or permissions.

Built-in roles for users

Cost Management Reader

Minimum permissions for users

Microsoft.Consumption/*/read

Microsoft.CostManagement/*/read

Microsoft.Billing/billingPeriods/read

Microsoft.Resources/subscriptions/read

Microsoft.Resources/subscriptions/resourceGroups/read

Microsoft.Billing/billingProperty/read

Built-in roles for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Minimum permissions for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

View Quality Insights


To view Quality Insights, a user requires the following role or permissions.

Built-in roles for users

Azure Center for SAP solutions reader

Minimum permissions for users

None, except the minimum role assignment.

Built-in roles for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Minimum permissions for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Set up Azure Monitor for SAP solutions


To set up Azure Monitor for SAP solutions for your SAP resources, a user requires the
following role or permissions.

Built-in roles for users

Contributor

Minimum permissions for users

None, except the minimum role assignment.

Built-in roles for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Minimum permissions for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Delete VIS resource


To delete a VIS resource, a user or user-assigned managed identity requires the following
role or permissions.

Built-in roles for users

Azure Center for SAP solutions administrator

Minimum permissions for users

Microsoft.Workloads/sapVirtualInstances/delete

Microsoft.Workloads/sapVirtualInstances/read

Microsoft.Workloads/sapVirtualInstances/applicationInstances/read

Microsoft.Workloads/sapVirtualInstances/centralInstances/read

Microsoft.Workloads/sapVirtualInstances/databaseInstances/read

Built-in roles for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.

Minimum permissions for user-assigned managed identities

This scenario isn't applicable to user-assigned managed identities.
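
For reference, a user holding the permissions above can also delete the VIS from the command line. A minimal sketch, assuming the workloads Azure CLI extension and placeholder names:

Azure CLI

# Delete the VIS resource named X00 in the resource group TestRG.
az workloads sap-virtual-instance delete --sap-virtual-instance-name X00 --resource-group TestRG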


Next steps
Manage VIS resources in Azure Center for SAP solutions
What is reliability in Azure Center for
SAP Solutions?
Article • 05/15/2023

This article describes reliability support in Azure Center for SAP Solutions, and covers
both regional resiliency with availability zones and cross-region resiliency with customer
enabled disaster recovery. For a more detailed overview of reliability in Azure, see Azure
reliability.

Azure Center for SAP solutions is an end-to-end solution that enables you to create and
run SAP systems as a unified workload on Azure and provides a more seamless
foundation for innovation. You can take advantage of the management capabilities for
both new and existing Azure-based SAP systems.

Availability zone support


Azure availability zones are at least three physically separate groups of datacenters
within each Azure region. Datacenters within each zone are equipped with independent
power, cooling, and networking infrastructure. Availability zones are designed so that if
one zone is affected, the remaining two zones can support regional services, capacity,
and high availability. Failures can range from software and hardware failures to events
such as earthquakes, floods, and fires.
Tolerance to failures is achieved with redundancy and logical isolation of Azure services.
For more detailed information on availability zones in Azure, see Availability zone service
and regional support.

There are three types of Azure services that support availability zones: zonal, zone-
redundant, and always-available services. You can learn more about these types of
services and how they promote resiliency in the Azure services with availability zone
support.

Azure Center for SAP Solutions supports zone redundancy, and the service itself is
zone-redundant by default. When creating a new SAP system through Azure Center for
SAP solutions, you can choose the Compute availability option for the infrastructure
being deployed and, based on your requirements, deploy the SAP system with zone
redundancy. Learn more about deployment type options for SAP systems here.

Regional availability
When deploying SAP systems using Azure Center for SAP solutions, you can use Zone-
redundant Premium plans in the following regions:

| Americas  | Europe       | Asia Pacific   |
|-----------|--------------|----------------|
| East US 2 | North Europe | Australia East |
| East US   | West Europe  | Central India  |
| West US 3 |              | East Asia      |

Prerequisites for ensuring resiliency in Azure Center for SAP solutions

Choose zone redundancy for the SAP workload that you deploy using Azure Center for
SAP solutions, based on your requirements.
Zone redundancy for the SAP system infrastructure that you deploy using Azure
Center for SAP solutions can only be chosen when creating the Virtual Instance for
SAP solutions (VIS) resource. Once the VIS resource is created and the infrastructure is
deployed, you can't change the underlying infrastructure configuration to zone
redundant.

Deploy an SAP system with availability zone enabled

This section explains how you can deploy an SAP system with Zone redundancy from
the Azure portal. You can also use PowerShell and CLI interfaces to deploy a zone
redundant SAP system with Azure Center for SAP solutions. Learn more about deploying
a new SAP system using Azure Center for SAP solutions.

1. Open the Azure portal and navigate to the Azure Center for SAP solutions page.

2. On the Basics page, pay special attention to the following field, which has specific
requirements for zone redundancy.

| Setting         | Suggested value                         | Notes for zone redundancy                                            |
|-----------------|-----------------------------------------|----------------------------------------------------------------------|
| Deployment Type | Distributed with High Availability (HA) | Choose the Availability-Zone configuration for Compute Availability. |

3. No other input fields in the rest of the process affect zone redundancy. You can
proceed with creating the system as per the deployment guide.

Zone down experience


If you deploy the SAP system infrastructure with zone redundancy, then in the event of
a zone outage the SAP workload fails over to the secondary virtual machines, and you
can access the system without interruption.

Disaster recovery: cross-region failover


The Azure Center for SAP solutions service is zone redundant but has no paired region,
so it may experience downtime during a regional outage, and there's no Microsoft-
initiated failover. This article explains some of the strategies that you can use to achieve
cross-region resiliency for Virtual Instance for SAP solutions resources with customer
enabled disaster recovery. It has detailed steps for you to follow when the region that
hosts your Virtual Instance for SAP solutions resource is down.

| Case # | ACSS Service Region | SAP Workload Region | Scenario | Mitigation Steps |
|--------|---------------------|---------------------|----------|------------------|
| Case 1 | A (Down) | B | ACSS service region is down | Register the workload with the ACSS service available in another region using PowerShell or CLI, which lets you select an available service location. |
| Case 2 | A | B (Down) | SAP workload region is down | 1. Customers should perform workload failover to the DR region (outside of ACSS). 2. Register the failed-over workload with ACSS using PowerShell or CLI. |
| Case 3 | A (Down) | B (Down) | Both ACSS service and SAP workload regions are down | 1. Customers should perform workload failover to the DR region (outside of ACSS). 2. Register the failed-over workload with the ACSS service available in another region using PowerShell or CLI, which lets you select an available service location. |

Outage detection, notification, and management


When the service goes down in a region, customers are notified through Azure
Communications. Customers can also check the service health page in the Azure portal,
and can configure notifications on service health by following the steps to create a
service health alert.
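
For example, a service health alert can be scripted with the Azure CLI. This is a minimal sketch: the alert and action group names are placeholders, and you should verify the parameters against your CLI version.

Azure CLI

# Create an action group that emails the operations team (placeholder values).
az monitor action-group create --name acss-ops-ag --resource-group TestRG \
    --action email ops ops@contoso.com

# Create an activity log alert that fires on service health events in the subscription.
az monitor activity-log alert create --name acss-service-health --resource-group TestRG \
    --scope "/subscriptions/<subscription-id>" \
    --condition "category=ServiceHealth" \
    --action-group acss-ops-ag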

Capacity and proactive disaster recovery resiliency


You need to plan the capacity for your workload in the DR region.
Next steps
Resiliency in Azure
Customer enabled disaster recovery in
Azure Center for SAP solutions
Article • 05/15/2023

The Azure Center for SAP solutions service is zone redundant but has no paired region,
so it may experience downtime during a regional outage, and there's no Microsoft-
initiated failover. This article explains some of the strategies that you can use to achieve
cross-region resiliency for Virtual Instance for SAP solutions resources with customer
enabled disaster recovery. It has detailed steps for you to follow when the region that
hosts your Virtual Instance for SAP solutions resource is down.

You must configure disaster recovery for the SAP systems that you deploy using Azure
Center for SAP solutions by following the Disaster recovery overview and infrastructure
guidelines for SAP workload.

In case of a region outage, customers will be notified about it. This article has the steps
you can follow to get the Virtual Instance for SAP solutions resources up and running in
a different region.

Prerequisites for customer enabled disaster recovery in Azure Center for SAP solutions

Configure disaster recovery for your SAP system, whether it was deployed using Azure
Center for SAP solutions or otherwise, by following the Disaster recovery overview and
infrastructure guidelines for SAP workload.

Region Down Scenarios and Mitigation Steps:


| Case # | ACSS Service Region | SAP Workload Region | Scenario | Mitigation Steps |
|--------|---------------------|---------------------|----------|------------------|
| Case 1 | A (Down) | B | ACSS service region is down | Register the workload with the ACSS service available in another region using PowerShell or CLI, which lets you select an available service location. |
| Case 2 | A | B (Down) | SAP workload region is down | 1. Customers should perform workload failover to the DR region (outside of ACSS). 2. Register the failed-over workload with ACSS using PowerShell or CLI. |
| Case 3 | A (Down) | B (Down) | Both ACSS service and SAP workload regions are down | 1. Customers should perform workload failover to the DR region (outside of ACSS). 2. Register the failed-over workload with the ACSS service available in another region using PowerShell or CLI, which lets you select an available service location. |

Steps to re-register the SAP system with Azure Center for SAP solutions in case of
regional outage:
1. In case the region where your SAP workload exists is down (cases 2 and 3 in the
preceding section), perform workload failover to the DR region (outside of ACSS) and
have the workload running in a secondary region.

2. In case the Azure Center for SAP solutions service is down in the region where your
Virtual Instance for SAP solutions resource exists (cases 1 and 3 in the preceding
section), register your SAP system with another available region.

Azure PowerShell

New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName 'TestRG' `
    -Name L46 `
    -Location eastus `
    -Environment 'NonProd' `
    -SapProduct 'S4HANA' `
    -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
    -Tag @{k1 = "v1"; k2 = "v2"} `
    -ManagedResourceGroupName "acss-L46-rg" `
    -ManagedRgStorageAccountName 'acssstoragel46' `
    -IdentityType 'UserAssigned' `
    -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI' = @{}}
3. The following table lists the locations where the Azure Center for SAP solutions
service is available. It's recommended to choose a region within the same geography
as your SAP infrastructure resources.

Azure Center for SAP solutions service locations

East US

East US 2

West US 3

West Europe

North Europe

Australia East

East Asia

Central India
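
If you script with the Azure CLI rather than PowerShell, a roughly equivalent registration call looks like the following sketch. It assumes the workloads CLI extension is installed; the resource IDs and names mirror the placeholders in the PowerShell example above, and the parameter names should be verified against your CLI version.

Azure CLI

az workloads sap-virtual-instance create \
    --resource-group TestRG \
    --name L46 \
    --environment NonProd \
    --sap-product S4HANA \
    --central-server-vm-id "/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm" \
    --managed-rg-name "acss-L46-rg" \
    --identity "{type:UserAssigned,userAssignedIdentities:{/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI:{}}}"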

Next steps
Deploy a new SAP system with Azure Center for SAP solutions
Prepare network for infrastructure
deployment
Article • 10/12/2023

In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4HANA
infrastructure using Azure Center for SAP solutions. This article provides general
guidance about creating a virtual network. Your individual environment and use case will
determine how you need to configure your own network settings for use with a Virtual
Instance for SAP (VIS) resource.

If you have an existing network that you're ready to use with Azure Center for SAP
solutions, go to the deployment guide instead of following this guide.

Prerequisites
An Azure subscription.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
It's recommended to have multiple IP addresses in the subnet or subnets before
you begin deployment. For example, it's always better to have a /26 mask instead
of /29 .
The names AzureFirewallSubnet, AzureFirewallManagementSubnet,
AzureBastionSubnet, and GatewaySubnet are reserved within Azure. Don't use
them as subnet names.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. There are:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.

Create network
You must create a network for the infrastructure deployment on Azure. Make sure to
create the network in the same region that you want to deploy the SAP system.

Some of the required network components are:

A virtual network
Subnets for the Application Servers and Database Servers. Your configuration
needs to allow communication between these subnets.
Azure network security groups
Route tables
Firewalls (or NAT Gateway)

For more information, see the example network configuration.

Connect network
At a minimum, the network needs to have outbound internet connectivity for successful
infrastructure deployment and software installation. The application and database
subnets also need to be able to communicate with each other.

If internet connectivity isn't possible, allowlist the IP addresses for the following areas:

SUSE or Red Hat endpoints
Azure Storage accounts
Azure Key Vault
Microsoft Entra ID
Azure Resource Manager

Then, make sure all resources within the virtual network can connect to each other. For
example, configure a network security group to allow resources within the virtual
network to communicate by listening on all ports.

Set the Source port ranges to *.


Set the Destination port ranges to *.
Set the Action to Allow.
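
For example, an inbound network security group rule that lets all resources in the virtual network listen on all ports might look like the following Azure CLI sketch, where the resource group and NSG names are placeholders:

Azure CLI

az network nsg rule create --resource-group TestRG --nsg-name sap-nsg \
    --name AllowVnetInBound --priority 100 --direction Inbound --access Allow \
    --protocol '*' --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
    --destination-address-prefixes VirtualNetwork --destination-port-ranges '*'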

If it's not possible to allow the resources within the virtual network to connect to each
other, allow connections between the application and database subnets, and open
important SAP ports in the virtual network instead.

Allowlist SUSE or Red Hat endpoints


If you're using SUSE for the VMs, allowlist the SUSE endpoints. For example:
1. Create a VM with any OS using the Azure portal or using Azure Cloud Shell. Or,
install openSUSE Leap from the Microsoft Store and enable WSL.
2. Install pip3 by running zypper install python3-pip .
3. Install the pip package susepubliccloudinfo by running pip3 install
susepubliccloudinfo .

4. Get a list of IP addresses to configure in the network and firewall by running pint
microsoft servers --json --region with the appropriate Azure region parameter.

5. Allowlist all these IP addresses on the firewall or network security group where
you're planning to attach the subnets.
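
As a convenience, steps 2 through 4 can be combined into a single shell session. This sketch assumes an openSUSE host; westeurope is a placeholder for your Azure region:

Bash

# Install pip and the SUSE public cloud info tool.
sudo zypper install python3-pip
pip3 install susepubliccloudinfo

# List the IP addresses of SUSE's Azure update infrastructure for one region.
pint microsoft servers --json --region westeurope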

If you're using Red Hat for the VMs, allowlist the Red Hat endpoints as needed. The
default allowlist is the Azure Global IP addresses. Depending on your use case, you
might also need to allowlist Azure US Government or Azure Germany IP addresses.
Configure all IP addresses from your list on the firewall or the network security group
where you want to attach the subnets.

Allowlist storage accounts


Azure Center for SAP solutions needs access to the following storage accounts to install
SAP software correctly:

The storage account where you're storing the SAP media that is required during
software installation.
The storage account created by Azure Center for SAP solutions in a managed
resource group, which Azure Center for SAP solutions also owns and manages.

There are multiple options to allow access to these storage accounts:

Allow internet connectivity


Configure a Storage service tag
Configure Storage service tags with regional scope. Make sure to configure tags
for the Azure region where you're deploying the infrastructure, and where the
storage account with the SAP media exists.
Allowlist the regional Azure IP ranges .

Allowlist Key Vault


Azure Center for SAP solutions creates a key vault to store and access the secret keys
during software installation. This key vault also stores the SAP system password. To allow
access to this key vault, you can:

Allow internet connectivity


Configure an AzureKeyVault service tag
Configure an AzureKeyVault service tag with regional scope. Make sure to
configure the tag in the region where you're deploying the infrastructure.

Allowlist Microsoft Entra ID


Azure Center for SAP solutions uses Microsoft Entra ID to get the authentication token
for obtaining secrets from a managed key vault during SAP installation. To allow access
to Microsoft Entra ID, you can:

Allow internet connectivity


Configure an AzureActiveDirectory service tag.

Allowlist Azure Resource Manager


Azure Center for SAP solutions uses a managed identity for software installation.
Managed identity authentication requires a call to the Azure Resource Manager
endpoint. To allow access to this endpoint, you can:

Allow internet connectivity


Configure an AzureResourceManager service tag.
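
If you're using a network security group rather than a firewall, the service tags named in the preceding sections can be allowed with outbound rules. The following Azure CLI sketch uses placeholder resource group and NSG names and illustrative priorities; repeat the same pattern for the AzureActiveDirectory and AzureResourceManager tags:

Azure CLI

# Allow outbound traffic to the Storage service tag from the SAP subnets.
az network nsg rule create --resource-group TestRG --nsg-name sap-nsg \
    --name Allow-Storage --priority 117 --direction Outbound --access Allow \
    --protocol '*' --source-address-prefixes '*' --source-port-ranges '*' \
    --destination-address-prefixes Storage --destination-port-ranges '*'

# Allow outbound traffic to the AzureKeyVault service tag.
az network nsg rule create --resource-group TestRG --nsg-name sap-nsg \
    --name Allow-AzureKeyVault --priority 118 --direction Outbound --access Allow \
    --protocol '*' --source-address-prefixes '*' --source-port-ranges '*' \
    --destination-address-prefixes AzureKeyVault --destination-port-ranges '*'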

Open important SAP ports


If you're unable to allow connection between all resources in the virtual network as
previously described, you can open important SAP ports in the virtual network instead.
This method allows resources within the virtual network to listen on these ports for
communication purposes. If you're using more than one subnet, these settings also
allow connectivity within the subnets.

Open the SAP ports listed in the following table. Replace the placeholder values ( xx ) in
applicable ports with your SAP instance number. For example, if your SAP instance
number is 01 , then 32xx becomes 3201 .

| SAP service | Port range | Allow incoming traffic | Allow outgoing traffic | Purpose |
|-------------|------------|------------------------|------------------------|---------|
| Host Agent | 1128, 1129 | Yes | Yes | HTTP/S port for the SAP host agent. |
| Web Dispatcher | 32xx | Yes | Yes | SAPGUI and RFC communication. |
| Gateway | 33xx | Yes | Yes | RFC communication. |
| Gateway (secured) | 48xx | Yes | Yes | RFC communication. |
| Internet Communication Manager (ICM) | 80xx, 443xx | Yes | Yes | HTTP/S communication for SAP Fiori, WEB GUI. |
| Message server | 36xx, 81xx, 444xx | Yes | No | Load balancing; ASCS to app servers communication; GUI sign-in; HTTP/S traffic to and from message server. |
| Control agent | 5xx13, 5xx14 | Yes | No | Stop, start, and get status of SAP system. |
| SAP installation | 4237 | Yes | No | Initial SAP installation. |
| HTTP and HTTPS | 5xx00, 5xx01 | Yes | Yes | HTTP/S server port. |
| IIOP | 5xx02, 5xx03, 5xx07 | Yes | Yes | Service request port. |
| P4 | 5xx04-6 | Yes | Yes | Service request port. |
| Telnet | 5xx08 | Yes | No | Service port for management. |
| SQL communication | 3xx13, 3xx15, 3xx40-98 | Yes | No | Database communication port with application, including ABAP or JAVA subnet. |
| SQL server | 1433 | Yes | No | Default port for MS-SQL in SAP; required for ABAP or JAVA database communication. |
| HANA XS engine | 43xx, 80xx | Yes | Yes | HTTP/S request port for web content. |

Example network configuration


The configuration process for an example network might include:

1. Create a virtual network, or use an existing virtual network.


2. Create the following subnets inside the virtual network:

a. An application tier subnet.

b. A database tier subnet.

c. A subnet for use with the firewall, named AzureFirewallSubnet.

3. Create a new firewall resource:

a. Attach the firewall to the virtual network.

b. Create a rule to allowlist RHEL or SUSE endpoints. Make sure to allow all source
IP addresses ( * ), set the source port to Any, allow the destination IP addresses
for RHEL or SUSE, and set the destination port to Any.

c. Create a rule to allow service tags. Make sure to allow all source IP addresses
( * ), set the destination type to Service tag. Then, allow the tags
Microsoft.Storage, Microsoft.KeyVault, AzureResourceManager and
Microsoft.AzureActiveDirectory.

4. Create a route table resource:

a. Add a new route of the type Virtual Appliance.

b. Set the IP address to the firewall's IP address, which you can find on the
overview of the firewall resource in the Azure portal.

5. Update the subnets for the application and database tiers to use the new route
table (see the CLI sketch after the rule tables below).

6. If you're using a network security group with the virtual network, add the following
inbound rule. This rule provides connectivity between the subnets for the
application and database tiers.

| Priority | Port | Protocol | Source          | Destination     | Action |
|----------|------|----------|-----------------|-----------------|--------|
| 100      | Any  | Any      | virtual network | virtual network | Allow  |

7. If you're using a network security group instead of a firewall, add outbound rules
to allow installation.

| Priority | Port | Protocol | Source | Destination               | Action |
|----------|------|----------|--------|---------------------------|--------|
| 110      | Any  | Any      | Any    | SUSE or Red Hat endpoints | Allow  |
| 115      | Any  | Any      | Any    | Azure Resource Manager    | Allow  |
| 116      | Any  | Any      | Any    | Microsoft Entra ID        | Allow  |
| 117      | Any  | Any      | Any    | Storage accounts          | Allow  |
| 118      | 8080 | Any      | Any    | Key vault                 | Allow  |
| 119      | Any  | Any      | Any    | virtual network           | Allow  |
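
Steps 4 and 5 of this example can also be scripted. A minimal Azure CLI sketch, where the names and the firewall IP address 10.0.3.4 are placeholders:

Azure CLI

# Create a route table and a default route that sends traffic through the firewall.
az network route-table create --resource-group TestRG --name sap-rt
az network route-table route create --resource-group TestRG --route-table-name sap-rt \
    --name to-firewall --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4

# Associate the route table with the application and database tier subnets.
az network vnet subnet update --resource-group TestRG --vnet-name sap-vnet \
    --name app-subnet --route-table sap-rt
az network vnet subnet update --resource-group TestRG --vnet-name sap-vnet \
    --name db-subnet --route-table sap-rt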

Next steps
Deploy S/4HANA infrastructure
Deploy S/4HANA infrastructure with
Azure Center for SAP solutions
Article • 01/29/2024

In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in Azure Center
for SAP solutions. There are three deployment options: distributed with High Availability
(HA), distributed non-HA, and single server.

Prerequisites
An Azure subscription
Register the Microsoft.Workloads Resource Provider on the subscription in which
you are deploying the SAP system.
An Azure account with Contributor role access to the subscriptions and resource
groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A user-assigned managed identity with Contributor role access on the subscription,
or at least on all related resource groups (compute, network, storage). If you wish
to install SAP software through Azure Center for SAP solutions, also grant the
identity the Storage Blob Data Reader and Reader and Data Access roles on the
storage account where you store the SAP media.
A network set up for your infrastructure deployment.
Availability of a minimum of 4 cores of either the Standard_D4ds_v4 or
Standard_E4s_v3 SKUs, which are used during infrastructure deployment and
software installation.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. There are:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
Deployment types
There are three deployment options that you can select for your infrastructure,
depending on your use case.

Distributed with High Availability (HA) creates distributed HA architecture. This


option is recommended for production environments. If you choose this option,
you need to select a High Availability SLA. Select the appropriate SLA for your use
case:
99.99% (Optimize for availability) shows available zone pairs for VM
deployment. The first zone is primary and the next is secondary. Active ASCS
and Database servers are deployed in the primary zone. Passive ASCS and
Database servers are deployed in the secondary zone. Application servers are
deployed evenly across both zones. This option isn't shown in regions without
availability zones, or without at least one M-series and E-series VM SKU
available in the zonal pairs within that region.
99.95% (Optimize for cost) shows three availability sets for all instances. The
HA ASCS cluster is deployed in the first availability set. All Application servers
are deployed across the second availability set. The HA Database server is
deployed in the third availability set. No availability zone names are shown.
Distributed creates distributed non-HA architecture.
Single Server creates architecture with a single server. This option is available for
non-production environments only.

Supported software
Azure Center for SAP solutions supports the following SAP software versions: S/4HANA
1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 and S/4HANA 2022 ISS 00.

The following operating system (OS) software versions are compatible with these SAP
software versions:

| Publisher | Image and Image Version | Supported SAP Software Version |
|-----------|-------------------------|--------------------------------|
| Red Hat | Red Hat Enterprise Linux 8.6 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00 |
| Red Hat | Red Hat Enterprise Linux 8.4 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00 |
| Red Hat | Red Hat Enterprise Linux 8.2 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00 |
| SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 15 SP4 - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00 |
| SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 15 SP3 - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00 |
| SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP5 - x64 Gen2 latest | S/4HANA 1909 SPS 03 |
| SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP4 - x64 Gen2 latest | S/4HANA 1909 SPS 03 |

You can use latest if you want to use the latest image rather than a specific older
version. If the latest image version is newly released in the marketplace and has an
unforeseen issue, the deployment might fail. If you're deploying through the portal,
we recommend choosing a different image SKU train (for example, 12-SP4 instead
of 15-SP3) until the issue is resolved. If you're deploying via the API or CLI, you can
provide any other available image version. To view the available image versions
from a publisher, use the following commands:

PowerShell

$locName = "eastus"
$pubName = "RedHat"
$offerName = "RHEL-SAP-HA"
$skuName = "82sapha-gen2"

Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Sku $skuName | Select Version

Azure Center for SAP solutions now supports deployment of SAP system VMs with
custom OS images along with the Azure Marketplace images. For deployment
using custom OS images, follow the steps here.

Create deployment
1. Sign in to the Azure portal .

2. In the search bar, enter and select Azure Center for SAP solutions.

3. On the Azure Center for SAP solutions landing page, select Create a new SAP
system.

4. On the Create Virtual Instance for SAP solutions page, on the Basics tab, fill in the
details for your project.

a. For Subscription, select the Azure subscription into which you're deploying the
infrastructure.

b. For Resource group, select the resource group for all resources that the VIS
creates.

5. Under Instance details, enter the details for your SAP instance.

a. For Name enter the three-character SAP system identifier (SID). The VIS uses the
same name as the SID.

b. For Region, select the Azure region into which you're deploying the resources.

c. For Environment type, select whether your environment is production or non-


production. If you select Production, you can deploy a distributed HA or non-
HA S/4HANA system. It's recommended to use distributed HA deployments for
production systems. If you select Non-production, you can use a single-server
deployment.

d. For SAP product, keep the selection as S/4HANA.

e. For Database, keep the selection as HANA.

f. For HANA scale method, keep the selection as Scale up.

g. For Deployment type, select and configure your deployment type.

h. For Network, select the network you created previously with subnets.

i. For Application subnet and Database subnet, map the IP address ranges as
required. It's recommended to use a different subnet for each deployment. The
names AzureFirewallSubnet, AzureFirewallManagementSubnet,
AzureBastionSubnet, and GatewaySubnet are reserved within Azure; don't use
them as subnet names.

6. Under Operating systems, select the source of the image.


7. If you're using Azure Marketplace OS images, use these settings:

a. For Application OS image, select the OS image for the application server.

b. For Database OS image, select the OS image for the database server.

c. If you're using custom OS images, use these settings:

i. For Application OS image, select the image version from the Azure Compute
Gallery.

ii. For Database OS image, select the image version from the Azure Compute
Gallery.

8. Under Administrator account, enter your administrator account details.

a. For Authentication type, keep the setting as SSH public.

b. For Username, enter an SAP administrator username.

c. For SSH public key source, select a source for the public key. You can choose to
generate a new key pair, use an existing key stored in Azure, or use an existing
public key stored on your local computer. If you don't have keys already saved,
it's recommended to generate a new key pair.

d. For Key pair name, enter a name for the key pair.

e. If you choose to use an Existing public key stored in Azure, select the key in
the Stored Keys input.

f. Provide the corresponding SSH private key from a local file stored on your
computer, or copy-paste the private key.

g. If you choose to use an Existing public key, you can either provide the SSH
public key from a local file stored on your computer or copy-paste the public key.

h. Provide the corresponding SSH private key from a local file stored on your
computer, or copy-paste the private key.

9. Under SAP Transport Directory, enter how you want to set up the transport
directory on this SID. This is applicable for Distributed with High Availability and
Distributed deployments only.

a. For SAP Transport Options, you can choose to Create a new SAP transport
Directory or Use an existing SAP transport Directory or completely skip the
creation of transport directory by choosing Don't include SAP transport
directory option. Currently, only NFS on AFS storage account fileshares is
supported.

b. If you choose to Create a new SAP transport Directory, this will create and
mount a new transport fileshare on the SID. By Default, this option will create an
NFS on AFS storage account and a transport fileshare in the resource group
where SAP system will be deployed. However, you can choose to create this
storage account in a different resource group by providing the resource group
name in Transport Resource Group. You can also provide a custom name for
the storage account to be created under Storage account name section.
Leaving the Storage account name blank will create the storage account with the
service default name "<SIDname>nfs<random characters>" in the chosen transport
resource group. Creating a new transport directory uses ZRS-based replication
for zonal deployments and LRS-based replication for non-zonal deployments. If
your region doesn't support ZRS replication, deploying a zonal
VIS will lead to a failure. In such cases, you can deploy a transport fileshare
outside Azure Center for SAP Solutions with ZRS replication and then create a
zonal VIS where you select Use an existing SAP transport Directory to mount
the pre-created fileshare.

c. If you choose to Use an existing SAP transport Directory, select the pre-existing
NFS fileshare under the File share name option. The existing transport fileshare
will only be mounted on this SID. The selected fileshare must be in the same
region as the SAP system being created; file shares in a different region can't
currently be selected. Under the Private Endpoint option, provide the private
endpoint associated with the storage account where the selected fileshare exists.

d. You can skip the creation of the transport fileshare by selecting the Don't include
SAP transport directory option. The transport fileshare will be neither created nor
mounted for this SID.

10. Under Configuration Details, enter the FQDN for your SAP system.
a. For SAP FQDN, provide only the domain name for your system, such as
"sap.contoso.com".

11. Under User assigned managed identity, provide the identity which Azure Center
for SAP solutions will use to deploy infrastructure.

a. For Managed identity source, choose if you want the service to create a new
managed identity or you can instead use an existing identity. If you wish to
allow the service to create a managed identity, acknowledge the checkbox
which asks for your consent for the identity to be created and the contributor
role access to be added for all resource groups.

b. For Managed identity name, enter a name for a new identity you want to create
or select an existing identity from the drop down menu. If you are selecting an
existing identity, it should have Contributor role access on the Subscription or
on Resource Groups related to this SAP system you are trying to deploy. That is,
it requires Contributor access to the SAP application Resource Group, Virtual
Network Resource Group and Resource Group which has the existing SSHKEY. If
you wish to later install the SAP system using Azure Center for SAP Solutions,
we also recommend giving the Storage Blob Data Reader and Reader and Data
Access roles on the Storage Account which has the SAP software media.

12. Under Managed resource settings, choose the network settings for the managed
storage account deployed into your subscription. This storage account is required
for ACSS to orchestrate the deployment of the new SAP system and to power the
SAP management capabilities.
a. For Storage account network access, select Enable access from specific virtual
network for enhanced network security access for the managed storage
account. This option ensures that this storage account is accessible only from
the virtual network in which the SAP system exists.

Important

To use the secure network access option, you must enable Microsoft.Storage
service endpoint on the Application and Database subnets. You can learn
more about storage account network security in this documentation. Private
endpoint on managed storage account is not currently supported in this
scenario.

When you choose to limit network access to specific virtual networks, Azure Center
for SAP solutions service accesses this storage account using trusted access based
on the managed identity associated with the VIS resource.

13. Select Next: Virtual machines.

14. In the Virtual machines tab, generate SKU size and total VM count
recommendations for each SAP instance from Azure Center for SAP solutions.

a. For Generate Recommendation based on, under Get virtual machine


recommendations, select SAP Application Performance Standard (SAPS).
b. For SAPS for application tier, provide the total SAPS for the application tier. For
example, 30,000.

c. For Memory size for database (GiB), provide the total memory size required for
the database tier. For example, 1024. The value must be greater than zero, and
less than or equal to 11,400.

d. Select Generate Recommendation.

e. Review the VM size and count recommendations for ASCS, Application Server,
and Database instances.

f. To change a SKU size recommendation, select the drop-down menu or select


See all sizes. Filter the list or search for your preferred SKU.

g. To change the Application server count, enter a new count for Number of VMs
under Application virtual machines.

The number of VMs for ASCS and Database instances aren't editable. The
default number for each is 2.

Azure Center for SAP solutions automatically configures a database disk layout
for the deployment. To view the layout for a single database server, make sure
to select a VM SKU. Then, select View disk configuration. If there's more than
one database server, the layout applies to each server.

15. Select Next: Visualize Architecture.

16. In the Visualize Architecture tab, visualize the architecture of the VIS that you're
deploying.

a. To view the visualization, make sure to configure all the inputs listed on the tab.

b. Optionally, click and drag resources or containers to move them around visually.

c. Click on Reset to reset the visualization to its default state. That is, revert any
changes you might have made to the position of resources or containers.

d. Click on Scale to fit to reset the visualization to its default zoom level.

e. Click on Zoom in to zoom into the visualization.

f. Click on Zoom out to zoom out of the visualization.

g. Click on Download JPG to export the visualization as a JPG file.

h. Click on Feedback to share your feedback on the visualization experience.


The visualization doesn't represent all resources for the VIS that you're
deploying; for instance, it doesn't represent disks and NICs.

i. Select Next: Tags.

17. Optionally, enter tags to apply to all resources created by the Azure Center for SAP
solutions process. These resources include the VIS, ASCS instances, Application
Server instances, Database instances, VMs, disks, and NICs.

18. Select Review + Create.

19. Review your settings before deployment.

a. Make sure the validations have passed, and there are no errors listed.

b. Review the Terms of Service, and select the acknowledgment if you agree.

c. Select Create.

20. Wait for the infrastructure deployment to complete. Numerous resources are
deployed and configured. This process takes approximately 7 minutes.

Using a Custom OS Image


You can use custom images for deployment in Azure Center for SAP Solutions from the
Azure Compute Gallery.

Custom image prerequisites


Make sure that you've met the general SAP deployment prerequisites, downloaded
the SAP media, and installed the SAP software.

Before you use an image from Azure Marketplace for customization, check the list
of supported OS image versions in Azure Center for SAP Solutions. BYOI is
supported on the OS version supported by Azure Center for SAP Solutions. Make
sure that Azure Center for SAP Solutions has support for the image, or else the
deployment will fail with the following error: The resource ID provided consists of an
OS image which is not supported in ACSS. Please ensure that the OS image version is
supported in ACSS for a successful installation.

Refer to SAP installation documentation to ensure the operating system


prerequisites are met for the deployment to be successful.
Check that the user-assigned managed identity has the Reader role on the gallery
of the custom OS image. Otherwise, the deployment will fail.

Create and upload a VM to a gallery in Azure Compute Gallery

Before beginning the deployment, make sure the image is available in Azure
Compute Gallery.

Verify that the image is in same subscription as the deployment.

Check that the image VM is of the Standard security type.

Deploying using Custom Operating System Image


Select the Use a custom image option during deployment. Choose which image to
use for the application and database OS.

Azure Center for SAP Solutions validates the base operating system version of the
custom OS Image is available in the supportability matrix in Azure Center for SAP
Solutions. If the versions are unsupported, the deployment fails. To fix this
problem, delete the VIS and infrastructure resources from the resource group, then
deploy again with a supported image.

Make sure the image version that you're using is compatible with the SAP software
version.

Confirm deployment
To confirm a deployment is successful:

1. In the Azure portal , search for and select Virtual Instances for SAP solutions.

2. On the Virtual Instances for SAP solutions page, select the Subscription filter, and
choose the subscription where you created the deployment.

3. In the table of records, find the name of the VIS. The Infrastructure column value
shows Deployed for successful deployments.

If the deployment fails, delete the VIS resource in the Azure portal, then recreate the
infrastructure.
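
You can also confirm the state from the command line. A sketch assuming the workloads Azure CLI extension and placeholder names; inspect the state fields in the JSON output:

Azure CLI

az workloads sap-virtual-instance show --sap-virtual-instance-name X00 --resource-group TestRG --output jsonc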

Next steps
Install SAP software on your infrastructure
Get SAP installation media
Article • 09/07/2023

After you've created infrastructure for your new SAP system using Azure Center for SAP
solutions, you need to install the SAP software on your SAP system. However, before you
can do this installation, you need to get and upload the SAP installation media for use
with Azure Center for SAP solutions.

In this how-to guide, you'll learn how to get the SAP software installation media through
different methods. You'll also learn how to upload the SAP media to an Azure Storage
account to prepare for installation.

Prerequisites
An Azure subscription.
An Azure account with Contributor role access to the subscriptions and resource
groups in which the Virtual Instance for SAP solutions exists.
A User-assigned managed identity with Storage Blob Data Reader or Reader and
Data Access roles on the storage account which has the SAP software.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
If you're installing a Highly Available (HA) SAP system, get the Service Principal
identifier (SPN ID) and password to authorize the Azure fence agent (fencing
device) against Azure resources.
For more information, see Use Azure CLI to create an Azure AD app and
configure it to access Media Services API.
For an example, see the Red Hat documentation for Creating an Azure Active
Directory Application .
To avoid frequent password expiry, use the Azure Command-Line Interface
(Azure CLI) to create the Service Principal identifier and password instead of the
Azure portal.

Required components
The following components are necessary for the SAP installation.

SAP software installation media (part of the sapbits container described later in
this article)
All essential SAP packages (SWPM, SAPCAR, etc.)
SAP software (for example, S/4HANA 2021 ISS 00)
Supporting software packages for the installation process. (These packages are
downloaded automatically and used by Azure Center for SAP solutions during the
installation.)
pip3 version pip-21.3.1.tar.gz
wheel version 0.38.1

jq version 1.6
ansible version 2.11.12

netaddr version 0.8.0

The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions.
These YAML files list all required SAP packages for the SAP software installation.
There's a main BOM ( S41909SPS03_v0011ms.yaml , S42020SPS03_v0003ms.yaml ,
S4HANA_2021_ISS_v0001ms.yaml , S42022SPS00_v0001ms.yaml ) and dependent BOMs

( HANA_2_00_059_v0004ms.yaml , HANA_2_00_064_v0001ms.yaml ,
SUM20SP15_latest.yaml , SWPM20SP13_latest.yaml ). They provide the following

information:
The full name of the SAP package ( name )
The package name with its file extension as downloaded ( archive )
The checksum of the package as specified by SAP ( checksum )
The shortened filename of the package ( filename )
The SAP URL to download the software ( url )
Template or INI files, which are stack XML files required to run the SAP packages.

Scripted upload method


To prepare for SAP installation, you can upload the SAP components to your Azure
Storage account using script.

Set up storage account


Before downloading the SAP software, set up an Azure Storage account to store the
components.

1. Create an Azure Storage account through the Azure portal. Make sure to create the
storage account in the same subscription as your SAP system infrastructure.

2. Create a container within the Azure Storage account named sapbits .

a. On the storage account's sidebar menu, select Containers under Data storage.
b. Select + Container.

c. On the New container pane, for Name, enter sapbits .

d. Select Create.

3. Grant the User-assigned managed identity, which was used during infrastructure
deployment, Storage Blob Data Reader and Reader and Data Access role access
on this storage account.
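
The same setup can be scripted with the Azure CLI. A sketch with placeholder names; <identity-principal-id> stands for the principal ID of your user-assigned managed identity:

Azure CLI

# Create the storage account and the sapbits container.
az storage account create --name sapbitsstore --resource-group TestRG --location eastus --sku Standard_LRS
az storage container create --name sapbits --account-name sapbitsstore --auth-mode login

# Grant the user-assigned managed identity read access to the SAP media.
storage_id=$(az storage account show --name sapbitsstore --resource-group TestRG --query id --output tsv)
az role assignment create --assignee "<identity-principal-id>" --role "Storage Blob Data Reader" --scope "$storage_id"
az role assignment create --assignee "<identity-principal-id>" --role "Reader and Data Access" --scope "$storage_id"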

Create virtual machine


Next, set up a virtual machine (VM) where you will download the SAP components later.

1. Create an Ubuntu 20.04 VM in Azure. For more information, see how to create a
Linux VM in the Azure portal.

2. Sign in to the VM.

3. Install the Azure CLI on the VM.

Bash

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

4. If the Azure CLI version isn't 2.30.0 or higher, update the Azure CLI. You can run
the following command to check the version:
Azure CLI

az --version

5. Sign in to Azure.

Azure CLI

az login

6. Install PIP3

Bash

sudo apt install python3-pip


7. Install Ansible 2.11.12 on the VM.

Bash

sudo pip3 install ansible-core==2.11.12

8. Install Ansible galaxy collection modules

Bash

sudo ansible-galaxy collection install ansible.netcommon:==5.0.0 -p /opt/ansible/collections
sudo ansible-galaxy collection install ansible.posix:==1.5.1 -p /opt/ansible/collections
sudo ansible-galaxy collection install ansible.utils:==2.9.0 -p /opt/ansible/collections
sudo ansible-galaxy collection install ansible.windows:==1.13.0 -p /opt/ansible/collections
sudo ansible-galaxy collection install community.general:==6.4.0 -p /opt/ansible/collections

9. Clone the SAP automation samples repository from GitHub.

git

git clone https://github.com/Azure/SAP-automation-samples.git

10. Clone the SAP automation repository from GitHub.

git

git clone https://github.com/Azure/sap-automation.git

11. Switch to the sap-automation directory.

git

cd sap-automation/

12. Change the branch to main .

git

git checkout main


13. Optionally, check that your current branch is main .

git

git status

Download SAP media with script


Next, download the SAP installation media to the VM using a script.

1. Run the Ansible playbook playbook_bom_downloader with your own information. With
the exception of the s_password variable, enter the actual values within double
quotes but without the angle brackets. For the s_password variable, use single
quotes. The Ansible command that you run should look like:

Bash

export bom_base_name="<Enter bom base name>"
export s_user="<s-user>"
export s_user="<s-user>"
export s_password='<password>'
export storage_account_access_key="<storageAccountAccessKey>"
export sapbits_location_base_path="<containerBasePath>"
export BOM_directory="<BOM_directory_path>"
export orchestration_ansible_user="root"
export playbook_path="<playbook_bom_downloader_yaml_path>"
sudo ansible-playbook ${playbook_path} \
-e "bom_base_name=${bom_base_name}" \
-e "deployer_kv_name=dummy_value" \
-e "s_user=${s_user}" \
-e "s_password=${s_password}" \
-e "sapbits_access_key=${storage_account_access_key}" \
-e "sapbits_location_base_path=${sapbits_location_base_path}" \
-e "BOM_directory=${BOM_directory}" \
-e "orchestration_ansible_user=${orchestration_ansible_user}"

2. If prompted whether you have a storage account, enter Y.

3. For <playbook_bom_downloader_yaml_path>, use the absolute path to
sap-automation/deploy/ansible/playbook_bom_downloader.yaml, for example
/home/loggedinusername/sap-automation/deploy/ansible/playbook_bom_downloader.yaml.

4. For <bom_base_name> , use the SAP version you want to install, for example
S41909SPS03_v0011ms , S42020SPS03_v0003ms , S4HANA_2021_ISS_v0001ms ,
or S42022SPS00_v0001ms .
5. For <s_user> , use your SAP username.

6. For <s_password> , use your SAP password.

7. For <storageAccountAccessKey> , use your storage account's access key. To find the
storage account's key:

a. Find the storage account in the Azure portal that you created.

b. On the storage account's sidebar menu, select Access keys under Security +
networking.

c. For key1, select Show key.

d. Copy the Key value.
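
Alternatively, you can read key1 from the CLI; the account and resource group
names are placeholders.

Azure CLI

# Print the value of key1 for the storage account.
az storage account keys list --account-name <storage-account-name> \
  --resource-group <resource-group> --query "[0].value" --output tsv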

8. For <containerBasePath> , use the path to your sapbits container. To find the
container path:

a. Find the storage account that you created in the Azure portal.

b. Find the container named sapbits .

c. On the container's sidebar menu, select Properties under Settings.

d. Copy the URL value. The format is https://<your-storage-
account>.blob.core.windows.net/sapbits .
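
You can also derive the container base path from the CLI by appending the
container name to the blob endpoint; the account and resource group names are
placeholders.

Azure CLI

# Print the container base path, for example https://<your-storage-account>.blob.core.windows.net/sapbits
echo "$(az storage account show --name <storage-account-name> \
  --resource-group <resource-group> \
  --query primaryEndpoints.blob --output tsv)sapbits"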

9. For BOM_directory_path , use the absolute path to SAP-automation-samples/SAP.
For example, /home/loggedinusername/SAP-automation-samples/SAP .

10. For orchestration_ansible_user , use a user with admin privileges, for
example root .

Now you can install the SAP software through Azure Center for SAP solutions.

Manual upload method


To prepare for SAP installation, you can upload the SAP components to your Azure
Storage account manually.

Set up storage account manually


First, set up an Azure Storage account for the SAP components:
7 Note

Don't change the folder name structure for any steps in this process. Otherwise, the
installation process fails.

1. Create a new Azure Storage account for storing the software components.

2. Grant the roles Storage Blob Data Reader and Reader and Data Access to the
user-assigned managed identity, which you used during infrastructure deployment.

3. Create a container within the storage account. You can choose any container name,
such as sapbits .

4. Create a folder within the container, named sapfiles .

5. Go to the sapfiles folder.

6. Create two subfolders named archives and boms .

7. In the boms folder, create four subfolders with the following names, depending on
the SAP version that you're using:

a. For S/4HANA 1909 SPS 03:

i. HANA_2_00_059_v0003ms

ii. S41909SPS03_v0011ms

iii. SWPM20SP12_latest

iv. SUM20SP14_latest

b. For S/4HANA 2020 SPS 03:

i. HANA_2_00_064_v0001ms

ii. S42020SPS03_v0003ms

iii. SWPM20SP12_latest

iv. SUM20SP14_latest

c. For S/4HANA 2021 ISS 00:

i. HANA_2_00_064_v0001ms

ii. S4HANA_2021_ISS_v0001ms
iii. SWPM20SP12_latest

iv. SUM20SP14_latest

d. For S/4HANA 2022 ISS 00:

i. HANA_2_00_071_v0001ms

ii. S42022SPS00_v0001ms

iii. SWPM20SP15_latest

iv. SUM20SP17_latest
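
Blob containers have no true directories; the folder layout above is expressed
through blob name prefixes, so it's created implicitly when you upload. If you
script the uploads instead of using the portal, here's a minimal sketch with
placeholder names.

Azure CLI

# Uploading to a prefixed blob name creates the folder structure implicitly.
az storage blob upload \
  --account-name <storage-account-name> \
  --container-name sapbits \
  --name sapfiles/boms/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml \
  --file ./S41909SPS03_v0011ms.yaml \
  --auth-mode login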

Upload SAP media


Next, upload the SAP software files to the storage account:

1. Upload the following YAML files to the folders with the same name. Make sure to
use the files that correspond to the SAP version that you're using.

a. For S/4HANA 1909 SPS 03:

i. S41909SPS03_v0011ms.yaml

ii. HANA_2_00_059_v0004ms.yaml

b. For S/4HANA 2020 SPS 03:

i. S42020SPS03_v0003ms.yaml

ii. HANA_2_00_064_v0001ms.yaml

c. For S/4HANA 2021 ISS 00:

i. S4HANA_2021_ISS_v0001ms.yaml

ii. HANA_2_00_064_v0001ms.yaml

d. For S/4HANA 2022 ISS 00:

i. S42022SPS00_v0001ms.yaml

ii. HANA_2_00_071_v0001ms.yaml

2. Depending on your SAP version, go to the folder S41909SPS03_v0011ms ,
S42020SPS03_v0003ms , S4HANA_2021_ISS_v0001ms , or S42022SPS00_v0001ms .
3. Create a subfolder named templates.

4. Download the following files, depending on your SAP version.

a. For S/4HANA 1909 SPS 03:

i. HANA_2_00_055_v1_install.rsp.j2

ii. S41909SPS03_v0011ms-app-inifile-param.j2

iii. S41909SPS03_v0011ms-dbload-inifile-param.j2

iv. S41909SPS03_v0011ms-ers-inifile-param.j2

v. S41909SPS03_v0011ms-generic-inifile-param.j2

vi. S41909SPS03_v0011ms-pas-inifile-param.j2

vii. S41909SPS03_v0011ms-scs-inifile-param.j2

viii. S41909SPS03_v0011ms-scsha-inifile-param.j2

ix. S41909SPS03_v0011ms-web-inifile-param.j2

b. For S/4HANA 2020 SPS 03:

i. HANA_2_00_055_v1_install.rsp.j2

ii. HANA_2_00_install.rsp.j2

iii. S42020SPS03_v0003ms-app-inifile-param.j2

iv. S42020SPS03_v0003ms-dbload-inifile-param.j2

v. S42020SPS03_v0003ms-ers-inifile-param.j2

vi. S42020SPS03_v0003ms-generic-inifile-param.j2

vii. S42020SPS03_v0003ms-pas-inifile-param.j2

viii. S42020SPS03_v0003ms-scs-inifile-param.j2

ix. S42020SPS03_v0003ms-scsha-inifile-param.j2

c. For S/4HANA 2021 ISS 00:

i. HANA_2_00_055_v1_install.rsp.j2

ii. HANA_2_00_install.rsp.j2
iii. NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB

iv. NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params

v. NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params

vi. NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params

vii. NW_Users_Create-GENERIC.HDB.PD_Distributed.params

viii. S4HANA_2021_ISS_v0001ms-app-inifile-param.j2

ix. S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2

x. S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2

xi. S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2

xii. S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2

xiii. S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2

xiv. S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2

xv. S4HANA_2021_ISS_v0001ms-web-inifile-param.j2

d. For S/4HANA 2022 ISS 00:

i. S42022SPS00_v0001ms-app-inifile-param.j2

ii. S42022SPS00_v0001ms-dbload-inifile-param.j2

iii. S42022SPS00_v0001ms-ers-inifile-param.j2

iv. S42022SPS00_v0001ms-generic-inifile-param.j2

v. S42022SPS00_v0001ms-pas-inifile-param.j2

vi. S42022SPS00_v0001ms-scs-inifile-param.j2

vii. S42022SPS00_v0001ms-scsha-inifile-param.j2

viii. S42022SPS00_v0001ms-web-inifile-param.j2

5. Upload all the files that you downloaded to the templates folder.

6. Go back to the sapfiles folder, then go to the archives subfolder.


7. Download all packages that aren't labeled download: false in the main BOM
file. Choose the packages based on your SAP version. You can use the URL
mentioned in the BOM to download each package. Make sure to download the
exact package versions listed in each BOM.

a. For S/4HANA 1909 SPS 03:

i. S41909SPS03_v0011ms.yaml

ii. HANA_2_00_059_v0004ms.yaml

b. For S/4HANA 2020 SPS 03:

i. S42020SPS03_v0003ms.yaml

ii. HANA_2_00_064_v0001ms.yaml

c. For S/4HANA 2021 ISS 00:

i. S4HANA_2021_ISS_v0001ms.yaml

ii. HANA_2_00_064_v0001ms.yaml

d. For S/4HANA 2022 ISS 00:

i. S42022SPS00_v0001ms.yaml

ii. HANA_2_00_071_v0001ms.yaml

8. Repeat the previous step for the main and dependent BOM files.

9. Upload all the packages that you downloaded to the archives folder. Don't
rename the files.

10. Optionally, add other packages that aren't required by default.

a. Download the package files.

b. Upload the files to the archives folder.

c. Open the YAML file for the BOM ( S41909SPS03_v0011ms , S42020SPS03_v0003ms ,
S4HANA_2021_ISS_v0001ms , or S42022SPS00_v0001ms ).

d. For each optional package that you want to include, set download: true .

e. Save and reupload the YAML file. Make sure you only have one YAML file in the
subfolder ( S41909SPS03_v0011ms , S42020SPS03_v0003ms ,
S4HANA_2021_ISS_v0001ms , or S42022SPS00_v0001ms ) of the boms folder.

Now you can install the SAP software through Azure Center for SAP solutions.

Next steps
Install the SAP software through Azure Center for SAP solutions
Install SAP software
Article • 05/15/2023

After you've created infrastructure for your new SAP system using Azure Center for SAP
solutions, you need to install the SAP software.

In this how-to guide, you'll learn two ways to install the SAP software for your system.
Choose whichever method is appropriate for your use case. You can either:

Install the SAP software through Azure Center for SAP solutions directly using the
installation wizard.
Install the SAP software outside of Azure Center for SAP solutions, then detect the
installed system from the service.

Prerequisites
Review the prerequisites for your preferred installation method: through the Azure
Center for SAP solutions installation wizard, or through an outside method.

Prerequisites for wizard installation


An Azure subscription.
An Azure account with Contributor role access to the subscriptions and resource
groups in which the Virtual Instance for SAP solutions exists.
A user-assigned managed identity with Storage Blob Data Reader and Reader and
Data Access roles on the Storage Account which has the SAP software.
A network set up for your SAP deployment.
A deployment of S/4HANA infrastructure.
If you are installing an SAP System through Azure Center for SAP solutions, you
should have the SAP installation media available in a storage account. For more
information, see how to download the SAP installation media.
If you're installing a Highly Available (HA) SAP system, get the Service Principal
identifier (SPN ID) and password to authorize the Azure fence agent (fencing
device) against Azure resources. For more information, see Use Azure CLI to create
an Azure AD app and configure it to access Media Services API.
For an example, see the Red Hat documentation for Creating an Azure Active
Directory Application .
To avoid frequent password expiry, use the Azure Command-Line Interface
(Azure CLI) to create the Service Principal identifier and password instead of the
Azure portal.
Prerequisites for outside installation
An Azure subscription.
An Azure account with Contributor role access to the subscriptions and resource
groups in which the Virtual Instance for SAP solutions exists.
A user-assigned managed identity that you created during infrastructure
deployment with Contributor role access on the subscription, or on all resource
groups (compute, network and storage) that the SAP System is a part of.
Infrastructure for the SAP system that you previously created through Azure Center
for SAP solution. Don't make any changes to this infrastructure.
An SAP System (and underlying infrastructure resources) that is up and running.
Optionally, you can add fully installed application servers to the system before
detecting the SAP software; then, the SAP system with additional application
servers will also be detected.
If you add additional application servers to this Virtual Instance for SAP
solutions after infrastructure deployment, the previously created user-assigned
managed identity also needs Contributor role access on the subscription or on
the resource group under which this new application server exists.
The number of application virtual machines installed should not be less than the
number created during the infrastructure deployment phase in Azure Center for
SAP solutions. You can still detect additional application servers.

Only the following scenarios are supported for this installation method:

Infrastructure for S/4HANA was created through Azure Center for SAP solutions.
The S/4HANA application was installed outside Azure Center for SAP solutions
through a different tool.
Only an S/4HANA installation done outside Azure Center for SAP solutions can be
detected. If you have installed a different SAP application than S/4HANA, the
detection will fail.
If you want a fresh installation of S/4HANA software on the infrastructure deployed
by Azure Center for SAP solutions, use the wizard installation option instead.

Install SAP with Azure Center for SAP solutions


To install the SAP software directly, use the Azure Center for SAP solutions installation
wizard.

1. Sign in to the Azure portal .

2. Search for and select Virtual Instance for SAP solutions.


3. Select your Virtual Instance for SAP solutions instance.

4. On the Overview page for the Virtual Instance for SAP solutions resource, select
Install SAP software.

5. In the Prerequisites tab of the wizard, review the prerequisites. Then, select Next.

6. On the Software tab, provide information about your SAP media.

a. For Have you uploaded the software to an Azure storage account?, select Yes.

b. For Software version, select SAP S/4HANA 1909 SPS 03 , SAP S/4HANA
2020 SPS 03 , or SAP S/4HANA 2021 ISS 00 . Only the versions that are
supported with the OS version used to deploy the infrastructure are
available for selection.

c. For BOM directory location, select Browse and find the path to your BOM file.
For example, https://<your-storage-
account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml .

d. For High Availability (HA) systems only, enter the client identifier for the
STONITH Fencing Agent service principal for Fencing client ID.

e. For High Availability (HA) systems only, enter the password for the Fencing
Agent service principal for Fencing client password.

f. Select Next.

7. On the Review + install tab, review the software settings.

8. Select Install to proceed with installation.

9. Wait for the installation to complete. The process takes approximately three hours.
You can see the progress, along with estimated times for each step, in the wizard.

10. After the installation completes, sign in with your SAP system credentials. To find
the SAP system and HANA DB credentials for the newly installed system, see how
to manage a Virtual Instance for SAP solutions.

Install SAP through outside method


If you install the SAP software elsewhere, you need to detect the software installation
and update your Virtual Instance for SAP solutions metadata.
1. Sign in to the Azure portal . Make sure to sign in with an Azure account that has
Contributor role access to the subscription or resource groups where the SAP
system exists.

2. Search for and select Azure Center for SAP solutions in the Azure portal's search
bar.

3. Select Virtual Instances for SAP solutions. Then select the Virtual Instance for SAP
solutions resource that you want to detect.

4. On the resource's overview page, select Confirm already installed software. Read
all the instructions, then select Confirm. Extensions are then installed on the ASCS,
APP and DB virtual machines, and the service starts discovering SAP metadata.

5. Wait for the Virtual Instance for SAP solutions resource to be detected and
populated with the metadata. The process completes after all SAP system
components have been detected.

6. Review the Virtual Instance for SAP solutions resource in the Azure portal. The
resource page now shows the SAP system resources, and information about the
system.

Limitations
The following are known limitations and issues.

Application servers
You can install a maximum of 10 Application Servers, excluding the Primary Application
Server.

SAP package version changes


When SAP changes the version of packages for a component in the BOM, you might
encounter problems with the automated installation shell script. It's recommended to
download your SAP installation media as soon as possible to avoid issues.

If you encounter this problem, follow these steps:

1. Download a new valid package from the SAP software downloads page.

2. Upload the new package in the archives folder of your Azure Storage account.
3. Update the following contents in the BOM file(s) that reference the updated
component:

name to the new package name
archive to the new package name and extension
checksum to the new checksum
filename to the new shortened package name
permissions to 0755
url to the new SAP download URL

4. Reupload the BOM file(s) in the subfolder ( S41909SPS03_v0011ms or
S42020SPS03_v0003ms or S4HANA_2021_ISS_v0001ms ) of the boms folder.

Special characters like $ in the S-user password aren't accepted
while downloading the BOM
1. Clone the SAP automation repository. For more information, see how to download
the SAP installation media.

git

git clone https://github.com/Azure/sap-automation.git

2. Before you run the Ansible playbook, set the SPASS environment variable as
shown below. Keep the single quotes in the command.

Bash

export SPASS='password_with_special_chars'

3. Run the Ansible playbook:

Bash

ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml \
  -e "bom_base_name=<bom_base_name>" \
  -e "deployer_kv_name=dummy_value" \
  -e "s_user=<username>" \
  -e "s_password=$SPASS" \
  -e "sapbits_access_key=<storageAccountAccessKey>" \
  -e "sapbits_location_base_path=<containerBasePath>"

For <username> , use your SAP username.
For <bom_base_name> , use the SAP version you want to install, for example
S41909SPS03_v0011ms , S42020SPS03_v0003ms , or S4HANA_2021_ISS_v0001ms .
For <storageAccountAccessKey> , use your storage account's access key. You
found this value in the Download SAP media section.
For <containerBasePath> , use the path to your sapbits container. You found
this value in the Download SAP media section. The format is https://<your-
storage-account>.blob.core.windows.net/sapbits .

Next steps
Find SAP and HANA passwords through Azure Center for SAP solutions
Monitor SAP system from Azure portal
Manage a Virtual Instance for SAP solutions
Register existing SAP system
Article • 01/19/2024

In this how-to guide, you learn how to register an existing SAP system with Azure Center
for SAP solutions. After you register an SAP system with Azure Center for SAP solutions,
you can use its visualization, management and monitoring capabilities through the
Azure portal. For example, you can:

View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.

When you register a system with Azure Center for SAP solutions, the following resources
are created in your Subscription:

Virtual Instance for SAP solutions, Central service instance for SAP solutions, App
server instance for SAP solutions and Database for SAP solutions. These resource
types are created to represent the SAP system on Azure. These resources do not
have any billing or cost associated with them.
A managed resource group that is used by Azure Center for SAP solutions service.
A Storage account within the managed resource group that contains blobs. These
blobs are scripts and logs necessary for the service to provide various capabilities
that include discovering and registering all components of SAP system.

7 Note

You can customize the names of the Managed resource group and the Storage
account which get deployed as part of the registration process by using Azure
Portal, Azure PowerShell or Azure CLI interfaces, when you register your systems.

7 Note
You can now enable secure access from specific virtual networks to the ACSS
managed storage account using the new option in the registration experience.

Prerequisites

Azure infrastructure level pre-requisites


Check that you're trying to register a supported SAP system configuration
Grant access to Azure Storage accounts, Azure resource manager (ARM) and
Microsoft Entra services from the virtual network where the SAP system exists. Use
one of these options:
Allow outbound internet connectivity for the VMs.
Use service tags to allow connectivity.
Use service tags with regional scope to allow connectivity to resources in the
same region as the VMs.
Allowlist the region-specific IP addresses for Azure Storage, ARM and Microsoft
Entra ID.
ACSS deploys a managed storage account into your subscription, for each SAP
system being registered. You have the option to choose network access setting for
the storage account.
If you choose network access from specific Virtual Networks option, then you
need to make sure Microsoft.Storage service endpoint is enabled on all subnets
in which the SAP system Virtual Machines exist. This service endpoint is used to
enable access from the SAP virtual machine to the managed storage account, to
access the scripts that ACSS runs on the VM extension.
If you choose public network access option, then you need to grant access to
Azure Storage accounts from the virtual network where the SAP system exists.
Register the Microsoft.Workloads Resource Provider in the subscription where you
have the SAP system. (A CLI sketch follows this list.)
Check that your Azure account has Azure Center for SAP solutions administrator
and Managed Identity Operator or equivalent role access on the subscription or
resource groups where you have the SAP system resources.
A User-assigned managed identity which has Azure Center for SAP solutions
service role access on the Compute resource group and Reader role access on the
Virtual Network resource group of the SAP system. Azure Center for SAP solutions
service uses this identity to discover your SAP system resources and register the
system as a VIS resource.
Make sure ASCS, Application Server and Database virtual machines of the SAP
system are in Running state.
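
The resource provider registration mentioned in the list above can be scripted;
here's a minimal Azure CLI sketch, assuming you're signed in to the subscription
that has the SAP system.

Azure CLI

# Register the Microsoft.Workloads resource provider and confirm its state.
az provider register --namespace Microsoft.Workloads
az provider show --namespace Microsoft.Workloads --query registrationState --output tsv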
SAP system level pre-requisites
The sapcontrol and saphostctrl executables must exist on the ASCS, App server and
Database VMs.
File path on Linux VMs: /usr/sap/hostctrl/exe
File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
Make sure the sapstartsrv process is running on all SAP instances and for the SAP
hostctrl agent on all the VMs in the SAP system. (A verification sketch follows
this list.)
To start the hostctrl sapstartsrv, use this command on Linux VMs: hostexecstart -start
To start an instance's sapstartsrv, use the command: sapcontrol -nr <instanceNr>
-function StartService <SID>
To check the status of the hostctrl sapstartsrv on Windows VMs, use:
C:\Program Files\SAP\hostctrl\exe\saphostexec -status
For successful discovery and registration of the SAP system, ensure there is
network connectivity between the ASCS, App and DB VMs. The ping command for the
App instance hostname must succeed from the ASCS VM. The ping for the Database
hostname must succeed from the App server VM.
On the App server profile, the SAPDBHOST, DBTYPE and DBID parameters must have the
right values configured for the discovery and registration of Database instance details.
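
A minimal Bash sketch of these checks, run from the ASCS VM; the paths assume a
Linux VM, and all bracketed values are placeholders.

Bash

# Verify network connectivity to the application server VM.
ping -c 3 <app-server-hostname>

# Verify that the SAP host agent responds.
/usr/sap/hostctrl/exe/saphostexec -status

# Verify that sapstartsrv for the instance responds and lists its processes.
sapcontrol -nr <instance-number> -function GetProcessList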

Supported systems
You can register SAP systems with Azure Center for SAP solutions that run on the
following configurations:

SAP NetWeaver or ABAP stacks


Windows, SUSE and RHEL Linux operating systems
HANA, DB2, SQL Server, Oracle, Max DB, and SAP ASE databases
SAP system with multiple Application Server Instances on a single Virtual Machine
SAP system with clustered Application Server architecture

The following SAP system configurations aren't supported in Azure Center for SAP
solutions:

HANA Large Instance (HLI)


Systems with HANA Scale-out, MCOS and MCOD configurations
Java stack
Dual stack (ABAP and Java)
Systems distributed across peered virtual networks
Systems using IPv6 addresses
Multiple SIDs running on same set of Virtual Machines. For example, two or more
SIDs sharing a single VM for ASCS instance.

Enable resource permissions


When you register an existing SAP system as a VIS, Azure Center for SAP solutions
service needs a User-assigned managed identity that has Azure Center for SAP
solutions service role access on the Compute (VMs, Disks, Load balancers) resource
group and Reader role access on the Virtual Network resource group of the SAP system.
Before you register an SAP system with Azure Center for SAP solutions, either create a
new user-assigned managed identity or update role access for an existing managed
identity.

Azure Center for SAP solutions uses this user-assigned managed identity to install VM
extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center
for SAP solutions to discover the SAP system components, and other SAP system
metadata. User-assigned managed identity is required to enable SAP system monitoring
and management capabilities.

Set up User-assigned managed identity


To provide permissions to the SAP system resources to a user-assigned managed
identity:

1. Create a new user-assigned managed identity if needed or use an existing one.


2. Assign Azure Center for SAP solutions service role access to the user-
assigned managed identity on the resource group(s) that have the Virtual
Machines, Disks and Load Balancers of the SAP system, and Reader role on the
resource group(s) which have the Virtual Network components of the SAP system.
3. Once the permissions are assigned, this managed identity can be used in Azure
Center for SAP solutions to register and manage SAP systems.

Managed storage account network access settings

ACSS deploys a managed storage account into your subscription, for each SAP system
being registered. When you register your SAP system using Azure Portal, PowerShell or
REST API, you have the option to choose network access setting for the storage
account. You can choose either public network access or access from specific virtual
networks.
To secure the managed storage account and limit access to only the virtual network that
has your SAP virtual machines, you can choose the network access setting as Enable
access from specific Virtual Networks. You can learn more about storage account
network security in this documentation.

) Important

When you limit storage account network access to specific virtual networks, you
have to configure Microsoft.Storage service endpoint on all subnets related to the
SAP system that you are registering. Without the service endpoint enabled, you will
not be able to successfully register the system. Private endpoint on managed
storage account is not currently supported in this scenario.

When you choose to limit network access to specific virtual networks, Azure Center for
SAP solutions service accesses this storage account using trusted access based on the
managed identity associated with the VIS resource.

Register SAP system


To register an existing SAP system in Azure Center for SAP solutions:

1. Sign in to the Azure portal . Make sure to sign in with an Azure account that has
Azure Center for SAP solutions administrator and Managed Identity Operator
role access to the subscription or resource groups where the SAP system exists. For
more information, see the resource permissions explanation.

2. Search for and select Azure Center for SAP solutions in the Azure portal's search
bar.

3. On the Azure Center for SAP solutions page, select Register an existing SAP
system.

4. On the Basics tab of the Register existing SAP system page, provide information
about the SAP system.

a. For ASCS virtual machine, select Select ASCS virtual machine and select the
ASCS VM resource.

b. For SID name, enter the SID name.

c. For SAP product, select the SAP system product from the drop-down menu.

d. For Environment, select the environment type from the drop-down menu. For
example, production or non-production environments.

e. For Managed identity source, select the Use existing user-assigned managed
identity option.

f. For Managed identity name, select a User-assigned managed identity which
has Azure Center for SAP solutions service role and Reader role access to the
respective resources of this SAP system.

g. For Managed resource group name, optionally enter a resource group name as
per your organization's naming policies. This resource group is managed by
ACSS service.

h. For Managed storage account name, optionally enter a storage account name
as per your organization's naming policies. This storage account is managed by
ACSS service.
i. For Storage account network access, select Enable access from specific virtual
network for enhanced network security for the managed storage
account.

j. Select Review + register to discover the SAP system and begin the registration
process.

k. On the Review + register pane, make sure your settings are correct. Then, select
Register.

5. Wait for the VIS resource to be created. The VIS name is the same as the SID name.
The VIS deployment finishes after all SAP system components are discovered from
the ASCS VM that you selected.

You can now review the VIS resource in the Azure portal. The resource page shows the
SAP system resources, and information about the system.
If the registration doesn't succeed, see what to do when an SAP system registration fails
in Azure Center for SAP solutions. Once you have fixed the configuration causing the
issue, retry registration using the Retry action available on the VIS resource page on
Azure portal.

Fix registration failure


The process of registering an SAP system with Azure Center for SAP solutions
might fail when any of the pre-requisites are not met.
Review the pre-requisites and ensure the configurations are as suggested.
Review any error messages displayed on the VIS resource on Azure portal. Follow
any recommended actions.
Once you have fixed the configuration causing the issue, retry registration using
the Retry action available on the Virtual Instance for SAP solutions page on Azure
portal.

Error - Failed to discover details from the DB VM


This error happens when the Database identifier is incorrectly configured on the SAP
system. One possible cause is that the Application Server profile parameter rsdb/dbid
has an incorrect identifier for the HANA Database. To fix the error:

1. Stop the Application Server instance:

sapcontrol -nr <instance number> -function Stop

2. Stop the ASCS instance:

sapcontrol -nr <instance number> -function Stop

3. Open the Application Server profile.

4. Add the profile parameter for the HANA Database:

rsdb/dbid = <SID of HANA Database>

5. Restart the Application Server instance:

sapcontrol -nr <instance number> -function Start

6. Restart the ASCS instance:

sapcontrol -nr <instance number> -function Start


7. Delete the VIS resource whose registration failed.

8. Register the SAP system again.

Error - Azure VM Agent not in desired provisioning state


Cause: This issue occurs when the Azure VM agent's provisioning state is not as expected on
the specified Virtual Machine. The expected state is Ready. Verify the agent status by
checking the properties section in the VM overview page.

Solution: To fix the Linux VM Agent:

1. Log in to the VM using Bastion or the serial console.

2. Check whether the VM agent service exists and is running:

sudo systemctl status waagent

3. If the service is not running, restart it:

sudo systemctl stop waagent

sudo systemctl start waagent

4. If this does not solve the issue, try updating the VM Agent using this document.
5. If the VM agent does not exist or needs to be re-installed, then follow this
documentation.

To fix the Windows VM Agent, follow Troubleshooting Azure Windows VM Agent.

Next steps
Monitor SAP system from Azure portal
Manage a VIS
Manage a Virtual Instance for SAP
solutions
Article • 05/15/2023

In this article, you'll learn how to view the Virtual Instance for SAP solutions (VIS)
resource created in Azure Center for SAP solutions through the Azure portal. You can use
these steps to find your SAP system's properties and connect parts of the VIS to other
resources like databases.

Prerequisites
An Azure subscription in which you have a successfully created Virtual Instance for
SAP solutions (VIS) resource.
An Azure account with Azure Center for SAP solutions administrator role access
to the subscription or resource groups where you have the VIS resources.

Open VIS in portal


To configure your VIS in the Azure portal:

1. Open the Azure portal in a browser.

2. Sign in with your Azure account that has the necessary role access as described in
the prerequisites.

3. In the search field in the navigation menu, enter and select Azure Center for SAP
solutions.

4. On the Azure Center for SAP solutions overview page, search for and select
Virtual Instances for SAP solutions in the sidebar menu.

5. On the Virtual Instances for SAP solutions page, select the VIS that you want to
view.

) Important

Each VIS resource has a unique Managed Resource Group associated with it. This
Resource Group contains resources like a Storage Account, Key vault etc. which are
critical for the Azure Center for SAP solutions service to provide capabilities like
deployment of infrastructure for a new system, installation of SAP software,
registration of existing systems and all other SAP system management functions.
Please do not delete this resource group or any resources within it. If they are
deleted, you will have to re-register the VIS to use any capabilities of ACSS.

Monitor VIS
To see infrastructure-based metrics for the VIS, open the VIS in the Azure portal. On the
Overview pane, select the Monitoring tab. You can see the following metrics:

VM utilization by ASCS and Application Server instances. The graph shows CPU
usage percentage for all VMs that support the ASCS and Application Server
instances.
VM utilization by the database instance. The graph shows CPU usage percentage
for all VMs that support the database instance.
IOPS consumed by the database instance's data disk. The graph shows the
percentage of disk utilization by all VMs that support the database instance.

View instance properties


To view properties for the instances within your VIS, first open the VIS in the Azure
portal.

In the sidebar menu, look under the section SAP resources:


To see properties of ASCS instances, select Central service instances.
To see properties of application server instances, select App server instances.
To see properties of database instances, select Databases.

Default Instance Numbers


If you've deployed an SAP system using Azure Center for SAP solutions, the following
list shows the default values of instance numbers configured during deployment:

Distributed Systems [HA and non-HA systems]


ASCS Instance Number - 00
ERS Instance Number - 01
DB Instance Number - 00
APP Instance Number - 00

Single Server Systems


ASCS Instance Number - 01
DB Instance Number - 00
APP Instance Number - 02

Connect to SAP Application


To connect to and manage SAP Application, you can use the following credentials:

User : DDIC or RFC_USER or SAP*


Client ID : 000

Connect to HANA database


If you've deployed an SAP system using Azure Center for SAP solutions, find the SAP
system's main password and HANA database passwords.

The HANA database username is either system or SYSTEM for:

Distributed High Availability (HA) SAP systems


Distributed non-HA systems
Standalone systems

Find SAP and HANA passwords


To retrieve the password:

1. Open the VIS in the Azure portal.

2. On the overview page, select the Managed resource group.

3. On the resource group's page, select the Key vault resource in the table.

4. On the key vault's page, select Secrets in the navigation menu under Settings.

5. Make sure that you have access to all the secrets. If you have correct permissions,
you can see the SAP password file listed in the table, which hosts the global
password for your SAP system.

6. Select the SAP password file name to open the secret's page.

7. Copy the Secret value.

If you get the warning The operation 'List' is not enabled in this key vault's access
policy. with the message You are unauthorized to view these contents.:

1. Make sure that you're responsible to manage these secrets in your organization.
2. In the sidebar menu, under Settings, select Access policies.
3. On the access policies page for the key vault, select + Add Access Policy.
4. In the pane Add access policy, configure the following settings.
a. For Configure from template (optional), select Key, Secret, & Certificate
Management.
b. For Key permissions, select the keys that you want to use.
c. For Secret permissions, select the secrets that you want to use.
d. For Certificate permissions, select the certificates that you want to use.
e. For Select principal, assign your own account name.
5. Select Add to add the policy.
6. In the access policy's menu, select Save to save your settings.
7. In the sidebar menu, under Settings, select Secrets.
8. On the secrets page for the key vault, make sure you can now see the SAP
password file.
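
If you prefer the CLI to the portal steps above, here's a minimal sketch, assuming
the key vault uses access policies (not Azure RBAC); the vault name and user
principal name are placeholders.

Azure CLI

# Grant your account permission to read and list secrets in the key vault.
az keyvault set-policy --name <key-vault-name> \
  --upn <your-user-principal-name> \
  --secret-permissions get list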

Delete VIS
When you delete a VIS, you also delete the managed resource group and all instances
that are attached to the VIS. That is, the VIS, ASCS, Application Server, and Database
instances are deleted. Any Azure physical resources aren't deleted when you delete a
VIS. For example, the VMs, disks, NICs, and other resources aren't deleted.

2 Warning

Deleting a VIS is a permanent action! It's not possible to restore a deleted VIS.

To delete a VIS:

1. Open the VIS in the Azure portal.

2. On the overview page's menu, select Delete.

3. In the deletion pane, make sure that you want to delete this VIS and related
resources. You can see a count for each type of resource to be deleted.

4. Enter YES in the confirmation field.

5. Select Delete to delete the VIS.

6. Wait for the deletion operation to complete for the VIS and related resources.

After you delete a VIS, you can register the SAP system again. Open Azure Center for
SAP solutions in the Azure portal, and select Register an existing SAP system.

Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for your VIS
Start and stop SAP systems, instances
and HANA database
Article • 10/31/2023

In this how-to guide, you'll learn to start and stop your SAP systems through the Virtual
Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.

Through the Azure portal, Azure PowerShell, CLI and REST API interfaces, you can start
and stop:

Entire SAP Application tier in one go, which includes ABAP SAP Central Services
(ASCS) and Application Server instances.
Specific SAP instance, such as the application server instance.
HANA Database.
You can start and stop instances and the HANA database in the following types of
deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.

Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
For HANA Database, Stop operation is initiated only when the cluster maintenance
mode is in Disabled status. Similarly, Start operation is initiated only when the
cluster maintenance mode is in Enabled status.

7 Note

When you deploy an SAP system using Azure Center for SAP solutions, RHEL and
SUSE cluster connector for highly available systems is already configured on them
as part of the SAP software installation process.

Supported scenarios
The following scenarios are supported when Starting and Stopping SAP systems:

SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
Stopping and Starting SAP system or individual instances from the VIS resource
only stops or starts the SAP application. The underlying VMs are not stopped or
started.
Stopping a highly available SAP system from the VIS resource gracefully stops the
SAP instances in the right order and does not result in a failover of the Central
Services instance.
Stopping the HANA Database from the VIS resource results in the entire HANA
instance being stopped. In the case of HANA MDC with multiple tenant DBs, the
entire instance is stopped, not a specific tenant DB.
For highly available (HA) HANA databases, start and stop operations through
Virtual Instance for SAP solutions resource are supported only when cluster
management solution is in place. Any other HANA database high availability
configurations without a cluster are not currently supported when starting and
stopping using Virtual Instance for SAP solutions resource.

7 Note

When multiple application server instances run on a single virtual machine and you
intend to stop all of these instances, you can currently stop them only one instance
at a time. If you attempt to stop them in parallel, only one stop request is
accepted and all others fail.
Stop SAP system
To stop an SAP system in the VIS resource:

1. Sign in to the Azure portal .

2. Search for and select Azure Center for SAP solutions in the search bar.

3. Select Virtual Instances for SAP solutions in the sidebar menu.

4. In the table of VIS resources, select the name of the VIS you want to stop.

5. Select the Stop button. If you can't select this button, the SAP system isn't
running.

6. Select Yes in the confirmation prompt to stop the VIS.

A notification pane then opens with a Stopping Virtual Instance for SAP solutions
message.

7. Wait for the VIS resource's Status to change to Stopping.


A notification pane then opens with a Stopped Virtual Instance for SAP solutions
message.

Start SAP system


To start an SAP system in the VIS resource:

1. Sign in to the Azure portal .

2. Search for and select Azure Center for SAP solutions in the search bar.

3. Select Virtual Instances for SAP solutions in the sidebar menu.

4. In the table of VIS resources, select the name of the VIS you want to start.

5. Select the Start button. If you can't select this button, make sure that you've
followed the prerequisites for the VMs within your SAP system.

A notification pane then opens with a Starting Virtual Instance for SAP solutions
message. The VIS resource's Status also changes to Starting.

6. Wait for the VIS resource's Status to change to Running.

A notification pane then opens with a Started Virtual Instance for SAP solutions
message.
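
The portal steps above also have command-line equivalents through the az workloads
CLI extension. Here's a minimal sketch, assuming the extension is installed; the VIS
resource ID is a placeholder.

Azure CLI

# Stop the SAP application tier of a VIS, then start it again.
az workloads sap-virtual-instance stop --id <vis-resource-id>
az workloads sap-virtual-instance start --id <vis-resource-id>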

Troubleshooting
If the SAP system takes longer than 300 seconds to complete a start or stop operation,
the operation terminates. After the operation terminates, the monitoring service
continues to check and update the status of the SAP system in the VIS resource.
Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Soft stop SAP systems, application
server instances and HANA database
Article • 11/20/2023

In this how-to guide, you'll learn to soft stop your SAP systems, individual instances and
HANA database through the Virtual Instance for SAP solutions (VIS) resource in Azure
Center for SAP solutions. You can stop your system smoothly by making sure that
existing user connections, batch processes, etc. are drained first.

Using the Azure PowerShell, CLI and REST API interfaces, you can:

Soft stop the entire SAP system, that is the application server instances and central
services instance.
Soft stop specific SAP application server instances.
Soft stop HANA database.

Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. For
more information, see how to use granular permissions that govern start and stop
actions on the VIS, individual SAP instances and HANA databases.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
For HANA Database, Stop operation is initiated only when the cluster maintenance
mode is in Disabled status.

Soft stop SAP system


Currently, you can initiate a soft stop operation from the Azure PowerShell, Azure
Command-Line Interface (Azure CLI) and REST API interfaces. You must use the stop
operation along with a soft stop timeout value in seconds to initiate a soft stop.
After you initiate a soft stop on the VIS and the operation is successfully
triggered on the SAP system, monitor the Health and Status of the VIS to check
whether the system has stopped.
7 Note

When attempting to soft stop an SAP system or application server instance using
Azure Center for SAP solutions, the soft stop timeout value must be greater than 0
and less than 82800 seconds.

Soft stop system in PowerShell


Use the Stop-AzWorkloadsSapVirtualInstance command:

PowerShell

Stop-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0 -SoftStopTimeoutSecond 300

Soft stop system in CLI


Use the az workloads sap-virtual-instance stop command:

Azure CLI

az workloads sap-virtual-instance stop --id /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0 --soft-stop-timeout-seconds 300

Soft stop system using REST API


Use this sample payload to soft stop an SAP system. You can specify the soft stop
timeout value in seconds.
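
If you call the REST API from a shell, az rest attaches the Azure Resource Manager
token for you. A minimal sketch, assuming the request body property is
softStopTimeoutSeconds (verify the property name against the REST reference for
your API version); the subscription, resource group, and VIS names are placeholders.

Azure CLI

# Soft stop a VIS via the REST API with a 300-second timeout.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Workloads/sapVirtualInstances/<vis>/stop?api-version=2023-04-01" \
  --body '{"softStopTimeoutSeconds": 300}'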

Soft stop SAP Application server instance


You can soft stop a specific application server in Azure Center for SAP solutions using
the Azure PowerShell, CLI and REST API interfaces. After you initiate a soft stop on the
application server and the operation is successfully triggered, monitor the Health and
Status of the application server instance to check whether it has stopped.

To soft stop an application server represented as an App server instance for SAP solutions
resource:
Using PowerShell
Use the Stop-AzWorkloadsSapApplicationInstance command:

PowerShell

Stop-AzWorkloadsSapApplicationInstance -InputObject /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/applicationInstances/app0 -SoftStopTimeoutSecond 300

Using CLI
Use the az workloads sap-application-server-instance stop command:

Azure CLI

az workloads sap-application-server-instance stop --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/applicationInstances/app0 --soft-stop-timeout-seconds 300

Using REST API


Use this sample payload to soft stop an application server instance. You can specify the
soft stop timeout value in seconds.

Soft stop HANA database


You can soft stop the HANA database so that the database stops gracefully after all
running statements have finished. You can use the Azure PowerShell, CLI and REST API
interfaces to soft stop the database. After you initiate a soft stop on the HANA
database and the operation is successfully triggered on the database instance,
monitor the status of the database instance on the VIS to check whether it has stopped.

7 Note

When attempting to soft stop a HANA database instance using Azure Center for SAP
solutions, the soft stop timeout value must be greater than 0 and less than 1800
seconds.

Using PowerShell
Use the Stop-AzWorkloadsSapDatabaseInstance command:

PowerShell

Stop-AzWorkloadsSapDatabaseInstance -InputObject /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/databaseInstances/ab0 -SoftStopTimeoutSecond 300

Using CLI
Use the az workloads sap-database-instance stop command:

Azure CLI

az workloads sap-database-instance stop --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/databaseInstances/ab0 --soft-stop-timeout-seconds 300

Using REST API


Use this sample payload to soft stop HANA database. You can specify the soft stop
timeout value in seconds.
Start and Stop SAP systems, instances,
HANA database and their underlying
Virtual machines
Article • 10/31/2023

In this how-to guide, you'll learn how to start and stop SAP systems and their underlying
virtual machines through the Virtual Instance for SAP solutions (VIS) resource in Azure
Center for SAP solutions. This simplifies the process to stop and start SAP systems by
shutting down and bringing up underlying infrastructure and SAP application in one
command.

Using the REST API interfaces, you can:

Start and stop the entire SAP application tier and its Virtual machines, which
includes ABAP SAP Central Services (ASCS) and Application Server instances.
Start and stop a specific SAP instance, such as the application server instance, and
its Virtual machines.
Start and stop HANA database instance and its Virtual machines.

) Important

The ability to start and stop virtual machines of an SAP system is available from API
Version 2023-10-01.

7 Note

You can schedule stop and start of SAP systems, HANA database at scale for your
SAP landscapes using the ARM template . This ARM template can be customized
to suit your own requirements.

Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
Check that the User Assigned Managed Identity associated with the VIS resource
has Virtual Machine Contributor or equivalent role access. This is needed to be
able to Start and Stop VMs.

Unsupported scenarios
The following scenarios are not currently supported when using the Start and Stop of
SAP, individual SAP instances, HANA database and their underlying VMs:

Starting and stopping systems when multiple SIDs run on the same set of Virtual
Machines.
Starting and stopping HANA databases with MCOS (Multiple Components in One
System) architecture, where multiple HANA instances run on the same set of virtual
machines.
Starting and stopping SAP application server or central services instances where
instances of multiple SIDs or multiple instances of the same SID run on the same
virtual machine.

) Important

For single-server deployments, when you want to stop SAP, HANA DB and the VM,
use stop VIS action to stop SAP application tier and then stop HANA database with
'deallocateVm' set to true. This ensures that SAP application and HANA database
are both stopped before stopping the VM.

7 Note

When stopping a VIS or an instance with the 'deallocateVm' option set to true, only
that VIS or instance is stopped and then the virtual machine is shut down. SAP
instances of other SIDs are not stopped. Use the virtual machine stop option only
after all instances running on the VM are stopped.

Start and Stop SAP system and underlying Virtual machines

You can start and stop the entire SAP application tier and underlying VMs using REST
API version 2023-10-01.
Start SAP system and its VMs
To start the virtual machines and the SAP application on it, use the following REST API
with "startVm" parameter set to true. This command starts the VMs associated with
Central services instance and Application server instances.

HTTP

POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/start?api-
version=2023-10-01-preview

{
"startVm": true
}

Stop SAP system and its VMs


To stop the SAP application and its VMs, use the following REST API with "deallocateVm"
parameter set to true.

HTTP

POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/stop?api-
version=2023-10-01-preview

{
"deallocateVm": true
}

Start and Stop HANA Database and its VMs


You can start and stop HANA database and its underlying VMs using REST API version
2023-10-01.

Start HANA database and its VMs


To start the virtual machines and the HANA database on it, use the following REST API
with "startVm" parameter set to true.

HTTP
POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/d
b0/start?api-version=2023-10-01-preview

{
"startVm": true
}

Stop HANA database and its VMs


To stop HANA database and its underlying VMs, use the following REST API with
deallocateVm parameter set to true .

HTTP

POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/d
b0/stop?api-version=2023-10-01-preview

{
"deallocateVm": true
}
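
From a shell, you can issue the same request with az rest, which handles
authentication for you; the subscription, resource group, and VIS names are placeholders.

Azure CLI

# Stop the SAP system and deallocate its underlying VMs via the REST API.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Workloads/sapVirtualInstances/<vis>/stop?api-version=2023-10-01-preview" \
  --body '{"deallocateVm": true}'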
Get quality checks and insights for a
Virtual Instance for SAP solutions
Article • 05/15/2023

The Quality Insights Azure workbook in Azure Center for SAP solutions provides insights
about the SAP system resources as a result of running more than 100 quality checks on
the VIS. The feature is part of the monitoring capabilities built in to the Virtual Instance
for SAP solutions (VIS). These quality checks make sure that your SAP system uses Azure
and SAP best practices for reliability and performance.

In this how-to guide, you'll learn how to use quality checks and insights to get more
information about various configurations within your SAP system.

Prerequisites
An SAP system that you've created with Azure Center for SAP solutions or
registered with Azure Center for SAP solutions.

Open Quality Insights workbook


To open the workbook:

1. Sign in to the Azure portal .

2. Search for and select Azure Center for SAP solutions in the Azure portal search
bar.

3. On the Azure Center for SAP solutions page's sidebar menu, select Virtual
Instances for SAP solutions.

4. On the Virtual Instances for SAP solutions page, select the VIS that you want to
get insights about.

5. On the sidebar menu for the VIS, under Monitoring select Quality Insights.

There are multiple sections in the workbook:

Select the default Advisor Recommendations tab to see the list of
recommendations made by Azure Center for SAP solutions for the different
instances in your VIS.
Select the Virtual Machine tab to find information about the VMs in your VIS.
Select the Configuration Checks tab to see configuration checks for your VIS.

Get Advisor Recommendations


The Quality checks feature in Azure Center for SAP solutions runs validation checks for
all VIS resources. These quality checks validate that the SAP system configurations
follow the best practices recommended for SAP on Azure. If a VIS doesn't follow these
best practices, you receive a recommendation from Azure Advisor. Azure Center for SAP
solutions runs more than 100 quality checks on all VIS resources. These checks span
the following categories:

Azure Infrastructure checks
OS parameter checks
High availability (HA) Load Balancer checks
HANA DB file system checks
OS parameter checks for ANF file system
Pacemaker configuration checks for HANA DB and ASCS Instance for SUSE and
Red Hat
OS Configuration checks for Application Instances

The table in the Advisor Recommendations tab shows all the recommendations for
ASCS, Application and Database instances in the VIS.

Select an instance name to see all recommendations, including which action to take to
resolve an issue.

Set Alerts for Quality check recommendations


As the Quality checks recommendations in Azure Center for SAP solutions are
integrated with Azure Advisor, you can set alerts for the recommendations. See how to
Configure alerts for recommendations.

7 Note

These quality checks run on all VIS instances at a regular frequency of once every
hour. The corresponding recommendations in Azure Advisor also refresh at the
same one-hour frequency. If you take action on one or more recommendations from
Azure Center for SAP solutions, wait for the next refresh to see any new
recommendations from Azure Advisor.

) Important

Azure Advisor filters out recommendations for Deleted Azure resources for 7 days.
Therefore, if you delete a VIS and then re-register it, you will be able to see Advisor
recommendations after 7 days of re-registration.

Get VM information
The Virtual Machine tab provides insights about the VMs in your VIS. There are multiple
subsections:

Azure Compute
Compute List
Compute Extensions
Compute + OS Disk
Compute + Data Disks

Azure Compute
The Azure Compute tab shows a summary graph of the VMs inside the VIS.

Compute List
The Compute List tab shows a table of information about the VMs inside the VIS. This
information includes the VM's name and state, SKU, OS, publisher, image version and
SKU, offer, Azure region, resource group, tags, and more.

You can toggle Show Help to see more information about the table data.

Select a VM name to see its overview page, and change settings like Boot Diagnostic.

Compute Extensions
The Compute Extensions tab shows information about your VM extensions. There are
three tabs within this section:

VM+Extensions
VM Extensions Status
Failed VM Extensions

VM + Extensions

VM+Extensions shows a summary of any VM extensions installed on the VMs in your VIS.

VM Extensions Status
VM Extensions Status shows details about the VM extensions in each VM. You can see each extension's state, version, and whether AutoUpgrade is enabled.

Failed VM Extensions

Failed VM Extensions shows which VM extensions are failing in the selected VIS.

Compute + OS Disk
The Compute+OS Disk tab shows a table with OS disk configurations in the SAP system.

Compute + Data Disks


The Compute+Data Disks tab shows a table with data disk configurations in the SAP
system.

Run configuration checks


The Configuration Checks tab provides configuration checks for the VMs in your VIS.
There are four subsections:

Accelerated Networking
Public IP
Backup
Load Balancer

Accelerated Networking
The Accelerated Networking tab shows if Accelerated Networking State is enabled for
each NIC in the VIS. It's recommended to enable this setting for reliability and
performance.
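To verify the same setting from the Azure CLI, you can read the enableAcceleratedNetworking property on each NIC. This is a sketch; the resource group name is a placeholder, and the property names in the query are as surfaced by the CLI:

Azure CLI

az network nic list -g <resource-group-name> --query "[].{nic:name, acceleratedNetworking:enableAcceleratedNetworking}" -o table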


Public IP
The Public IP tab shows any public IP addresses that are associated with the NICs linked
to the VMs in the VIS.

Backup
The Backup tab shows a table of VMs that don't have Azure Backup configured. It's
recommended to use Azure Backup with your VMs.
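As a quick command-line check, you can ask Azure Backup whether a specific VM is already protected; this sketch uses a placeholder VM resource ID and returns the ID of the protecting vault if one exists:

Azure CLI

az backup protection check-vm --vm-id <vm-resource-id>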

Load Balancer
The Load Balancer tab shows information about load balancers connected to the
resource group(s) for the VIS. There are two subsections: Load Balancer Overview and
Load Balancer Monitor.

Load Balancer Overview

The Load Balancer Overview tab shows rules and details for the load balancers in the VIS. You can review:

If the HA ports are defined for the load balancers.
If the load balancers have floating IP addresses enabled.
If the keep-alive functionality is enabled, with a maximum timeout of 30 minutes.
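The same rule properties can be inspected with the Azure CLI. In this sketch the resource group and load balancer names are placeholders; HA ports show up as a rule with protocol All and frontend port 0:

Azure CLI

az network lb rule list -g <resource-group-name> --lb-name <load-balancer-name> --query "[].{rule:name, protocol:protocol, frontendPort:frontendPort, floatingIp:enableFloatingIp, idleTimeoutMinutes:idleTimeoutInMinutes}" -o table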

Load Balancer Monitor


The Load Balancer Monitor tab shows monitoring information for the load balancers.
You can filter the information by load balancer and time range.

The tab includes:

Load Balancer Key Metrics, a table that shows important information about the load balancers in the subscription where the VIS exists.
Backend health probe by Backend IP, a chart that shows the health probe status for each load balancer over time.

Next steps
Manage a VIS
Monitor SAP system from the Azure portal
View post-deployment cost analysis for
SAP system
Article • 05/15/2023

In this how-to guide, you'll learn how to view the running cost of your SAP systems
through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP
solutions.

After you deploy or register an SAP system as a VIS resource, you can view the cost of
running that SAP system on the VIS resource's page. This feature shows the post-
deployment running costs in the context of your SAP system. When you have Azure
resources of multiple SAP systems in a single resource group, you no longer need to
analyze the cost for each system. Instead, you can easily view the system-level cost from
the VIS resource.

How does cost analysis work?


When you deploy infrastructure for a new SAP system with Azure Center for SAP
solutions or register an existing system with Azure Center for SAP solutions, the
costanalysis-parent tag is added to all virtual machines (VMs), disks, and load balancers
related to that SAP system. The cost is determined by the total cost of all the Azure
resources in the system with the costanalysis-parent tag. Whenever there are changes
to the SAP system, such as the addition or removal of Application Server Instance VMs,
tags are updated on the relevant Azure resources.
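To see which resources carry the tag for your system, you can filter on the tag key with the Azure CLI. This sketch filters on the key only, since the tag value depends on your deployment:

Azure CLI

az resource list --tag costanalysis-parent -o table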

7 Note

If you register an existing SAP system as a VIS, the cost analysis only shows data
after the time of registration. Even if some infrastructure resources might have been
deployed before the registration, the cost analysis tags aren't applied to historical
data.

The following Azure resources aren't included in the SAP system-level cost analysis. This
list includes some resources that might be shared across multiple SAP systems.

Virtual networks
Storage accounts
Azure NetApp Files (ANF)
Azure key vaults
Azure Monitor for SAP solutions resources
Azure Backup resources

Cost and usage data is typically available within 8-24 hours. As such, your VIS resource
can take 8-24 hours to start showing cost analysis data.

View cost analysis


To view the post-deployment costs of running an SAP system registered as a VIS
resource:

1. Sign in to the Azure portal .


2. Search for and select Azure Center for SAP solutions in the Azure portal's search
bar.
3. Select Virtual Instances for SAP solutions in the sidebar menu.
4. Select a VIS resource that is either successfully deployed or registered.
5. Select Cost Analysis in the sidebar menu.
6. To change the cost analysis from table view to a chart view, select the Column
(grouped) option.

Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Start and Stop SAP systems
Configure and monitor Azure Backup
status for your SAP system through
Virtual Instance for SAP solutions
(Preview)
Article • 11/15/2023

7 Note

The configuration of Backup from the Virtual Instance for SAP solutions resource is currently in preview.

In this how-to guide, you'll learn to configure and monitor Azure Backup for your SAP
system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for
SAP solutions.

When you configure Azure Backup from the VIS resource, you can enable Backup for your SAP Central Services instance, your Application Server and Database virtual machines, and the HANA database in one step. For the HANA database, Azure Center for SAP solutions automates the step of running the pre-registration script.

Once backup is configured, you can monitor the status of your Backup Jobs for both
virtual machines and HANA DB from the VIS.

If you have already configured Backup from the Azure Backup Center for your SAP VMs and HANA DB, the VIS resource automatically detects this and enables you to monitor the status of Backup jobs.

Prerequisites
A Virtual Instance for SAP solutions (VIS) resource representing your SAP system
on Azure Center for SAP solutions.
An Azure account with Contributor role access on the Subscription in which your
SAP system exists.

To be able to configure Backup from the VIS resource, first assign the following roles to the Azure Workloads Connector Service first-party app:
1. Backup Contributor role access on the Subscription or the specific Resource group that has the Recovery Services vault that will be used for Backup.
2. Virtual Machine Contributor role access on the Subscription or Resource groups that have the Compute resources of the SAP systems. You can skip this step if you have already configured Backup for your VMs and HANA DB using the Azure Backup Center; you will still be able to monitor Backup of your SAP system from the VIS.

) Important

Once you have completed configuring Backup from the VIS experience, it is recommended that you remove the role access assigned to the Azure Workloads Connector Service first-party app, as the access is no longer needed for monitoring backup status from the VIS.

For HANA database backup, ensure the prerequisites required by Azure Backup are in place.
For HANA database backup, create an HDB userstore key that will be used to prepare the HANA DB for configuring Backup; a sketch of the command follows the note below. For a highly available (HA) HANA database, the userstore key should be created in both the primary and secondary databases.

7 Note

If you are configuring backup for the HANA database from the Virtual Instance for SAP solutions resource, you can skip running the Backup pre-registration script. Azure Center for SAP solutions runs this script before configuring HANA backup.
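As a minimal sketch of creating the userstore key, run the following as the <sid>adm OS user on the HANA VM. The key name, host, user, and password are placeholders, and port 30013 assumes instance number 00 with a SYSTEMDB connection; adjust these to your landscape:

Bash

# Store credentials under the key BACKUPKEY in the HDB secure user store.
hdbuserstore SET BACKUPKEY <hana-hostname>:30013 <backup-user> <password>

# Confirm the key exists; the password itself is never displayed.
hdbuserstore LIST BACKUPKEY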

Configure Backup for your SAP system


You can configure Backup for your Central service, Application server, and Database virtual machines, and for the HANA database, from the Virtual Instance for SAP solutions resource by following these steps:

1. Sign in to the Azure portal .


2. Search for ACSS and select Azure Center for SAP solutions from search results.
3. On the left navigation, select Virtual Instance for SAP solutions.
4. Select the Backup (preview) tab on the left navigation.
5. Select the Configure button on the Backup (preview) page.
6. Select the checkboxes Central service + App server VMs Backup and Database
Backup.
7. For Central service + App server VMs Backup, select an existing Recovery Services
vault or Create new.

Select a Backup policy that is to be used for backing up Central service, App
server and Database VMs.
Select Include database servers for virtual machine backup if you want to
have Azure VM backup configured for database VMs. If this is not selected,
only Central service and App server VMs will have VM backup configured.
If you choose to include database VMs for backup, then you can decide if
all disks associated to the VM must be backed up or OS disk only.

8. For Database Backup, select an existing Recovery Services vault or Create new.

Select a Backup policy that is to be used for backing up HANA database.

9. Provide a HANA DB User Store key name.

) Important

If you are configuring backup for an HSR-enabled HANA database, then you must ensure the HANA DB user store key is available on both the primary and secondary databases.

10. If SSL enforce is enabled for the HANA database, provide the key store path, trust store path, SSL hostname, and crypto provider details.

7 Note

If you are configuring backup for an HSR-enabled HANA database from the Virtual Instance for SAP solutions resource, then the Backup pre-registration script is run on both the primary and secondary HANA VMs. This is in line with the Azure Backup configuration process for HSR-enabled HANA databases, and it ensures that the Azure Backup service can connect to any new primary node automatically without manual intervention. Learn more.

Monitor Backup status of your SAP system


After you configure Backup for the Virtual Machines and HANA Database of your SAP
system either from the Virtual Instance for SAP solutions resource or from the Backup
Center, you can monitor the status of Backup from the Virtual Instance for SAP solutions
resource.

To monitor Backup status:

1. Sign in to the Azure portal .


2. Search for ACSS and select Azure Center for SAP solutions from search results.
3. On the left navigation, select Virtual Instance for SAP solutions.
4. Select the Backup (preview) tab on the left navigation.
5. For Central service + App server VMs and HANA Database, view protection status
of Backup instances and status of Backup jobs in the last 24 hours.
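The same job history is available from the Azure CLI; this sketch assumes you know the Recovery Services vault used for the VIS, and both names are placeholders:

Azure CLI

az backup job list -g <resource-group-name> -v <vault-name> -o table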

Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Start and Stop SAP systems
View Cost Analysis of SAP system
Monitor SAP system from Azure portal
Article • 05/15/2023

In this how-to guide, you'll learn how to monitor the health and status of your SAP
system with Azure Center for SAP solutions through the Azure portal. The following
capabilities are available for your Virtual Instance for SAP solutions resource:

Monitor your SAP system, along with its instances and VMs.
Analyze important SAP infrastructure metrics.
Create and/or register an instance of Azure Monitor for SAP solutions to monitor
SAP platform metrics.

System health
The health of an SAP system within Azure Center for SAP solutions is based on the
status of its underlying instances. Codes for health are also determined by the collective
impact of these instances on the performance of the SAP system.

Possible values for health are:

Healthy: the system is healthy.
Unhealthy: the system is unhealthy.
Degraded: the system shows signs of degradation and possible failure.
Unknown: the health of the system is unknown.

System status
The status of an SAP system within Azure Center for SAP solutions indicates the current
state of the system.

Possible values for status are:

Running: the system is running.
Offline: the system is offline.
Partially running: the system is partially running.
Unavailable: the system is unavailable.

Instance properties
When you check the health or status of your SAP system in the Azure portal, the results
for each instance are listed and color-coded.

Color-coding for states

For ASCS and application server instances:

Color code | Status      | Health
Green      | Running     | Healthy
Yellow     | Running     | Degraded
Red        | Running     | Unhealthy
Gray       | Unavailable | Unknown

For database instances:

Color code | Status
Green      | Running
Yellow     | Unavailable
Red        | Unavailable
Gray       | Unavailable

Example scenarios
The following are different scenarios with the corresponding status and health values.

Application instance state | ASCS instance state | System status | System health
Running and healthy        | Running and healthy | Running       | Healthy
Running and degraded       | Running and healthy | Running       | Degraded
Running and unhealthy      | Running and healthy | Running       | Unhealthy


Check health and status

7 Note

After creating your Virtual Instance for SAP solutions (VIS), you might need to wait 2-5 minutes to see health and status information.

The average latency to get health and status information is about 30 seconds.

To check basic health and status settings:

1. Sign in to the Azure portal .

2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.

3. On the service's page, select Virtual Instances for SAP solutions in the sidebar
menu.

4. On the Virtual Instances for SAP solutions page, review the table of instances. There is an overview of health and status information for each VIS.


5. Select the VIS you want to check.

6. On the Overview page for the VIS resource, select the Properties tab.

7. On the properties page for the VIS, review the SAP status section to see the health
of SAP instances. Review the Virtual machines section to see the health of VMs
inside the VIS.

To see information about ASCS instances:

1. Open the VIS in the Azure portal, as previously described.

2. In the sidebar menu, under SAP resources, select Central service instances.

3. Select an instance from the table to see its properties.


To see information about SAP application server instances:

1. Open the VIS in the Azure portal, as previously described.

2. In the sidebar menu, under SAP resources, select App server instances.

3. Select an instance from the table to see its properties.

Monitor SAP infrastructure


Azure Center for SAP solutions enables you to analyze important SAP infrastructure
metrics from the Azure portal.

1. Sign in to the Azure portal .

2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.

3. On the service's page, select SAP Virtual Instances in the sidebar menu.

4. On the page for the VIS, select the VIS from the table.

5. On the overview page for the VIS, select the Monitoring tab.

6. Review the monitoring charts, which include:

a. CPU utilization by the Application server and ASCS server

b. IOPS percentage consumed by the Database server instance

c. CPU utilization by the Database server instance

7. Select any of the monitoring charts to do more in-depth analysis with Azure
Monitor metrics explorer.
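For ad-hoc analysis outside the portal, the same platform metrics can be queried with the Azure CLI; this sketch uses a placeholder VM resource ID and the standard Percentage CPU metric at five-minute granularity:

Azure CLI

az monitor metrics list --resource <vm-resource-id> --metric "Percentage CPU" --interval PT5M -o table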

Configure Azure Monitor


You can also set up or register Azure Monitor for SAP solutions to monitor SAP
platform-level metrics.

1. Sign in to the Azure portal .

2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.

3. On the service's page, select SAP Virtual Instances in the sidebar menu.

4. On the page for the VIS, select the VIS from the table.

5. In the sidebar menu for the VIS, under Monitoring, select Azure Monitor for SAP
solutions.

6. Select whether you want to create a new Azure Monitor for SAP solutions instance or register an existing Azure Monitor for SAP solutions instance, as described in the following sections. If you don't see this option, you've already configured this setting.

7. After you create or register your Azure Monitor for SAP solutions instance, you are
redirected to the Azure Monitor for SAP solutions instance.

Create new Azure Monitor for SAP solutions resource


To configure a new Azure Monitor for SAP solutions resource:

1. On the Create new Azure Monitor for SAP solutions resource page, select the
Basics tab.

2. Under Project details, configure your resource.

a. For Subscription, select your Azure subscription.

b. For Azure Monitor for SAP solutions resource group, select the same resource
group as the VIS.

) Important

If you select a resource group that's different from the resource group of the
VIS, the deployment fails.
3. Under Azure Monitor for SAP solutions instance details, configure your Azure
Monitor for SAP solutions instance.

a. For Resource name, enter a name for your Azure Monitor for SAP solutions
resource.

b. For Workload region, select an Azure region for your workload.

4. Under Networking, configure networking information.

a. For Virtual network, select a virtual network to use.

b. For Subnet, select a subnet in your virtual network.

c. For Route All, choose to enable or disable the option. When you enable this
setting, all outbound traffic from the app is affected by your networking
configuration.

5. Select the Review + Create tab.

Register existing Azure Monitor for SAP solutions resource
To register an existing Azure Monitor for SAP solutions resource, select the instance
from the drop-down menu on the registration page.

7 Note

You can only view and select the current version of Azure Monitor for SAP solutions
resources. Azure Monitor for SAP solutions (classic) resources aren't available.


Unregister Azure Monitor for SAP solutions from VIS

7 Note

This operation only unregisters the Azure Monitor for SAP solutions resource from
the VIS. To delete the Azure Monitor for SAP solutions resource, you need to delete
the Azure Monitor for SAP solutions instance.

To remove the link between your Azure Monitor for SAP solutions resource and your
VIS:

1. Sign in to the Azure portal .

2. In the sidebar menu, under Monitoring, select Azure Monitor for SAP solutions.

3. On the Azure Monitor for SAP solutions page, select Delete to unregister the
resource.

4. Wait for the confirmation message, Azure Monitor for SAP solutions has been
unregistered successfully.

Troubleshooting issues with Health and Status on VIS
If an error appears on a successfully registered or deployed Virtual Instance for SAP solutions resource indicating that the service is unable to fetch health and status data, use the following guidance to fix the problem.

Error - Unable to fetch health and status data from primary SAP Central services VM
Possible causes:

1. The SAP Central Services VM might not be running.
2. The monitoring VM extension might not be running, or it encountered an unexpected failure on the Central Services VM.
3. The storage account in the managed resource group isn't reachable from the Central Services VM(s), or the storage account or the underlying container/blob required by the monitoring service might have been deleted.
4. The Central Services VM(s) system-assigned managed identity doesn't have 'Storage Blob Data Owner' access on the managed resource group, or this managed identity might have been disabled.
5. The sapstartsrv process might not be running for the SAP instance or for the SAP hostctrl agent on the primary Central Services VM.
6. The monitoring VM extension couldn't execute the script to fetch health and status information due to policies or restrictions in place on the VM.

Solution:

1. If the SAP Central Services VM is not running, bring up the virtual machine and the SAP services on the VM. Then wait a few minutes and check whether health and status show up on the VIS resource.
2. Navigate to the SAP Central Services VM in the Azure portal and check whether the status of Microsoft.Workloads.MonitoringExtension on the Extensions + applications tab shows Provisioning Succeeded. If not, raise a support ticket.
3. Navigate to the VIS resource and go to the Managed Resource Group from the Essentials section on Overview. Check whether a storage account exists in this resource group. If it exists, check whether your virtual network allows connectivity from the SAP Central Services VM to this storage account, and enable connectivity if needed. If the storage account doesn't exist, you have to delete the VIS resource and register the system again.
4. Check whether the SAP Central Services VM's system-assigned managed identity has 'Storage Blob Data Owner' access on the managed resource group of the VIS. If not, provide the necessary access. If the system-assigned managed identity doesn't exist, you have to delete the VIS and re-register the system.
5. Ensure the sapstartsrv process for the SAP instance and for SAP Hostctrl is running on the Central Services VM.
6. If everything mentioned above is in place, then log a support ticket.
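As a sketch, causes 2 and 4 above can be checked quickly from the Azure CLI; the resource group, VM name, principal ID, and scope below are placeholders:

Azure CLI

# Cause 2: check the provisioning state of the monitoring VM extension.
az vm extension list -g <resource-group-name> --vm-name <central-services-vm-name> -o table

# Cause 4: check the role assignments of the VM's system-assigned identity.
az role assignment list --assignee <vm-identity-principal-id> --scope <managed-resource-group-id> -o table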

Next steps
Get quality checks and insights for your VIS
az workloads sap-virtual-instance
Preview Reference

7 Note

This reference is part of the workloads extension for the Azure CLI (version 2.55.0
or higher). The extension will automatically install the first time you run an az
workloads sap-virtual-instance command. Learn more about extensions.

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Manage virtual instance.

Commands

Name | Description | Type | Status
az workloads sap-virtual-instance create | Create a Virtual Instance for SAP solutions (VIS) resource. | Extension | Preview
az workloads sap-virtual-instance delete | Delete a Virtual Instance for SAP solutions resource and its child resources, that is the associated Central Services Instance, Application Server Instances and Database Instance. | Extension | Preview
az workloads sap-virtual-instance list | List all Virtual Instances for SAP solutions resources in a Resource Group. | Extension | Preview
az workloads sap-virtual-instance show | Show a Virtual Instance for SAP solutions resource. | Extension | Preview
az workloads sap-virtual-instance start | Starts the SAP application, that is the Central Services instance and Application server instances. | Extension | Preview
az workloads sap-virtual-instance stop | Stops the SAP Application, that is the Application server instances and Central Services instance. | Extension | Preview
az workloads sap-virtual-instance update | Update a Virtual Instance for SAP solutions (VIS) resource. | Extension | Preview
az workloads sap-virtual-instance wait | Place the CLI in a waiting state until a condition is met. | Extension | Preview

az workloads sap-virtual-instance create


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Create a Virtual Instance for SAP solutions (VIS) resource.

Azure CLI

az workloads sap-virtual-instance create --name
                                         --resource-group
                                         [--central-server-vm]
                                         [--configuration]
                                         [--environment {NonProd, Prod}]
                                         [--identity]
                                         [--location]
                                         [--managed-resources-network-access-type {Private, Public}]
                                         [--managed-rg-name]
                                         [--managed-rg-sa-name]
                                         [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                         [--sap-product {ECC, Other, S4HANA}]
                                         [--tags]

Examples
Deploy infrastructure for a three-tier distributed SAP system. See sample json payload
here: https://go.microsoft.com/fwlink/?linkid=2230236

Azure CLI
az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"

Install SAP software on the infrastructure deployed for the three-tier distributed SAP
system. See sample json payload here: https://go.microsoft.com/fwlink/?linkid=2230167

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"

Deploy infrastructure for a three-tier distributed Highly Available (HA) SAP system with
customized resource naming. See sample json payload here:
https://go.microsoft.com/fwlink/?linkid=2230402

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"

Install SAP software on the infrastructure deployed for the three-tier distributed Highly
Available (HA) SAP system with customized resource naming. See sample json payload
here: https://go.microsoft.com/fwlink/?linkid=2230340

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"

Register an existing SAP system as a Virtual Instance for SAP solutions resource (VIS)

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --central-server-vm <virtual-machine-id> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"
Register an existing SAP system as a Virtual Instance for SAP solutions resource (VIS)
with a custom Managed Resource Group and Managed Storage Account Name, and
specify the Managed Storage Account Network Access Type setting as per your security
requirements. Learn More: https://go.microsoft.com/fwlink/?linkid=2256933

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --central-server-vm <virtual-machine-id> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}" --managed-rg-name <managed-rg-name> --managed-rg-sa-name <managed-rg-storage-account-name> --managed-resources-network-access-type <public/private>

Deploy infrastructure for a three-tier distributed Highly Available (HA) SAP system with an Azure Compute Gallery image. See sample json payload here: https://go.microsoft.com/fwlink/?linkid=2263420

Azure CLI

az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"

Required Parameters

--name --sap-virtual-instance-name -n

The name of the Virtual Instances for SAP solutions resource.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--central-server-vm

The virtual machine ID or name of the Central Server.


--configuration

Path to the configuration file. Support json-file and yaml-file.

--environment

Defines the environment type - Production/Non Production.


accepted values: NonProd, Prod

--identity

A pre-created user assigned identity with appropriate roles assigned. To learn more
on identity and roles required, visit the ACSS how-to-guide. Support shorthand-
syntax, json-file and yaml-file. Try "??" to show more.

--location -l

The geo-location where the resource lives.

--managed-resources-network-access-type --mrg-network-access-typ

Specifies the network access configuration for the resources that will be deployed in
the Managed Resource Group. The options to choose from are Public and Private. If
'Private' is chosen, the Storage Account service tag should be enabled on the
subnets in which the SAP VMs exist. This is required for establishing connectivity
between VM extensions and the managed resource group storage account. This
setting is currently applicable only to Storage Account. Learn more here
https://go.microsoft.com/fwlink/?linkid=2247228 .
accepted values: Private, Public
default value: Public

--managed-rg-name

Managed resource group name.

--managed-rg-sa-name

The custom storage account name for the storage account created by the service in
the managed resource group created as part of VIS deployment.

--no-wait

Do not wait for the long-running operation to finish.


accepted values: 0, 1, f, false, n, no, t, true, y, yes

--sap-product

Defines the SAP Product type.


accepted values: ECC, Other, S4HANA

--tags

Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show
more.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose
Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance delete


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Delete a Virtual Instance for SAP solutions resource and its child resources, that is the
associated Central Services Instance, Application Server Instances and Database
Instance.

Azure CLI

az workloads sap-virtual-instance delete [--ids]
                                         [--name]
                                         [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                         [--resource-group]
                                         [--subscription]
                                         [--yes]

Examples
Delete a Virtual Instance for SAP solutions (VIS)

Azure CLI

az workloads sap-virtual-instance delete -g <resource-group-name> -n <vis-name>

Remove a Virtual Instance for SAP solutions (VIS) using the Azure resource ID of the VIS

Azure CLI

az workloads sap-virtual-instance delete --id <resource-id>

Optional Parameters
--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--name --sap-virtual-instance-name -n

The name of the Virtual Instances for SAP solutions resource.

--no-wait

Do not wait for the long-running operation to finish.


accepted values: 0, 1, f, false, n, no, t, true, y, yes

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--yes -y

Do not prompt for confirmation.


default value: False

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.


--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance list


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

List all Virtual Instances for SAP solutions resources in a Resource Group.

Azure CLI

az workloads sap-virtual-instance list --resource-group
                                       [--max-items]
                                       [--next-token]

Examples
Get a list of the Virtual Instance(s) for SAP solutions (VIS)

Azure CLI
az workloads sap-virtual-instance list -g <resource-group-name>

Required Parameters

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--max-items

Total number of items to return in the command's output. If the total number of
items available is more than the value specified, a token is provided in the
command's output. To resume pagination, provide the token value in --next-token
argument of a subsequent command.

--next-token

Token to specify where to start paginating. This is the token value from a previously
truncated response.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance show


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Show a Virtual Instance for SAP solutions resource.

Azure CLI

az workloads sap-virtual-instance show [--ids]
                                       [--name]
                                       [--resource-group]
                                       [--subscription]

Examples
Get an overview of any Virtual Instance(s) for SAP solutions (VIS)

Azure CLI

az workloads sap-virtual-instance show -g <resource-group-name> -n <vis-name>

Get an overview of the Virtual Instance(s) for SAP solutions (VIS) using the Azure
resource ID of the VIS

Azure CLI

az workloads sap-virtual-instance show --id <resource-id>

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--name --sap-virtual-instance-name -n

The name of the Virtual Instances for SAP solutions resource.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.


--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance start


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Starts the SAP application, that is the Central Services instance and Application server
instances.

Azure CLI

az workloads sap-virtual-instance start [--ids]
                                        [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                        [--resource-group]
                                        [--sap-virtual-instance-name]
                                        [--start-vm {0, 1, f, false, n, no, t, true, y, yes}]
                                        [--subscription]
Examples
Start an SAP system: This command starts the SAP application tier, that is ASCS instance
and App servers of the system.

Azure CLI

az workloads sap-virtual-instance start -g <resource-group-name> -n <vis-name>

Start an SAP system using the Azure resource ID of the Virtual instance for SAP solutions
(VIS): This command starts the SAP application tier, that is ASCS instance and App
servers of the system.

Azure CLI

az workloads sap-virtual-instance start --id <resource-id>

Start an SAP system with Virtual Machines: This command starts the SAP application tier,
that is ASCS instance and App servers of the system with Virtual Machines.

Azure CLI

az workloads sap-virtual-instance start -g <resource-group-name> -n <vis-name> --start-vm

Optional Parameters

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--no-wait

Do not wait for the long-running operation to finish.


accepted values: 0, 1, f, false, n, no, t, true, y, yes
--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--sap-virtual-instance-name --vis-name

The name of the Virtual Instances for SAP solutions resource.

--start-vm

The boolean value indicates whether to start the virtual machines before starting the
SAP instances.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
default value: False

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query
JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance stop


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Stops the SAP Application, that is the Application server instances and Central Services
instance.

Azure CLI

az workloads sap-virtual-instance stop [--deallocate-vm {0, 1, f, false, n, no, t, true, y, yes}]
                                       [--ids]
                                       [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                       [--resource-group]
                                       [--sap-virtual-instance-name]
                                       [--soft-stop-timeout-seconds]
                                       [--subscription]

Examples
Stop an SAP system: This command stops the SAP application tier, that is ASCS instance
and App servers of the system.

Azure CLI
az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name>

Stop an SAP system using the Azure resource ID of the Virtual instance for SAP solutions
(VIS): This command stops the SAP application tier, that is ASCS instance and App
servers of the system.

Azure CLI

az workloads sap-virtual-instance stop --id <resource-id>

Stop an SAP system with Virtual Machines: This command stops the SAP application tier,
that is ASCS instance and App servers of the system with Virtual Machines.

Azure CLI

az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name> --deallocate-vm

Soft Stop an SAP system: This command soft stops the SAP application tier, that is ASCS
instance and App servers of the system.

Azure CLI

az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name> --soft-stop-timeout-seconds <timeout-in-seconds>

Optional Parameters

--deallocate-vm

The boolean value indicates whether to Stop and deallocate the virtual machines
along with the SAP instances.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
default value: False

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
--no-wait

Do not wait for the long-running operation to finish.


accepted values: 0, 1, f, false, n, no, t, true, y, yes

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--sap-virtual-instance-name --vis-name

The name of the Virtual Instances for SAP solutions resource.

--soft-stop-timeout-seconds

This parameter defines how long (in seconds) the soft shutdown waits until the
RFC/HTTP clients no longer consider the server for calls with load balancing. Value 0
means that the kernel does not wait, but goes directly into the next shutdown state,
i.e. hard stop.
default value: 0

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.


--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance update


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Update a Virtual Instance for SAP solutions (VIS) resource.

Azure CLI

az workloads sap-virtual-instance update [--add]
                                         [--configuration]
                                         [--force-string {0, 1, f, false, n, no, t, true, y, yes}]
                                         [--identity]
                                         [--ids]
                                         [--managed-resource-group-configuration]
                                         [--managed-resources-network-access-type {Private, Public}]
                                         [--name]
                                         [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                         [--remove]
                                         [--resource-group]
                                         [--set]
                                         [--subscription]
                                         [--tags]

Examples
Add tags for an existing Virtual Instance for SAP solutions (VIS) resource

Azure CLI

az workloads sap-virtual-instance update -g <resource-group-name> -n <vis-name> --tags tag1=test1 tag2=test2

Add tags for an existing Virtual Instance for SAP solutions (VIS) resource using the Azure
resource ID of the VIS

Azure CLI

az workloads sap-virtual-instance update --id <resource-id> --tags tag1=test1

Add/Change Identity and Managed Resource Network Access for an existing Virtual
Instance for SAP Solutions (VIS) resource

Azure CLI

az workloads sap-virtual-instance update -g <resource-group-name> -n <vis-name> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}" --managed-resources-network-access-type <public/private>

Optional Parameters

--add

Add an object to a list of objects by specifying a path and key value pairs. Example: -
-add property.listProperty <key=value, string or JSON string>.

--configuration
Defines if the SAP system is being created using Azure Center for SAP solutions (ACSS) or if an existing SAP system is being registered with ACSS. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--force-string

When using 'set' or 'add', preserve string literals instead of attempting to convert to
JSON.
accepted values: 0, 1, f, false, n, no, t, true, y, yes

--identity

Managed service identity (user assigned identities). Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--managed-resource-group-configuration --mrg-config

Managed resource group configuration. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--managed-resources-network-access-type --mrg-network-access-typ

Specifies the network access configuration for the resources that will be deployed in
the Managed Resource Group. The options to choose from are Public and Private. If
'Private' is chosen, the Storage Account service tag should be enabled on the
subnets in which the SAP VMs exist. This is required for establishing connectivity
between VM extensions and the managed resource group storage account. This
setting is currently applicable only to Storage Account. Learn more here
https://go.microsoft.com/fwlink/?linkid=2247228 .
accepted values: Private, Public

--name --sap-virtual-instance-name -n

The name of the Virtual Instances for SAP solutions resource.

--no-wait
Do not wait for the long-running operation to finish.
accepted values: 0, 1, f, false, n, no, t, true, y, yes

--remove

Remove a property or an element from a list. Example: --remove property.list OR --remove propertyToRemove.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--set

Update an object by specifying a property path and value to set. Example: --set
property1.property2=.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--tags

Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show
more.

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az workloads sap-virtual-instance wait


Preview

Command group 'az workloads' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Place the CLI in a waiting state until a condition is met.

Azure CLI

az workloads sap-virtual-instance wait [--created]
                                       [--custom]
                                       [--deleted]
                                       [--exists]
                                       [--ids]
                                       [--interval]
                                       [--name]
                                       [--resource-group]
                                       [--subscription]
                                       [--timeout]
                                       [--updated]
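The reference above doesn't include a usage example. As a sketch with placeholder names, the following blocks until a newly created VIS reports provisioningState Succeeded:

Azure CLI

az workloads sap-virtual-instance wait -g <resource-group-name> -n <vis-name> --created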

Optional Parameters
--created

Wait until created with 'provisioningState' at 'Succeeded'.


default value: False

--custom

Wait until the condition satisfies a custom JMESPath query. E.g. provisioningState!='InProgress', instanceView.statuses[?code=='PowerState/running'].

--deleted

Wait until deleted.


default value: False

--exists

Wait until the resource exists.


default value: False

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--interval

Polling interval in seconds.


default value: 30

--name --sap-virtual-instance-name -n

The name of the Virtual Instances for SAP solutions resource.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--timeout

Maximum wait in seconds.


default value: 3600

--updated

Wait until updated with provisioningState at 'Succeeded'.


default value: False

Global Parameters

--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json

--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose

Increase logging verbosity. Use --debug for full debug logs.


New-AzWorkloadsSapVirtualInstance
Reference

Module: Az.Workloads

Creates a Virtual Instance for SAP solutions (VIS) resource

Syntax
PowerShell

New-AzWorkloadsSapVirtualInstance
-Name <String>
-ResourceGroupName <String>
[-SubscriptionId <String>]
-Environment <SapEnvironmentType>
-Location <String>
-SapProduct <SapProductType>
-CentralServerVmId <String>
[-ManagedRgStorageAccountName <String>]
[-IdentityType <ManagedServiceIdentityType>]
[-ManagedResourceGroupName <String>]
[-Tag <Hashtable>]
[-UserAssignedIdentity <Hashtable>]
[-DefaultProfile <PSObject>]
[-AsJob]
[-NoWait]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

PowerShell

New-AzWorkloadsSapVirtualInstance
-Name <String>
-ResourceGroupName <String>
[-SubscriptionId <String>]
-Environment <SapEnvironmentType>
-Location <String>
-SapProduct <SapProductType>
[-IdentityType <ManagedServiceIdentityType>]
[-ManagedResourceGroupName <String>]
[-Tag <Hashtable>]
[-UserAssignedIdentity <Hashtable>]
-Configuration <String>
[-DefaultProfile <PSObject>]
[-AsJob]
[-NoWait]
[-WhatIf]
[-Confirm]
[<CommonParameters>]

Description
Creates a Virtual Instance for SAP solutions (VIS) resource

Examples

Example 1: Deploy infrastructure for a three-tier distributed SAP system using Virtual Instances for SAP solutions
PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\CreatePayload.json -Tag @{k1 = "v1"; k2 = "v2"} -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Name ResourceGroupName     Health Environment ProvisioningState SapProduct State Status                      Location
---- -----------------     ------ ----------- ----------------- ---------- ----- ------                      --------
L46  PowerShell-CLI-TestRG        NonProd     Succeeded         S4HANA           SoftwareInstallationPending eastus

In this example, you deploy the infrastructure for a three-tier distributed SAP system. A sample JSON payload is linked here: https://go.microsoft.com/fwlink/?linkid=2230236

Example 2: Install SAP software on the infrastructure deployed for the three-tier distributed SAP system using Virtual Instances for SAP solutions
PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\InstallPayload.json -Tag @{k1 = "v1"; k2 = "v2"} -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Name ResourceGroupName     Health Environment ProvisioningState SapProduct State Status               Location
---- -----------------     ------ ----------- ----------------- ---------- ----- ------               --------
L46  PowerShell-CLI-TestRG        NonProd     Succeeded         S4HANA           RegistrationComplete eastus

In this example, you install the SAP software on the deployed infrastructure for a three-tier non-high-availability distributed SAP system. A sample JSON payload is linked here: https://go.microsoft.com/fwlink/?linkid=2230167

Example 3: Deploy infrastructure for a three-tier distributed Highly Available (HA) SAP system using Virtual Instances for SAP solutions
PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name SK1 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\CreatePayloadHACustomNames.json -IdentityType 'UserAssigned' -ManagedResourceGroupName "acss-mrg1" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Name ResourceGroupName     Health Environment ProvisioningState SapProduct State Status                      Location
---- -----------------     ------ ----------- ----------------- ---------- ----- ------                      --------
SK1  PowerShell-CLI-TestRG        NonProd     Succeeded         S4HANA           SoftwareInstallationPending eastus

In this example, you deploy the infrastructure for a three-tier distributed highly available (HA) SAP system.

Example 4: Install SAP software on the infrastructure deployed for the three-tier distributed Highly Available (HA) SAP system using Virtual Instances for SAP solutions
PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name SK1 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\CreatePayloadHACustomNamesInstall.json -IdentityType 'UserAssigned' -ManagedResourceGroupName "acss-mrg1" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Name ResourceGroupName     Health Environment ProvisioningState SapProduct State Status               Location
---- -----------------     ------ ----------- ----------------- ---------- ----- ------               --------
SK1  PowerShell-CLI-TestRG        NonProd     Succeeded         S4HANA           RegistrationComplete eastus

In this example, you install the SAP software on the deployed infrastructure for a three-tier distributed highly available SAP system with a transport directory and customized resource naming.

Example 5: Register an existing SAP system as a VIS


PowerShell

New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'TestRG' -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -CentralServerVmId '/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/powershell-cli-testrg/providers/microsoft.compute/virtualmachines/l46ascsvm' -Tag @{k1 = "v1"; k2 = "v2"} -ManagedResourceGroupName "L46-rg" -ManagedRgStorageAccountName 'acssstoragel46' -IdentityType 'UserAssigned' -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}

Name ResourceGroupName     Health Environment ProvisioningState SapProduct State Status               Location
---- -----------------     ------ ----------- ----------------- ---------- ----- ------               --------
L46  PowerShell-CLI-TestRG        NonProd     Succeeded         S4HANA           RegistrationComplete eastus

Use the New-AzWorkloadsSapVirtualInstance cmdlet with the suggested input parameters to register an existing SAP system as a Virtual Instance for SAP solutions resource.
Parameters
-AsJob

Run the command as a job


Type: SwitchParameter

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-CentralServerVmId

The virtual machine ID of the Central Server


Type: String

Position: Named

Default value: None

Required: True

Accept pipeline input: False

Accept wildcard characters: False

-Configuration

Configuration json path.


Type: String

Position: Named
Default value: None

Required: True

Accept pipeline input: False

Accept wildcard characters: False

-Confirm

Prompts you for confirmation before running the cmdlet.


Type: SwitchParameter

Aliases: cf

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-DefaultProfile

The credentials, account, tenant, and subscription used for communication with
Azure.


Type: PSObject

Aliases: AzureRMContext, AzureCredential

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False


-Environment

Defines the environment type - Production/Non Production.


Type: SapEnvironmentType

Position: Named

Default value: None

Required: True

Accept pipeline input: False

Accept wildcard characters: False

-IdentityType

Type of managed identity


Type: ManagedServiceIdentityType

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-Location

The geo-location where the resource lives


Type: String

Position: Named

Default value: None


Required: True

Accept pipeline input: False

Accept wildcard characters: False

-ManagedResourceGroupName

Managed resource group name


Type: String

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-ManagedRgStorageAccountName

The custom storage account name for the storage account created by the service in
the managed resource group created as part of VIS deployment.

Refer to the storage account naming rules here.

If not provided, the service creates the storage account with a random name.


Type: String

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-Name
The name of the Virtual Instances for SAP solutions resource


Type: String

Aliases: SapVirtualInstanceName

Position: Named

Default value: None

Required: True

Accept pipeline input: False

Accept wildcard characters: False

-NoWait

Run the command asynchronously


Type: SwitchParameter

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-ResourceGroupName

The name of the resource group. The name is case insensitive.


Type: String

Position: Named

Default value: None


Required: True

Accept pipeline input: False

Accept wildcard characters: False

-SapProduct

Defines the SAP Product type.


Type: SapProductType

Position: Named

Default value: None

Required: True

Accept pipeline input: False

Accept wildcard characters: False

-SubscriptionId

The ID of the target subscription.


Type: String

Position: Named

Default value: (Get-AzContext).Subscription.Id

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-Tag

Resource tags.

Type: Hashtable

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-UserAssignedIdentity

User assigned identities dictionary


Type: Hashtable

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False

-WhatIf

Shows what would happen if the cmdlet runs. The cmdlet is not run.


Type: SwitchParameter

Aliases: wi

Position: Named

Default value: None

Required: False

Accept pipeline input: False

Accept wildcard characters: False


Outputs
ISapVirtualInstance

SAP Virtual Instances
Reference

Service: Workloads
API Version: 2023-10-01-preview

Operations
Create                  Creates a Virtual Instance for SAP solutions (VIS) resource

Delete                  Deletes a Virtual Instance for SAP solutions resource and its child resources,
                        that is the associated Central Services Instance, Application Server Instances
                        an...

Get                     Gets a Virtual Instance for SAP solutions resource

List By Resource Group  Gets all Virtual Instances for SAP solutions resources in a Resource Group.

List By Subscription    Gets all Virtual Instances for SAP solutions resources in a Subscription.

Start                   Starts the SAP application, that is the Central Services instance and
                        Application server instances.

Stop                    Stops the SAP Application, that is the Application server instances and
                        Central Services instance.

Update                  Updates a Virtual Instance for SAP solutions resource
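
For example, you can call the Get operation directly against the Azure Resource Manager
REST endpoint. The following PowerShell sketch assumes the Az module is installed; the
subscription, resource group, and VIS names are placeholders, while the API version comes
from this reference and the resource type Microsoft.Workloads/sapVirtualInstances is the
standard provider path for Virtual Instances for SAP solutions.

PowerShell

# Call the Get operation for a Virtual Instance for SAP solutions resource
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Workloads/sapVirtualInstances/<vis-name>" +
        "?api-version=2023-10-01-preview"

$response = Invoke-AzRestMethod -Path $path -Method GET
$response.Content | ConvertFrom-Json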


SAP Deployment Automation
Framework
Article • 12/21/2023

SAP Deployment Automation Framework is an open-source orchestration tool that
can deploy, install, and maintain SAP environments. You can deploy the systems on any
of the SAP-supported operating system versions and into any Azure region. You can
create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with
AnyDB by using Terraform . The environments can be configured using Ansible .

Terraform from Hashicorp is an open-source tool for provisioning and managing
cloud infrastructure.

Ansible is an open-source platform by Red Hat that automates cloud provisioning,
configuration management, and application deployments. When you use Ansible, you
can automate deployment and configuration of resources in your environment.

The automation framework has two main components:

Deployment infrastructure (control plane, typically deployed in the hub)
SAP infrastructure (SAP workload zone, typically deployed in a spoke)

The dependency between the control plane and the application plane is illustrated in
the following diagram. In a typical deployment, a single control plane is used to manage
multiple SAP deployments.
You use the control plane of SAP Deployment Automation Framework to deploy the SAP
infrastructure and the SAP application. The deployment uses Terraform templates to
create the infrastructure as a service (IaaS)-defined infrastructure that hosts the SAP
applications.

Note

This automation framework is based on Microsoft best practices and principles for
SAP on Azure. To understand how to use certified virtual machines (VMs) and
storage solutions for stability, reliability, and performance, see Get started with
SAP automation framework on Azure.
This automation framework also follows the Microsoft Cloud Adoption Framework
for Azure.

You can use the automation framework to deploy the following SAP architectures:

Standalone: For this architecture, all the SAP roles are installed on a single server.
Distributed: With this architecture, you can separate the database server and the
application tier. The application tier can further be separated in two by having SAP
central services on a VM and one or more application servers.
Distributed (highly available): This architecture is similar to the distributed
architecture. In this deployment, the database and/or SAP central services can both
be configured by using a highly available configuration that uses two VMs, each
with Pacemaker clusters.

About the control plane


The control plane houses the deployment infrastructure from which other environments
are deployed. After the control plane is deployed, it rarely needs to be redeployed, if
ever.

The control plane provides the following services:

Deployment agents for running:
Terraform deployment
Ansible configuration
Persistent storage for the Terraform state files
Persistent storage for the downloaded SAP software
Azure Key Vault for secure storage for deployment credentials
Private DNS zone (optional)
A Web application for configuration management

The control plane is typically a regional resource deployed into the hub subscription in a
hub-and-spoke architecture.

The following diagram shows the key components of the control plane and the
workload zone.
The application configuration is performed from the deployment agents in the control
plane by using a set of predefined playbooks. These playbooks will:

Configure base operating system settings.
Configure SAP-specific operating system settings.
Make the installation media available in the system.
Install the SAP system components.
Install the SAP database (SAP HANA and AnyDB).
Configure high availability by using Pacemaker.
Configure high availability for your SAP database.

For more information about how to configure and deploy the control plane, see
Configure the control plane and Deploy the control plane.

Deployer VMs
These VMs are used to run the orchestration scripts that deploy the Azure resources by
using Terraform. They're also Ansible controllers and are used to execute the Ansible
playbooks on all the managed nodes, that is, the VMs of an SAP deployment.

About the SAP workload zone


The workload zone allows for partitioning of the SAP system deployments into different
environments, such as development, test, and production. The workload zone provides
the shared resources (networking and credentials management) that are used by the
SAP systems.

You would typically create a workload zone for each unique Azure virtual network
(VNet) that you want to deploy the SAP systems into.

The SAP workload zone provides the following services to the SAP systems:

Virtual network
Azure Key Vault for system credentials (VMs and SAP accounts)
Shared storage (optional)

We recommend deploying the workload zone into a spoke subscription in a hub-and-spoke
architecture and using a dedicated deployment credential for each workload zone.

For more information about how to configure and deploy the SAP workload zone, see
Configure the workload zone and Deploy the SAP workload zone.

About the SAP systems


Each SAP system is deployed into a dedicated resource group and uses the services
from the workload zone.

The SAP system deployment consists of the VMs and the associated resources required
to run the SAP application, including the web, app, and database tiers.

For more information about how to configure and deploy the SAP system, see Configure
the SAP system and Deploy the SAP system.

Software acquisition process


The framework also provides an Ansible playbook that can be used to download the
software from SAP and persist it in the storage accounts in the control plane's SAP
library resource group.

The software acquisition process uses an SAP application manifest file that contains the list of
SAP software to be downloaded. The manifest file is a YAML file that contains the:

List of files to be downloaded.
List of the product IDs for the SAP application components.
Set of template files used to provide the parameters for the unattended installation.
The SAP software download playbook processes the manifest file and the dependent
manifest files and downloads the SAP software from SAP by using the specified SAP user
account. The software is downloaded to the SAP library storage account and is available
for the installation process.

As part of the download process, the application manifest and the supporting templates
are also persisted in the storage account. The application manifest and the dependent
manifests are aggregated into a single manifest file that is used by the installation
process.

Glossary
The following terms are important concepts for understanding the automation
framework.

SAP concepts

Term           Description

System         An instance of an SAP application that contains the resources the application
               needs to run. Defined by a unique three-letter identifier, the SID.

Landscape      A collection of systems in different environments within an SAP application.
               For example, SAP ERP Central Component (ECC), SAP customer relationship
               management (CRM), and SAP Business Warehouse (BW).

Workload zone  Partitions the SAP applications to environments, such as nonproduction and
               production environments or development, quality assurance, and production
               environments. Provides shared resources, such as virtual networks and key
               vaults, to all systems within.

The following diagram shows the relationships between SAP systems, workload zones
(environments), and landscapes. In this example setup, the customer has three SAP
landscapes: ECC, CRM, and BW. Each landscape contains three workload zones:
production, quality assurance, and development. Each workload zone contains one or
more systems.
Deployment components

Term           Description                                                  Scope

Deployer       A VM that can execute Terraform and Ansible commands.       Region

Library        Provides storage for the Terraform state files and the SAP  Region
               installation media.

Workload zone  Contains the virtual network for the SAP systems and a      Workload zone
               key vault that holds the system credentials.

System         The deployment unit for the SAP application (SID).          Workload zone
               Contains all infrastructure assets.

Next steps
Get started with the deployment automation framework

Plan for the automation framework

Configure Azure DevOps for the automation framework

Configure the control plane

Configure the workload zone

Configure the SAP system


Supportability matrix for the SAP
automation framework
Article • 03/10/2024

SAP Deployment Automation Framework supports deployment of all the supported SAP
on Azure topologies.

Supported operating systems


The automation framework supports the following operating systems.

Control plane
The deployer virtual machine of the control plane must be deployed on Linux because
the Ansible controllers only work on Linux.

SAP infrastructure
The automation framework supports deployment of the SAP on Azure infrastructure
on either Linux or Windows virtual machines on x86-64 (x64) hardware.

The framework supports the following operating systems and distributions:

Windows Server 64-bit for the x86-64 platform
SUSE Linux 64-bit for the x86-64 platform (12.x and 15.x)
Red Hat Linux 64-bit for the x86-64 platform (7.x and 8.x)
Oracle Linux 64-bit for the x86-64 platform

The following distributions have been tested with the framework:

Distribution    Versions

Red Hat         7.9, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2
SUSE            12 SP4, 15 SP2, 15 SP3, 15 SP4, 15 SP5
Oracle Linux    8.2, 8.4, 8.6, 8.8, 8.9
Windows Server  2016, 2019, 2022


Supported database back ends
The automation framework supports the following database back ends:

Database          Versions

SAP HANA (S4/NW)  1909, 2020, 2021, 2022, 2023
ASE               1603SP11, 1603SP14
DB2               11.5
MS SQL Server     2016, 2019, 2022

Supported storage types


The automation framework supports the following storage types:

Storage solution    Notes

Premium_SSD
Premium_SSDv2
Ultra_SSD           Limited to certain scenarios. For instance, /hana/log on eligible SKU.
Azure NetApp Files  For HANA; AVG support also available.
Azure Files NFS     For shared files, not for database files.

Encryption using Azure Disk Encryption with customer-managed keys is supported.

Supported SAP topologies


By default, the automation framework deploys with database and application tiers. The
application tier is split into three more tiers: application, central services, and web
dispatchers.

Deployment        Notes

Standalone        All SAP roles are installed on a single server.

Distributed       Separate database server and application tier. The application tier can be
                  further split by having SAP central services on one VM and one or more
                  application servers on another.

Distributed (HA)  Database and/or SAP Central Services are deployed as highly available by
                  using Pacemaker.

You can also deploy the automation framework to a standalone server by specifying a
configuration without an application tier.

Supported deployment topologies


The automation framework supports both green-field and brown-field deployments.

Green-field deployments
In a green-field deployment, the automation framework creates all the required
resources.

In this scenario, you provide the relevant data (address spaces for networks and
subnets) when you configure the environment. For more examples, see Configure the
workload zone.

Brown-field deployments
In a brown-field deployment, you can use existing Azure resources as part of the
deployment.

In this scenario, you provide the Azure resource identifiers for the existing resources
when you configure the environment. For more examples, see Configure the workload
zone.

Supported Azure features


The automation framework can use the following Azure services, features, and
capabilities:

Azure Virtual Machines
  Accelerated networking
  Anchor VMs (optional)
  SSH authentication/Username and password authentication
  SKU configuration
  Custom images
  New or existing proximity placement groups
Azure Virtual Network
  Deployment in networks peered to your SAP network
  Customer-specified IP addressing
  Azure-provided IP addressing
  New or existing network security groups
  New or existing virtual networks
  New or existing subnets
  Private endpoints
Azure availability zones
  High availability (HA)
Azure Firewall
Azure Load Balancer
  Standard load balancers
Azure Storage
  Boot diagnostics storage
  SAP installation media storage
  Terraform state file storage
  Cloud Witness storage for HA scenarios
Azure Key Vault
  New or existing key vaults
  Customer-managed keys for disk encryption
Azure application security groups
Azure Files for NFS
Azure NetApp Files
  For shared files
  For database files

Next step
Get started with the automation framework
Plan your deployment of the SAP
automation framework
Article • 03/11/2024

There are multiple considerations for planning SAP deployments using SAP
Deployment Automation Framework, including subscription planning, credentials
management, and virtual network design.

For generic SAP on Azure design considerations, see Introduction to an SAP adoption
scenario.

Note

The Terraform deployment uses Terraform templates provided by Microsoft from
the SAP Deployment Automation Framework repository . The templates use
parameter files with your system-specific information to perform the deployment.

Subscription planning
You should deploy the control plane and the workload zones in different subscriptions.
The control plane should reside in a hub subscription that is used to host the
management components of the SAP automation framework.

The SAP systems should be hosted in spoke subscriptions, which are dedicated to the
SAP systems. An example of partitioning the systems would be to host the development
systems in a separate subscription with a dedicated virtual network and the production
systems would be hosted in their own subscription with a dedicated virtual network.

This approach provides both a security boundary and clear separation of duties and
responsibilities. For example, the SAP Basis team can deploy systems into the
workload zones, and the infrastructure team can manage the control plane.

Control plane planning


You can perform the deployment and configuration activities from either Azure Pipelines
or by using the provided shell scripts directly from Azure-hosted Linux virtual machines.
This environment is referred to as the control plane. For setting up Azure DevOps for the
deployment framework, see Set up Azure DevOps for SAP Deployment Automation
Framework. For setting up a Linux virtual machine as the deployer, see Set up Linux
virtual machines for SAP Deployment Automation Framework.

Before you design your control plane, consider the following questions:

In which regions do you need to deploy SAP systems?
Is there a dedicated subscription for the control plane?
Is there a dedicated deployment credential (service principal) for the control plane?
Is there an existing virtual network or is a new virtual network needed?
How is outbound internet provided for the virtual machines?
Are you going to deploy Azure Firewall for outbound internet connectivity?
Are private endpoints required for storage accounts and the key vault?
Are you going to use an existing private DNS zone for the virtual machines or use
the control plane for hosting Private DNS?
Are you going to use Azure Bastion for secure remote access to the virtual
machines?
Are you going to use the SAP Deployment Automation Framework configuration
web application for performing configuration and deployment activities?

Control plane
The control plane provides the following services:

Deployment VMs, which do Terraform deployments and Ansible configuration. They
also act as Azure DevOps self-hosted agents.
A key vault, which contains the deployment credentials (service principals) used by
Terraform when performing the deployments.
Azure Firewall for providing outbound internet connectivity.
Azure Bastion for providing secure remote access to the deployed virtual
machines.
An SAP Deployment Automation Framework configuration Azure web application
for performing configuration and deployment activities.

The control plane is defined by using two configuration files, one for the deployer and
one for the SAP Library.

The deployment configuration file defines the region, environment name, and virtual
network information. For example:

tfvars

# Deployer Configuration File
environment = "MGMT"
location = "westeurope"
management_network_logical_name = "DEP01"

management_network_address_space = "10.170.20.0/24"
management_subnet_address_prefix = "10.170.20.64/28"

firewall_deployment = true
management_firewall_subnet_address_prefix = "10.170.20.0/26"

bastion_deployment = true
management_bastion_subnet_address_prefix = "10.170.20.128/26"

use_webapp = true
webapp_subnet_address_prefix = "10.170.20.192/27"
deployer_assign_subscription_permissions = true

deployer_count = 2

use_service_endpoint = false
use_private_endpoint = false
public_network_access_enabled = true

DNS considerations
When you plan the DNS configuration for the automation framework, consider the
following questions:

Is there an existing private DNS that the solutions can integrate with or do you
need to use a custom private DNS zone for the deployment environment?
Are you going to use predefined IP addresses for the virtual machines or let Azure
assign them dynamically?

You can integrate with an existing private DNS zone by providing the following values in
your tfvars files:

tfvars

management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
#management_dns_resourcegroup_name = "RESOURCEGROUPNAME"
use_custom_dns_a_registration = false

Without these values, a private DNS zone is created in the SAP library resource group.

For more information, see the in-depth explanation of how to configure the deployer.
SAP library configuration
The SAP library resource group provides storage for SAP installation media, Bill of
Material files, Terraform state files, and, optionally, the private DNS zones. The
configuration file defines the region and environment name for the SAP library. For
parameter information and examples, see Configure the SAP library for automation.

Workload zone planning


Most SAP application landscapes are partitioned in different tiers. In SAP Deployment
Automation Framework, these tiers are called workload zones. For example, you might
have different workload zones for development, quality assurance, and production
systems. For more information, see Workload zones.

The workload zone provides the following shared services for the SAP applications:

Azure Virtual Network, for virtual networks, subnets, and network security groups.
Azure Key Vault, for storing the virtual machine and SAP system credentials.
Azure Storage accounts for boot diagnostics and Cloud Witness.
Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.

Before you design your workload zone layout, consider the following questions:

In which regions do you need to deploy workloads?


How many workload zones does your scenario require (development, quality
assurance, and production)?
Are you deploying into new virtual networks or are you using existing virtual
networks?
What storage type do you need for the shared storage (Azure Files NFS or Azure
NetApp Files)?

The default naming convention for workload zones is [ENVIRONMENT]-[REGIONCODE]-
[NETWORK]-INFRASTRUCTURE . For example, DEV-WEEU-SAP01-INFRASTRUCTURE is for a
development environment hosted in the West Europe region by using the SAP01 virtual
network. PRD-WEEU-SAP02-INFRASTRUCTURE is for a production environment hosted in the
West Europe region by using the SAP02 virtual network.

The SAP01 and SAP02 designations define the logical names for the Azure virtual
networks. They can be used to further partition the environments. Suppose you need
two Azure virtual networks for the same workload zone. For example, you might have a
multi-subscription scenario where you host development environments in two
subscriptions. You can use the different logical names for each virtual network. For
example, you can use DEV-WEEU-SAP01-INFRASTRUCTURE and DEV-WEEU-SAP02-
INFRASTRUCTURE .
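
As a quick illustration, the following PowerShell sketch composes the default workload
zone name from its parts; the values are the examples used in this section.

PowerShell

# Compose the default workload zone name: [ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE
$environment = 'DEV'    # environment name
$regionCode  = 'WEEU'   # region code for West Europe
$network     = 'SAP01'  # logical name for the virtual network

'{0}-{1}-{2}-INFRASTRUCTURE' -f $environment, $regionCode, $network
# Output: DEV-WEEU-SAP01-INFRASTRUCTURE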

For more information, see Configure a workload zone deployment for automation.

Windows-based deployments
When you perform Windows-based deployments, the virtual machines in the workload
zone's virtual network need to be able to communicate with Active Directory to join the
SAP virtual machines to the Active Directory domain. The provided DNS name needs to
be resolvable by Active Directory.

SAP Deployment Automation Framework doesn't create accounts in Active Directory, so
the accounts need to be precreated and stored in the workload zone key vault.

Credential                                              Name                                       Example

Account that can perform domain join activities         [IDENTIFIER]-ad-svc-account                DEV-WEEU-SAP01-ad-svc-account

Password for the account that performs the domain join  [IDENTIFIER]-ad-svc-account-password       DEV-WEEU-SAP01-ad-svc-account-password

sidadm account password                                 [IDENTIFIER]-[SID]-win-sidadm_password_id  DEV-WEEU-SAP01-W01-winsidadm_password_id

SID Service account password                            [IDENTIFIER]-[SID]-svc-sidadm-password     DEV-WEEU-SAP01-W01-svc-sidadm-password

SQL Server Service account                              [IDENTIFIER]-[SID]-sql-svc-account         DEV-WEEU-SAP01-W01-sql-svc-account

SQL Server Service account password                     [IDENTIFIER]-[SID]-sql-svc-password        DEV-WEEU-SAP01-W01-sql-svc-password

SQL Server Agent Service account                        [IDENTIFIER]-[SID]-sql-agent-account       DEV-WEEU-SAP01-W01-sql-agent-account

SQL Server Agent Service account password               [IDENTIFIER]-[SID]-sql-agent-password      DEV-WEEU-SAP01-W01-sql-agent-password

DNS settings
For high-availability scenarios, a DNS record is needed in the Active Directory for the
SAP central services cluster. The DNS record needs to be created in the Active Directory
DNS zone. The DNS record name is defined as [sid]scs[scs instance number]cl1 . For
example, w01scs00cl1 is used for the cluster, with W01 for the SID and 00 for the
instance number.
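
To make the pattern concrete, here's a one-line PowerShell sketch that builds the cluster
DNS record name from the example values in this section:

PowerShell

# Build the cluster DNS record name: [sid]scs[instance number]cl1
$sid = 'w01'; $instanceNumber = '00'
"${sid}scs${instanceNumber}cl1"   # Output: w01scs00cl1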

Credentials management
The automation framework uses service principals for infrastructure deployment. We
recommend using different deployment credentials (service principals) for each
workload zone. The framework stores these credentials in the deployer's key vault. Then,
the framework retrieves these credentials dynamically during the deployment process.
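
If you need to inspect a stored credential, you can read it from the deployer's key vault.
A minimal sketch using the Az.KeyVault module; the vault name and secret name below are
hypothetical placeholders, so substitute the names from your own deployment.

PowerShell

# Read a deployment credential from the deployer key vault (names are placeholders)
$clientSecret = Get-AzKeyVaultSecret -VaultName 'MGMTWEEUDEP00userxxx' -Name 'DEV-client-secret' -AsPlainText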

SAP and virtual machine credentials management


The automation framework uses the workload zone key vault for storing both the
automation user credentials and the SAP system credentials. The following table lists the
names of the virtual machine credentials.

Credential                    Name                                       Example

Private key                   [IDENTIFIER]-sshkey                        DEV-WEEU-SAP01-sid-sshkey

Public key                    [IDENTIFIER]-sshkey-pub                    DEV-WEEU-SAP01-sid-sshkey-pub

Username                      [IDENTIFIER]-username                      DEV-WEEU-SAP01-sid-username

Password                      [IDENTIFIER]-password                      DEV-WEEU-SAP01-sid-password

sidadm password               [IDENTIFIER]-[SID]-sap-password            DEV-WEEU-SAP01-X00-sap-password

sidadm account password       [IDENTIFIER]-[SID]-winsidadm_password_id   DEV-WEEU-SAP01-W01-winsidadm_password_id

SID Service account password  [IDENTIFIER]-[SID]-svc-sidadm-password     DEV-WEEU-SAP01-W01-svc-sidadm-password

Service principal creation


To create your service principal:
1. Sign in to the Azure CLI with an account that has permissions to create a service
principal

2. Create a new service principal by running the command az ad sp create-for-rbac .
   Make sure to use a descriptive name for --name . For example:

Azure CLI

az ad sp create-for-rbac --role="Contributor" --
scopes="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --
name="DEV-Deployment-Account"

3. Note the output. You need the application identifier ( appId ), password ( password ),
and tenant identifier ( tenant ) for the next step. For example:

JSON

{
"appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"displayName": "DEV-Deployment-Account",
"name": "http://DEV-Deployment-Account",
"password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

4. Assign the User Access Administrator role to your service principal. For example:

Azure CLI

az role assignment create --assignee <your-application-ID> --role "User Access Administrator" --scope /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>

For more information, see the Azure CLI documentation for creating a service principal.

Important

If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions by using the automation.
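
To verify the assignments before you run a deployment, you can list the roles held by the
service principal. A minimal sketch using the Az.Resources module; the application ID is a
placeholder.

PowerShell

# List the role assignments held by the deployment service principal (placeholder app ID)
Get-AzRoleAssignment -ServicePrincipalName 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' |
    Select-Object RoleDefinitionName, Scope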

Permissions management
In a locked-down environment, you might need to assign another permission to the
service principals. For example, you might need to assign the User Access Administrator
role to the service principal.

Required permissions
The following table shows the required permissions for the service principals.

Credential                        Area                                 Required permissions                     Duration

Control Plane SPN                 Control plane subscription           Contributor
Control Plane SPN                 Deployer resource group              Contributor
Control Plane SPN                 Deployer resource group              User Access Administrator                During setup
Control Plane SPN                 SAP Library resource group           Contributor
Control Plane SPN                 SAP Library resource group           User Access Administrator
Workload Zone SPN                 Target subscription                  Contributor
Workload Zone SPN                 Workload zone resource group         Contributor, User Access Administrator
Workload Zone SPN                 Control plane subscription           Reader
Workload Zone SPN                 Control plane virtual network        Network contributor
Workload Zone SPN                 SAP library tfstate storage account  Storage account contributor
Workload Zone SPN                 SAP library sapbits storage account  Reader
Workload Zone SPN                 Private DNS zone                     Private DNS zone contributor
Web Application Identity          Target subscription                  Reader
Cluster Virtual Machine Identity  Resource group                       Fencing role
Firewall configuration

Component           Addresses                                                        Duration                        Notes

SDAF                github.com/Azure/sap-automation,                                 Setup of deployer
                    github.com/Azure/sap-automation-samples,
                    githubusercontent.com

Terraform           releases.hashicorp.com, registry.terraform.io,                   Setup of deployer               See Installing Terraform.
                    checkpoint-api.hashicorp.com

Azure CLI           Installing Azure CLI                                             Setup of deployer and           The firewall requirements for the Azure CLI
                                                                                     during deployments              installation are defined in Installing Azure CLI.

PIP                 bootstrap.pypa.io                                                Setup of deployer               See Installing Ansible.

Ansible             pypi.org, pythonhosted.org, files.pythonhosted.org,              Setup of deployer
                    galaxy.ansible.com,
                    https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com

PowerShell Gallery  onegetcdn.azureedge.net,                                         Setup of Windows-based systems  See PowerShell Gallery.
                    psg-prod-centralus.azureedge.net,
                    psg-prod-eastus.azureedge.net

Windows components  download.visualstudio.microsoft.com,                             Setup of Windows-based systems  See Visual Studio components.
                    download.visualstudio.com

SAP downloads       softwaredownloads.sap.com                                        SAP software download           See SAP downloads.

Azure DevOps agent  https://vstsagentpackage.azureedge.net                           Setup of Azure DevOps
You can test connectivity to these URLs from a Linux virtual machine in Azure by using a
PowerShell script that uses the run-command feature in Azure.

The following example shows how to test the connectivity to the URLs by using an
interactive PowerShell script.

PowerShell

# Create (or reuse) an SDAF working directory appropriate for the platform
$sdaf_path = Get-Location
if ( $PSVersionTable.Platform -eq "Unix") {
    if ( -Not (Test-Path "SDAF") ) {
      $sdaf_path = New-Item -Path "SDAF" -Type Directory
    }
}
else {
    $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
    if ( -not (Test-Path $sdaf_path)) {
        New-Item -Path $sdaf_path -Type Directory
    }
}

Set-Location -Path $sdaf_path

# Clone the automation framework repository, which contains the test script
git clone https://github.com/Azure/sap-automation.git

cd sap-automation
cd deploy
cd scripts

# Run the URL connectivity test script for the current platform
if ( $PSVersionTable.Platform -eq "Unix") {
    ./Test-SDAFURLs.ps1
}
else {
    .\Test-SDAFURLs.ps1
}

DevOps structure
The deployment framework uses three separate repositories for the deployment
artifacts. For your own parameter files, it's a best practice to keep these files in a source
control repository that you manage.

Main repository
This repository contains the Terraform parameter files and the files needed for the
Ansible playbooks for all the workload zone and system deployments.

You can create this repository by cloning the SAP Deployment Automation Framework
bootstrap repository into your source control repository.

Important

This repository must be the default repository for your Azure DevOps project.

Folder structure
The following sample folder hierarchy shows how to structure your configuration files
along with the automation framework files.

Folder name  Contents                                Description

BOMS         BoM files                               Used for manual BoM download.

DEPLOYER     Configuration files for the deployer    A folder with deployer configuration files for all deployments that the
                                                     environment manages. Name each subfolder by the naming convention of
                                                     Environment - Region - Virtual Network. For example,
                                                     PROD-WEEU-DEP00-INFRASTRUCTURE.

LIBRARY      Configuration files for SAP library     A folder with SAP library configuration files for all deployments that the
                                                     environment manages. Name each subfolder by the naming convention of
                                                     Environment - Region - Virtual Network. For example,
                                                     PROD-WEEU-SAP-LIBRARY.

LANDSCAPE    Configuration files for workload zones  A folder with configuration files for all workload zones that the
                                                     environment manages. Name each subfolder by the naming convention
                                                     Environment - Region - Virtual Network. For example,
                                                     PROD-WEEU-SAP00-INFRASTRUCTURE.

SYSTEM       Configuration files for the SAP         A folder with configuration files for all SAP System Identification (SID)
             systems                                 deployments that the environment manages. Name each subfolder by the
                                                     naming convention Environment - Region - Virtual Network - SID. For
                                                     example, PROD-WEEU-SAP00-ABC.
Your parameter file's name becomes the name of the Terraform state file. Make sure to
use a unique parameter file name for this reason.

Code repository
This repository contains the Terraform automation templates, the Ansible playbooks,
and the deployment pipelines and scripts. For most use cases, consider this repository
as read-only and don't modify it.

To create this repository, clone the SAP Deployment Automation Framework
repository into your source control repository.

Name this repository sap-automation .

Sample repository
This repository contains the sample Bill of Materials files and the sample Terraform
configuration files.

To create this repository, clone the SAP Deployment Automation Framework samples
repository into your source control repository.

Name this repository samples .

Supported deployment scenarios


The automation framework supports deployments that use both new and existing Azure resources.

Azure regions
Before you deploy a solution, it's important to consider which Azure regions to use.
Different Azure regions might be in scope depending on your specific scenario.

The automation framework supports deployments into multiple Azure regions. Each
region hosts:

The deployment infrastructure.
The SAP library with state files and installation media.
1-N workload zones.
1-N SAP systems in the workload zones.

Deployment environments
If you're supporting multiple workload zones in a region, use a unique identifier for your
deployment environment and SAP library. Don't use the identifier for the workload zone.
For example, use MGMT for management purposes.

The automation framework also supports having the deployment environment and SAP
library in separate subscriptions than the workload zones.

The deployment environment provides the following services:

One or more deployment virtual machines, which perform the infrastructure
deployments by using Terraform and perform the system configuration and SAP
installation by using Ansible playbooks.
A key vault, which contains service principal identity information for use by
Terraform deployments.
An Azure Firewall component, which provides outbound internet connectivity.

The deployment configuration file defines the region, environment name, and virtual
network information. For example:

Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment = "MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "westeurope"

# management_network_address_space is the address space for the management virtual network
management_network_address_space = "10.10.20.0/25"

# management_subnet_address_prefix is the address prefix for the management subnet
management_subnet_address_prefix = "10.10.20.64/28"

# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet
management_firewall_subnet_address_prefix = "10.10.20.0/26"

# management_bastion_subnet_address_prefix is a mandatory parameter if bastion is deployed and if the subnets are not defined in the workload or if existing subnets are not used
management_bastion_subnet_address_prefix = "10.10.20.128/26"

deployer_enable_public_ip = false

firewall_deployment = true

bastion_deployment = true

For more information, see the in-depth explanation of how to configure the deployer.

Workload zone structure


Most SAP configurations have multiple workload zones for different application tiers.
For example, you might have different workload zones for development, quality
assurance, and production.

You create or grant access to the following services in each workload zone:

Azure Virtual Networks, for virtual networks, subnets, and network security groups.
Azure Key Vault, for system credentials and the deployment service principal.
Azure Storage accounts, for boot diagnostics and Cloud Witness.
Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.

Before you design your workload zone layout, consider the following questions:

How many workload zones does your scenario require?
In which regions do you need to deploy workloads?
What's your deployment scenario?

For more information, see Configure a workload zone deployment for automation.
SAP system setup
The SAP system contains all Azure components required to host the SAP application.

Before you configure the SAP system, consider the following questions:

What database back end do you want to use?
How many database servers do you need?
Does your scenario require high availability?
How many application servers do you need?
How many web dispatchers do you need, if any?
How many central services instances do you need?
What size virtual machine do you need?
Which virtual machine image do you want to use? Is the image on Azure
Marketplace or custom?
Are you deploying to a new or existing deployment scenario?
What's your IP allocation strategy? Do you want Azure to set IPs or use custom
settings?

For more information, see Configure the SAP system for automation.

Deployment flow
When you plan a deployment, it's important to consider the overall flow. There are three
main steps of an SAP deployment on Azure with the automation framework.

1. Deploy the control plane. This step deploys components to support the SAP
automation framework in a specified Azure region.
a. Create the deployment environment.
b. Create shared storage for Terraform state files.
c. Create shared storage for SAP installation media.

2. Deploy the workload zone. This step deploys the workload zone components, such
as the virtual network and key vaults.

3. Deploy the system. This step includes the infrastructure for the SAP system
deployment and the SAP configuration and SAP installation.

Naming conventions
The automation framework uses a default naming convention. If you want to use a
custom naming convention, plan and define your custom names before deployment. For
more information, see Configure the naming convention.

Disk sizing
If you want to configure custom disk sizes, make sure to plan your custom setup before
deployment.

Next step
Manual deployment of the automation framework
Naming conventions for SAP
Deployment Automation Framework
Article • 12/12/2023

SAP Deployment Automation Framework uses standard naming conventions. Consistent
naming helps the automation framework run correctly with Terraform. Standard naming
helps you deploy the automation framework smoothly. For example, consistent naming
helps you to:

Deploy the SAP virtual network infrastructure into any supported Azure region.
Do multiple deployments with partitioned virtual networks.
Deploy the SAP system into any SAP workload zone.
Run regular and high availability instances.
Do disaster recovery and fall forward behavior.

Review the standard terms, area paths, and variable names before you begin your
deployment. If necessary, you can also configure custom naming.

Placeholder values
The naming convention's example formats use the following placeholder values.

Placeholder      Concept                     Character limit  Example

{ENVIRONMENT}    Environment                 5                DEV , PROTO , NP , PROD
{REGION_MAP}     Region map                  4                weus for westus
{SAP_VNET}       SAP virtual network         7                SAP0
{SID}            SAP system identifier       3                X01
{PREFIX}         SAP resource prefix                          DEV-WEEU-SAP01-X01
{DEPLOY_VNET}    Deployer virtual network    7
{REMOTE_VNET}    Remote virtual network      7
{LOCAL_VNET}     Local virtual network       7
{CODENAME}       Logical name for version                     version1 , beta
{VM_NAME}        VM name
{SUBNET}         Subnet
{DBSID}          Database system identifier
{DIAG}                                       5
{RND}                                        3
{USER}                                       12
{COMPUTER_NAME}                              14

Deployer names
For an explanation of the Format column, see the definitions for placeholder values.

Concept                      Character limit  Format                                                         Example

Resource group               80               {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}-INFRASTRUCTURE        MGMT-WEEU-DEP00-INFRASTRUCTURE
Virtual network              38 (64)          {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}-vnet                  MGMT-WEEU-DEP00-vnet
Subnet                       80               {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_deployment-subnet     MGMT-WEEU-DEP00_deployment-subnet
Storage account              24               {ENVIRONMENT}{REGION_MAP}{SAP_VNET}{DIAG}{RND}                 mgmtweeudep00diagxxx
Network security group       80               {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_deployment-nsg        MGMT-WEEU-DEP00_deployment-nsg
Route table                                   {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_routeTable            MGMT-WEEU-DEP00_route-table
Network interface component  80               {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_{COMPUTER_NAME}-nic   -ipconfig1
Disk                                          {vm.name}-deploy00                                             PROTO-WUS2-DEPLOY_deploy00-disk00
Virtual machine name                          {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_deploy##                 MGMT-WEEU-DEP00_permweeudep00deploy00
Operating system (OS) disk                    {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_deploy##-OsDisk       PERM-WEEU-DEP00_permweeudep00deploy00-OsDisk
Computer name                                 {environment[_map]}{DEPLOY_VNET}{region_map}deploy##           MGMT-WEEU-DEP00_permweeudep00deploy00
Key vault                    24               {ENVIRONMENT}{REGION_MAP}{DEPLOY_VNET}{USER}{RND}              MGMTWEEUDEP00userxxx
Public IP address                             {ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_{COMPUTER_NAME}-pip   MGMT-WEEU-DEP00_permweeudep00deploy00-pip

SAP library names


For an explanation of the Format column, see the definitions for placeholder values.

Concept          Character limit  Format                                         Example

Resource group   80               {ENVIRONMENT}-{REGION_MAP}-SAP_LIBRARY         MGMT-WEEU-SAP_LIBRARY
Storage account  24               {ENVIRONMENT}{REGION_MAP}saplib(12CHAR){RND}   mgmtweeusaplibxxx
Storage account  24               {ENVIRONMENT}{REGION_MAP}tfstate(12CHAR){RND}  mgmtweeutfstatexxx

SAP workload zone names


For an explanation of the Format column, see the definitions for placeholder values.
Concept                        Character limit  Format                                                                        Example

Resource group                 80               {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}-INFRASTRUCTURE                          DEV-WEEU-SAP01-INFRASTRUCTURE
Virtual network                38 (64)          {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}-vnet                                    DEV-WEEU-SAP01-vnet
Peering                        80               {LOCAL_VNET}_to_{REMOTE_VNET}                                                 DEV-WEEU-SAP01-vnet_to_MGMT-WEEU-DEP00-vnet
Subnet                         80               {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_utility-subnet                          DEV-WEEU-SAP01_db-subnet
Network security group         80               {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_utility-nsg                             DEV-WEEU-SAP01_dbSubnet-nsg
Route table                                     {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_routeTable                              DEV-WEEU-SAP01_route-table
Storage account                80               {ENVIRONMENT}{REGION_MAP}{SAP_VNET}diag(5CHAR){RND}                           devweeusap01diagxxx
User-defined route                              {remote_vnet}_Hub-udr
User-defined route (firewall)                   {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_firewall-route                          DEV-WEEU-SAP01_firewall-route
Availability set (AV set)                       {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi-avset
Network interface component    80               {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##-nic
Disk                                            {vm.name}-iscsi00 or ${azurerm_virtual_machine.iscsi.*.name}-iscsi00 (code)   DEV-WEEU-SAP01_iscsi00-iscsi00
VM                                              {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##
OS disk                                         {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##-OsDisk
Computer name                                   {ENVIRONMENT}_{REGION_MAP}{SAP_VNET}{region_map}iscsi##
Key vault                      24               {ENVIRONMENT}{REGION_MAP}{SAP_VNET}{USER}{RND}                                DEVWEEUSAP01userxxx
NetApp account                                  {ENVIRONMENT}{REGION_MAP}{SAP_VNET}_netapp_account                            DEV-WEEU-SAP01_netapp_account
NetApp capacity pool           24               {ENVIRONMENT}{REGION_MAP}{SAP_VNET}_netapp_pool                               DEV-WEEU-SAP01_netapp_pool

SAP system names


For an explanation of the Format column, see the definitions for placeholder values.

Concept                                       Character limit  Format                                                    Example

Resource prefix                               80               {ENVIRONMENT}-{REGION_MAP}-{SAP-VNET}-{SID} or            DEV-WEEU-SAP01-X01
                                                               {ENVIRONMENT}-{REGION_MAP}-{SAP-VNET}_{CODENAME}-{SID}
Resource group                                80               {PREFIX}                                                  DEV-WEEU-SAP01-X01
Azure proximity placement group (PPG)                          {PREFIX}_ppg
Availability set                                               {PREFIX}_app-avset                                        DEV-WEEU-SAP01-X01_app-avset
Subnet                                        80               {PREFIX}_utility-subnet                                   DEV-WEEU-SAP01_X01_db-subnet
Network security group                        80               {PREFIX}_utility-nsg                                      DEV-WEEU-SAP01_X01_dbSubnet-nsg
Network interface component                                    {PREFIX}_{VM_NAME}-{SUBNET}-nic                           -app-nic , -web-nic , -admin-nic , -db-nic
Computer name (database)                      14               {SID}d{DBSID}##{OS flag l/w}                              DEV-WEEU-SAP01-X01_x01dxdb00l0xxx
                                                               {primary/secondary 0/1}{RND}
Computer name (nondatabase)                   14               {SID}{ROLE}##{OS flag l/w}{RND}                           DEV-WEEU-SAP01-X01_x01app01l538 ,
                                                                                                                         DEV-WEEU-SAP01-X01_x01scs01l538
VM                                                             {PREFIX}_{COMPUTER-NAME}
Disk                                                           {PREFIX}_{VM_NAME}-{disk_type}{counter}                   {VM-NAME}-sap00 , {VM-NAME}-data00 ,
                                                                                                                         {VM-NAME}-log00 , {VM-NAME}-backup00
OS disk                                                        {PREFIX}_{VM_NAME}-osDisk                                 DEV-WEEU-SAP01-X01_x01scs00lxxx-OsDisk
Azure load balancer (utility)                 80               {PREFIX}_db-alb                                           DEV-WEEU-SAP01-X01_db-alb
Load balancer front-end IP address (utility)                   {PREFIX}_dbAlb-feip                                       DEV-WEEU-SAP01-X01_dbAlb-feip
Load balancer back-end pool (utility)                          {PREFIX}_dbAlb-bePool                                     DEV-WEEU-SAP01-X01_dbAlb-bePool
Load balancer health probe (utility)                           {PREFIX}_dbAlb-hp                                         DEV-WEEU-SAP01-X01_dbAlb-hp
Key vault (user)                              24               {SHORTPREFIX}u{RND}                                       DEVWEEUSAP01uX01xxx
NetApp volume (utility)                       24               {PREFIX}-utility                                          DEV-WEEU-SAP01-X01_sapmnt

Note

Disk numbering starts at zero. The naming convention uses a two-character format;
for example, 00 .
Azure region names
The automation framework uses short forms of Azure region names. The short Azure
region names are mapped to the normal region names.

You can set the mapping under the variable region_mapping in the name generator's
configuration file, ../../../deploy/terraform/terraform-
units/modules/sap_namegenerator/variables_local.tf .

Then, you can use the region_mapping variable elsewhere, such as an area path. The
format for an area path is {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}-{ARTIFACT} where:

{ENVIRONMENT} is the name of the environment or workload zone.
{REGION_MAP} is the short form of the Azure region name.
{SAP_VNET} is the SAP virtual network within the environment.
{ARTIFACT} is the deployment artifact within the virtual network, such as INFRASTRUCTURE .

You can use the region_mapping variable as follows:

"${upper(var.environment)}-${upper(element(split(",",
lookup(var.region_mapping, var.region,
"-,unknown")),1))}-${upper(var.SAP_VNET)}-INFRASTRUCTURE"

Next steps
Learn about configuring the custom naming module
Configure custom naming for the
automation framework
Article • 09/03/2023

SAP Deployment Automation Framework uses a standard naming convention for Azure
resource naming.

The Terraform module sap_namegenerator defines the names of all resources that the
automation framework deploys. The module is located at /deploy/terraform/terraform-
units/modules/sap_namegenerator/ in the repository. The framework also supports

providing your own names for some of the resources by using the parameter files.

The naming of the resources uses the following format:

resource prefix + resource_group_prefix + separator + resource name + resource suffix.
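
As an illustration, the following PowerShell sketch composes a resource name from those
parts. The sample values are hypothetical and only demonstrate the concatenation order;
they mirror the DEV-WEEU-SAP01-X01_db-alb example used elsewhere in this documentation.

PowerShell

# Compose: resource prefix + resource_group_prefix + separator + resource name + resource suffix
$resourcePrefix      = 'DEV-WEEU-SAP01-X01'  # SAP resource prefix
$resourceGroupPrefix = ''                    # empty in this illustration
$separator           = '_'
$resourceName        = 'db'
$resourceSuffix      = '-alb'

$resourcePrefix + $resourceGroupPrefix + $separator + $resourceName + $resourceSuffix
# Output: DEV-WEEU-SAP01-X01_db-alb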

If these capabilities aren't enough, you can also use custom naming logic by either
providing a custom JSON file that contains the resource names or by modifying the
naming module used by the automation.

Provide name overrides by using a JSON file


You can specify a custom naming JSON file in your tfvars parameter file by using the
name_override_file parameter.

The JSON file has sections for the different resource types.

The deployment types are:

DEPLOYER (control plane)
SDU (SAP system infrastructure)
WORKLOAD_ZONE (workload zone)

Availability set names


The names for the availability sets are defined in the availabilityset_names structure.
The following example lists the availability set names for a deployment.

JSON
"availabilityset_names" : {
"app": "app-avset",
"db" : "db-avset",
"scs": "scs-avset",
"web": "web-avset"
}

Key vault names


The names for the key vaults are defined in the keyvault_names structure. The following
example lists the key vault names for a deployment in the DEV environment in West
Europe.

JSON

"keyvault_names": {
"DEPLOYER": {
"private_access": "DEVWEEUprvtABC",
"user_access": "DEVWEEUuserABC"
},
"SDU": {
"private_access": "DEVWEEUSAP01X00pABC",
"user_access": "DEVWEEUSAP01X00uABC"
},
"WORKLOAD_ZONE": {
"private_access": "DEVWEEUSAP01prvtABC",
"user_access": "DEVWEEUSAP01userABC"
}
}

The key vault names need to be unique across Azure. SAP Deployment Automation
Framework appends three random characters (ABC in the example) at the end of the key
vault name to reduce the likelihood of name conflicts.

The private_access names are currently not used.

Storage account names


The names for the storage accounts are defined in the storageaccount_names structure.
The following example lists the storage account names for a deployment in the DEV
environment in West Europe.

JSON
"storageaccount_names": {
"DEPLOYER": "devweeudiagabc",
"LIBRARY": {
"library_storageaccount_name": "devweeusaplibabc",
"terraformstate_storageaccount_name": "devweeutfstateabc"
},
"SDU": "devweeusap01diagabc",
"WORKLOAD_ZONE": {
"landscape_shared_transport_storage_account_name":
"devweeusap01sharedabc",
"landscape_storageaccount_name": "devweeusap01diagabc",
"witness_storageaccount_name": "devweeusap01witnessabc"
}
}

The storage account names need to be unique across Azure. SAP Deployment Automation
Framework appends three random characters (abc in the example) at the end of the
storage account name to reduce the likelihood of name conflicts.

Virtual machine names


The names for the virtual machines are defined in the virtualmachine_names structure.
Both the computer and the virtual machine names can be provided.

The following example lists the virtual machine names for a deployment in the DEV
environment in West Europe. The deployment has a database server, two application
servers, a central services server, and a web dispatcher.

JSON

"virtualmachine_names": {
"ANCHOR_COMPUTERNAME": [],
"ANCHOR_SECONDARY_DNSNAME": [],
"ANCHOR_VMNAME": [],
"ANYDB_COMPUTERNAME": [
"x00db00l0abc"
],
"ANYDB_SECONDARY_DNSNAME": [
"x00dhdb00l0abc",
"x00dhdb00l1abc"
],
"ANYDB_VMNAME": [
"x00db00l0abc"
],
"APP_COMPUTERNAME": [
"x00app00labc",
"x00app01labc"
],
"APP_SECONDARY_DNSNAME": [
"x00app00labc",
"x00app01labc"
],
"APP_VMNAME": [
"x00app00labc",
"x00app01labc"
],
"DEPLOYER": [
"devweeudeploy00"
],
"HANA_COMPUTERNAME": [
"x00dhdb00l0af"
],
"HANA_SECONDARY_DNSNAME": [
"x00dhdb00l0abc"
],
"HANA_VMNAME": [
"x00dhdb00l0abc"
],
"ISCSI_COMPUTERNAME": [
"devsap01weeuiscsi00"
],
"OBSERVER_COMPUTERNAME": [
"x00observer00labc"
],
"OBSERVER_VMNAME": [
"x00observer00labc"
],
"SCS_COMPUTERNAME": [
"x00scs00labc"
],
"SCS_SECONDARY_DNSNAME": [
"x00scs00labc"
],
"SCS_VMNAME": [
"x00scs00labc"
],
"WEB_COMPUTERNAME": [
"x00web00labc"
],
"WEB_SECONDARY_DNSNAME": [
"x00web00labc"
],
"WEB_VMNAME": [
"x00web00labc"
]
}

Configure the custom naming module


There are multiple files within the module for naming resources:
Virtual machine and computer names are defined in vm.tf .
Resource group naming is defined in resourcegroup.tf .
Key vaults are defined in keyvault.tf .
Resource suffixes are defined in variables_local.tf .

The different resource names are identified by prefixes in the Terraform code:

SAP deployer deployments use resource names with the prefix deployer_ .
SAP library deployments use resource names with the prefix library .
SAP landscape deployments use resource names with the prefix vnet_ .
SAP system deployments use resource names with the prefix sdu_ .

The calculated names are returned in a data dictionary, which is used by all the
Terraform modules.

Use custom names


Some of the resource names can be changed by providing parameters in the tfvars
parameter file.

Resource           Parameter              Notes

Prefix             custom_prefix          Used as prefix for all the resources in the resource group
Resource group     resourcegroup_name
admin subnet name  admin_subnet_name
admin nsg name     admin_subnet_nsg_name
db subnet name     db_subnet_name
db nsg name        db_subnet_nsg_name
app subnet name    app_subnet_name
app nsg name       app_subnet_nsg_name
web subnet name    web_subnet_name
web nsg name       web_subnet_nsg_name
Change the naming module
To prepare your Terraform environment for custom naming, you first need to create a
custom naming module. The easiest way is to copy the existing module and make the
required changes in the copied module.

1. Create a root-level folder in your Terraform environment. An example is
   Azure_SAP_Automated_Deployment .
2. Go to your new root-level folder.
3. Clone the automation framework repository . This step creates a new folder
   sap-automation .
4. Create a folder within the root-level folder called Contoso_naming .
5. Go to the sap-automation folder.
6. Check out the appropriate branch in Git.
7. Go to \deploy\terraform\terraform-units\modules within the sap-automation folder.
8. Copy the folder sap_namegenerator to the Contoso_naming folder.

The naming module is called from the root terraform folders:

Terraform

module "sap_namegenerator" {
source = "../../terraform-units/modules/sap_namegenerator"
environment = local.infrastructure.environment
location = local.infrastructure.region
codename = lower(try(local.infrastructure.codename, ""))
random_id = module.common_infrastructure.random_id
sap_vnet_name = local.vnet_logical_name
sap_sid = local.sap_sid
db_sid = local.db_sid
app_ostype = try(local.application.os.os_type, "LINUX")
anchor_ostype = upper(try(local.anchor_vms.os.os_type, "LINUX"))
db_ostype = try(local.databases[0].os.os_type, "LINUX")
db_server_count = var.database_server_count
app_server_count = try(local.application.application_server_count, 0)
web_server_count = try(local.application.webdispatcher_count, 0)
scs_server_count = local.application.scs_high_availability ? 2 *
local.application.scs_server_count : local.application.scs_server_count
app_zones = local.app_zones
scs_zones = local.scs_zones
web_zones = local.web_zones
db_zones = local.db_zones
resource_offset = try(var.options.resource_offset, 0)
custom_prefix = var.custom_prefix
}
Next, you need to point your other Terraform module files to your custom naming
module. These module files include:

deploy\terraform\run\sap_system\module.tf
deploy\terraform\bootstrap\sap_deployer\module.tf

deploy\terraform\bootstrap\sap_library\module.tf

deploy\terraform\run\sap_library\module.tf
deploy\terraform\run\sap_deployer\module.tf

For each file, change the source for the module sap_namegenerator to point to your new
naming module's location. For example:

module "sap_namegenerator" { source = "../../terraform-

units/modules/sap_namegenerator" becomes module "sap_namegenerator" { source =


"../../../../Contoso_naming" .

Change resource group naming logic


To change your resource group's naming logic, go to your custom naming module
folder (for example, Workspaces\Contoso_naming ). Then, edit the file resourcegroup.tf .
Modify the following code with your own naming logic.

Terraform

locals {

  // Resource group naming
  sdu_name = length(var.codename) > 0 ? (
    upper(format("%s-%s-%s_%s-%s", local.env_verified, local.location_short, local.sap_vnet_verified, var.codename, var.sap_sid))) : (
    upper(format("%s-%s-%s-%s", local.env_verified, local.location_short, local.sap_vnet_verified, var.sap_sid))
  )

  deployer_name  = upper(format("%s-%s-%s", local.deployer_env_verified, local.deployer_location_short, local.dep_vnet_verified))
  landscape_name = upper(format("%s-%s-%s", local.landscape_env_verified, local.location_short, local.sap_vnet_verified))
  library_name   = upper(format("%s-%s", local.library_env_verified, local.location_short))

  // Storage account names must be between 3 and 24 characters in length and
  // use numbers and lower-case letters only. The name must be unique.
  deployer_storageaccount_name       = substr(replace(lower(format("%s%s%sdiag%s", local.deployer_env_verified, local.deployer_location_short, local.dep_vnet_verified, local.random_id_verified)), "/[^a-z0-9]/", ""), 0, var.azlimits.stgaccnt)
  landscape_storageaccount_name      = substr(replace(lower(format("%s%s%sdiag%s", local.landscape_env_verified, local.location_short, local.sap_vnet_verified, local.random_id_verified)), "/[^a-z0-9]/", ""), 0, var.azlimits.stgaccnt)
  library_storageaccount_name        = substr(replace(lower(format("%s%ssaplib%s", local.library_env_verified, local.location_short, local.random_id_verified)), "/[^a-z0-9]/", ""), 0, var.azlimits.stgaccnt)
  sdu_storageaccount_name            = substr(replace(lower(format("%s%s%sdiag%s", local.env_verified, local.location_short, local.sap_vnet_verified, local.random_id_verified)), "/[^a-z0-9]/", ""), 0, var.azlimits.stgaccnt)
  terraformstate_storageaccount_name = substr(replace(lower(format("%s%stfstate%s", local.library_env_verified, local.location_short, local.random_id_verified)), "/[^a-z0-9]/", ""), 0, var.azlimits.stgaccnt)
}
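
As an illustration, a hypothetical variation that drops the virtual network name from the SAP system resource group name could look like the following sketch. This is one possible convention, not a recommendation:

Terraform

// Hypothetical naming variation: environment, region, and SID only
sdu_name = upper(format("%s-%s-%s", local.env_verified, local.location_short, var.sap_sid))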

Change resource suffixes


To change your resource suffixes, go to your custom naming module folder (for
example, Workspaces\Contoso_naming ). Then, edit the file variables_local.tf . Modify the
following map with your own resource suffixes.

Note

Only change the map values. Don't change the map key, which the Terraform code
uses. For example, if you want to rename the administrator network interface
component, change "admin-nic" = "-admin-nic" to "admin-nic" = "yourNICname" .

Terraform

variable resource_suffixes {
type = map(string)
description = "Extension of resource name"

default = {
"admin_nic" = "-admin-nic"
"admin_subnet" = "admin-subnet"
"admin_subnet_nsg" = "adminSubnet-nsg"
"app_alb" = "app-alb"
"app_avset" = "app-avset"
"app_subnet" = "app-subnet"
"app_subnet_nsg" = "appSubnet-nsg"
"db_alb" = "db-alb"
"db_alb_bepool" = "dbAlb-bePool"
"db_alb_feip" = "dbAlb-feip"
"db_alb_hp" = "dbAlb-hp"
"db_alb_rule" = "dbAlb-rule_"
"db_avset" = "db-avset"
"db_nic" = "-db-nic"
"db_subnet" = "db-subnet"
"db_subnet_nsg" = "dbSubnet-nsg"
"deployer_rg" = "-INFRASTRUCTURE"
"deployer_state" = "_DEPLOYER.terraform.tfstate"
"deployer_subnet" = "_deployment-subnet"
"deployer_subnet_nsg" = "_deployment-nsg"
"iscsi_subnet" = "iscsi-subnet"
"iscsi_subnet_nsg" = "iscsiSubnet-nsg"
"library_rg" = "-SAP_LIBRARY"
"library_state" = "_SAP-LIBRARY.terraform.tfstate"
"kv" = ""
"msi" = "-msi"
"nic" = "-nic"
"osdisk" = "-OsDisk"
"pip" = "-pip"
"ppg" = "-ppg"
"sapbits" = "sapbits"
"storage_nic" = "-storage-nic"
"storage_subnet" = "_storage-subnet"
"storage_subnet_nsg" = "_storageSubnet-nsg"
"scs_alb" = "scs-alb"
"scs_alb_bepool" = "scsAlb-bePool"
"scs_alb_feip" = "scsAlb-feip"
"scs_alb_hp" = "scsAlb-hp"
"scs_alb_rule" = "scsAlb-rule_"
"scs_avset" = "scs-avset"
"scs_ers_feip" = "scsErs-feip"
"scs_ers_hp" = "scsErs-hp"
"scs_ers_rule" = "scsErs-rule_"
"scs_scs_rule" = "scsScs-rule_"
"sdu_rg" = ""
"tfstate" = "tfstate"
"vm" = ""
"vnet" = "-vnet"
"vnet_rg" = "-INFRASTRUCTURE"
"web_alb" = "web-alb"
"web_alb_bepool" = "webAlb-bePool"
"web_alb_feip" = "webAlb-feip"
"web_alb_hp" = "webAlb-hp"
"web_alb_inrule" = "webAlb-inRule"
"web_avset" = "web-avset"
"web_subnet" = "web-subnet"
"web_subnet_nsg" = "webSubnet-nsg"

}
}

Next step
Learn about naming conventions
Use SAP Deployment Automation
Framework from Azure DevOps Services
Article • 11/29/2023

Azure DevOps streamlines the deployment process by providing pipelines that you can
run to perform the infrastructure deployment and the configuration and SAP installation
activities.

You can use Azure Repos to store your configuration files and use Azure Pipelines to
deploy and configure the infrastructure and the SAP application.

Sign up for Azure DevOps Services


To use Azure DevOps Services, you need an Azure DevOps organization. An organization
is used to connect groups of related projects. Use your work or school account to
automatically connect your organization to your Microsoft Entra ID. To create an
account, open Azure DevOps and either sign in or create a new account.

Configure Azure DevOps Services for SAP Deployment Automation Framework
You can use the following script to do a basic installation of Azure DevOps Services for
SAP Deployment Automation Framework.

Open PowerShell ISE and copy the following script and update the parameters to match
your environment.

PowerShell

$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_CONTROL_PLANE_CODE = "MGMT"
$Env:SDAF_WORKLOAD_ZONE_CODE = "DEV"
$Env:SDAF_ControlPlaneSubscriptionID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"

$UniqueIdentifier = Read-Host "Please provide an identifier that makes the service principal names unique, for instance a project code"

$confirmation = Read-Host "Do you want to create a new Application registration (needed for the Web Application) y/n?"
if ($confirmation -eq 'y') {
    $Env:SDAF_APP_NAME = $UniqueIdentifier + " SDAF Control Plane"
}
else {
    $Env:SDAF_APP_NAME = Read-Host "Please provide the Application registration name"
}

$confirmation = Read-Host "Do you want to create a new Service Principal for the Control plane y/n?"
if ($confirmation -eq 'y') {
    $Env:SDAF_MGMT_SPN_NAME = $UniqueIdentifier + " SDAF " + $Env:SDAF_CONTROL_PLANE_CODE + " SPN"
}
else {
    $Env:SDAF_MGMT_SPN_NAME = Read-Host "Please provide the Control Plane Service Principal Name"
}

$confirmation = Read-Host "Do you want to create a new Service Principal for the Workload zone y/n?"
if ($confirmation -eq 'y') {
    $Env:SDAF_WorkloadZone_SPN_NAME = $UniqueIdentifier + " SDAF " + $Env:SDAF_WORKLOAD_ZONE_CODE + " SPN"
}
else {
    $Env:SDAF_WorkloadZone_SPN_NAME = Read-Host "Please provide the Workload Zone Service Principal Name"
}

# Create a local 'SDAF' working directory if it doesn't already exist
if ($PSVersionTable.Platform -eq "Unix") {
    if (-not (Test-Path "SDAF")) {
        $sdaf_path = New-Item -Path "SDAF" -Type Directory
    }
}
else {
    $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
    if (-not (Test-Path $sdaf_path)) {
        New-Item -Path $sdaf_path -Type Directory
    }
}

Set-Location -Path $sdaf_path

# Download the latest version of the project creation script, then run it
if (Test-Path "New-SDAFDevopsProject.ps1") {
    Remove-Item .\New-SDAFDevopsProject.ps1
}

Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsProject.ps1 -OutFile .\New-SDAFDevopsProject.ps1 ; .\New-SDAFDevopsProject.ps1

Run the script and follow the instructions. The script opens browser windows for
authentication and for performing tasks in the Azure DevOps project.

You can choose to either run the code directly from GitHub or you can import a copy of
the code into your Azure DevOps project.

To confirm that the project was created, go to the Azure DevOps portal and select the
project. Ensure that the repo was populated and that the pipelines were created.

Important

Run the following steps on your local workstation. Also ensure that you have the
latest Azure CLI installed by running the az upgrade command.

Configure Azure DevOps Services artifacts for a new workload zone
Use the following script to deploy the artifacts that are needed to support a new
workload zone. This process creates the variable group and the service connection in
Azure DevOps and, optionally, the deployment service principal.

Open PowerShell ISE and copy the following script and update the parameters to match
your environment.

PowerShell

$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"

# Create a local 'SDAF' working directory if it doesn't already exist
if ($PSVersionTable.Platform -eq "Unix") {
    if (-not (Test-Path "SDAF")) {
        $sdaf_path = New-Item -Path "SDAF" -Type Directory
    }
}
else {
    $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
    if (-not (Test-Path $sdaf_path)) {
        New-Item -Path $sdaf_path -Type Directory
    }
}

Set-Location -Path $sdaf_path

# Download the latest version of the workload zone script, then run it
if (Test-Path "New-SDAFDevopsWorkloadZone.ps1") {
    Remove-Item .\New-SDAFDevopsWorkloadZone.ps1
}

Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsWorkloadZone.ps1 -OutFile .\New-SDAFDevopsWorkloadZone.ps1 ; .\New-SDAFDevopsWorkloadZone.ps1

Create a sample control plane configuration


You can run the Create Sample Deployer Configuration pipeline to create a sample
configuration for the control plane. When it's running, choose the appropriate Azure
region. You can also control if you want to deploy Azure Firewall and Azure Bastion.

Manual configuration of Azure DevOps Services for SAP Deployment Automation Framework
You can manually configure Azure DevOps Services for SAP Deployment Automation
Framework.

Create a new project


You can use Azure Repos to store the code from the sap-automation GitHub repository
and the environment configuration files.

Open Azure DevOps and create a new project by selecting New Project and entering
the project details. The project contains the Azure Repos source control repository and
Azure Pipelines for performing deployment activities.

If you don't see New Project, ensure that you have permissions to create new projects in
the organization.

Record the URL of the project.


Import the repository
Start by importing the SAP Deployment Automation Framework Bootstrap GitHub
repository into Azure Repos.

Go to the Repositories section and select Import a repository. Import the
https://github.com/Azure/sap-automation-bootstrap.git repository into Azure DevOps.

For more information, see Import a repository.

If you're unable to import a repository, you can create the repository manually. Then
you can import the content from the SAP Deployment Automation Framework GitHub
Bootstrap repository to it.

Create the repository for manual import


Only do this step if you're unable to import the repository directly.

To create the workspaces repository, in the Repos section, under Project settings, select
Create.

Choose Git as the repository type and provide a name for the repository. For example,
use SAP Configuration Repository.

Clone the repository


To provide a more comprehensive editing capability of the content, you can clone the
repository to a local folder and edit the contents locally.

To clone the repository to a local folder, on the Repos section of the portal, under Files,
select Clone. For more information, see Clone a repository.
Manually import the repository content by using a local
clone
You can also manually download the content from the SAP Deployment Automation
Framework repository and add it to your local clone of the Azure DevOps repository.

Go to the https://github.com/Azure/SAP-automation-samples repository and download
the repository content as a .zip file. Select Code and choose Download ZIP.

Copy the content from the .zip file to the root folder of your local clone.

Open the local folder in Visual Studio Code. The source control icon displays an
indicator showing that changes need to be synchronized.

Select the source control icon and provide a message about the change. For example,
enter Import from GitHub and select Ctrl+Enter to commit the changes. Next, select
Sync Changes to synchronize the changes back to the repository.

Choose the source for the Terraform and Ansible code


You can either run the SAP Deployment Automation Framework code directly from
GitHub or you can import it locally.

Run the code from a local repository

If you want to run the SAP Deployment Automation Framework code from the local
Azure DevOps project, you need to create a separate code repository and a
configuration repository in the Azure DevOps project:
Name of configuration repository: Same as the DevOps project name. Source is
https://github.com/Azure/sap-automation-bootstrap.git .

Name of code repository: sap-automation . Source is
https://github.com/Azure/sap-automation.git .

Name of sample and template repository: sap-samples . Source is
https://github.com/Azure/sap-automation-samples.git .

Run the code directly from GitHub


If you want to run the code directly from GitHub, you need to provide credentials for
Azure DevOps to be able to pull the content from GitHub.

Create the GitHub service connection

To pull the code from GitHub, you need a GitHub service connection. For more
information, see Manage service connections.

To create the service connection, go to Project Settings and under the Pipelines section,
go to Service connections.
Select GitHub as the service connection type. Select Azure Pipelines in the OAuth
Configuration dropdown.

Select Authorize to sign in to GitHub.

Enter a service connection name, for instance, SDAF Connection to GitHub. Ensure that
the Grant access permission to all pipelines checkbox is selected. Select Save to save
the service connection.

Set up the web app


The automation framework optionally provisions a web app as a part of the control
plane to assist with the SAP workload zone and system configuration files. If you want to
use the web app, you must first create an app registration for authentication purposes.
Open Azure Cloud Shell and run the following commands.

Windows

Replace MGMT with your environment, as necessary.

PowerShell

Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'

$TF_VAR_app_registration_app_id = (az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access .\manifest.json --query "appId").Replace('"', "")

echo $TF_VAR_app_registration_app_id

az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"

del manifest.json

Save the app registration ID and password values for later use.

Create Azure Pipelines


Azure Pipelines are implemented as YAML files. They're stored in the deploy/pipelines
folder in the repository.
Control plane deployment pipeline
Create the control plane deployment pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/01-deploy-control-plane.yml |
| Name | Control plane deployment |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Control plane deployment.

SAP workload zone deployment pipeline


Create the SAP workload zone pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/02-sap-workload-zone.yml |
| Name | SAP workload zone deployment |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP workload zone deployment.
SAP system deployment pipeline
Create the SAP system deployment pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/03-sap-system-deployment.yml |
| Name | SAP system deployment (infrastructure) |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP system deployment (infrastructure).

SAP software acquisition pipeline


Create the SAP software acquisition pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | deploy/pipelines/04-sap-software-download.yml |
| Name | SAP software acquisition |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP software acquisition.
SAP configuration and software installation pipeline
Create the SAP configuration and software installation pipeline. Under the Pipelines
section, select New Pipeline. Select Azure Repos Git as the source for your code.
Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline
with the following settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/05-DB-and-SAP-installation.yml |
| Name | Configuration and SAP installation |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP configuration and software installation.

Deployment removal pipeline


Create the deployment removal pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/10-remover-terraform.yml |
| Name | Deployment removal |


Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Deployment removal.

Control plane removal pipeline


Create the control plane deployment removal pipeline. Under the Pipelines section,
select New Pipeline. Select Azure Repos Git as the source for your code. Configure your
pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the
following settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/12-remove-control-plane.yml |
| Name | Control plane removal |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Control plane removal.

Deployment removal pipeline by using Azure Resource Manager
Create the deployment removal Azure Resource Manager pipeline. Under the Pipelines
section, select New Pipeline. Select Azure Repos Git as the source for your code.
Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline
with the following settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/11-remover-arm-fallback.yml |
| Name | Deployment removal using Azure Resource Manager |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Deployment removal using Azure Resource Manager.

Note

Only use this pipeline as a last resort. Removing just the resource groups leaves
remnants that might complicate redeployments.

Repository updater pipeline


Create the repository updater pipeline. Under the Pipelines section, select New Pipeline.
Select Azure Repos Git as the source for your code. Configure your pipeline to use an
existing Azure Pipelines YAML file. Specify the pipeline with the following settings:

| Setting | Value |
| --- | --- |
| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | pipelines/20-update-ado-repository.yml |
| Name | Repository updater |

Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Repository updater.

This pipeline should be used when there's an update in the sap-automation repository
that you want to use.

Import the cleanup task from Visual Studio Marketplace
The pipelines use a custom task to perform cleanup activities post deployment. You can
install the custom task from Post Build Cleanup . Install it to your Azure DevOps
organization before you run the pipelines.

Preparations for a self-hosted agent


1. Create an agent pool by going to Organizational Settings. Under the Pipelines
section, select Agent Pools > Add Pool. Select Self-hosted as the pool type. Name
the pool to align with the control plane environment. For example, use MGMT-WEEU-
POOL . Ensure that Grant access permission to all pipelines is selected and select
Create to create the pool.

2. Sign in with the user account you plan to use in your Azure DevOps
organization.

3. From your home page, open your user settings and select Personal access tokens.

4. Create a personal access token with these settings:

   Agent Pools: Select Read & manage.
   Build: Select Read & execute.
   Code: Select Read & write.
   Variable Groups: Select Read, create, & manage.

   Write down the created token value.


Variable definitions
The deployment pipelines are configured to use a set of predefined parameter values
defined by using variable groups.

Common variables
Common variables are used by all the deployment pipelines. They're stored in a variable
group called SDAF-General .

Create a new variable group named SDAF-General by using the Library page in the
Pipelines section. Add the following variables:

| Variable | Value | Notes |
| --- | --- | --- |
| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration, use samples/WORKSPACES instead of WORKSPACES. |
| Branch | main | |
| S-Username | <SAP Support user account name> | |
| S-Password | <SAP Support user password> | Change the variable type to secret by selecting the lock icon. |
| tf_version | 1.6.0 | The Terraform version to use. See Terraform download . |

Save the variables.

Alternatively, you can use the Azure DevOps CLI to set up the groups.

Bash

s_user="<SAP Support user account name>"
s_password="<SAP Support user password>"

az devops login

az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_KEY_CHECKING=false Deployment_Configuration_Path=WORKSPACES Branch=main S-Username=$s_user S-Password=$s_password tf_version=1.6.0 --output yaml
Remember to assign permissions for all pipelines by using Pipeline permissions.

Environment-specific variables
Because each environment might have different deployment credentials, you need to
create a variable group per environment. For example, use SDAF-MGMT , SDAF-DEV , and
SDAF-QA .

Create a new variable group named SDAF-MGMT for the control plane environment by
using the Library page in the Pipelines section. Add the following variables:

| Variable | Value | Notes |
| --- | --- | --- |
| Agent | Azure Pipelines or the name of the agent pool | This pool is created in a later step. |
| CP_ARM_CLIENT_ID | Service principal application ID | |
| CP_ARM_OBJECT_ID | Service principal object ID | |
| CP_ARM_CLIENT_SECRET | Service principal password | Change the variable type to secret by selecting the lock icon. |
| CP_ARM_SUBSCRIPTION_ID | Target subscription ID | |
| CP_ARM_TENANT_ID | Tenant ID for the service principal | |
| AZURE_CONNECTION_NAME | Previously created connection name | |
| sap_fqdn | SAP fully qualified domain name, for example, sap.contoso.net | Only needed if Private DNS isn't used. |
| FENCING_SPN_ID | Service principal application ID for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
| FENCING_SPN_PWD | Service principal password for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
| FENCING_SPN_TENANT | Service principal tenant ID for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
| PAT | <Personal Access Token> | Use the personal token defined in the previous step. |
| POOL | <Agent Pool name> | The agent pool to use for this environment. |
| APP_REGISTRATION_APP_ID | App registration application ID | Required if deploying the web app. |
| WEB_APP_CLIENT_SECRET | App registration password | Required if deploying the web app. |
| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter variableGroupId when accessing the variable group by using a browser. For example: variableGroupId=8 . |
| WORKLOADZONE_PIPELINE_ID | The ID for the SAP workload zone deployment pipeline | The ID can be retrieved from the URL parameter definitionId from the pipeline page in Azure DevOps. For example: definitionId=31 . |
| SYSTEM_PIPELINE_ID | The ID for the SAP system deployment (infrastructure) pipeline | The ID can be retrieved from the URL parameter definitionId from the pipeline page in Azure DevOps. For example: definitionId=32 . |
Save the variables.

Remember to assign permissions for all pipelines by using Pipeline permissions.

When you use the web app, ensure that the Build Service has at least Contribute
permissions.

You can use the clone functionality to create the next environment variable group.
APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID,
WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-
MGMT group.
Create a service connection
To remove the Azure resources, you need an Azure Resource Manager service
connection. For more information, see Manage service connections.

To create the service connection, go to Project Settings. Under the Pipelines section,
select Service connections.

Select Azure Resource Manager as the service connection type and Service principal
(manual) as the authentication method. Enter the target subscription, which is typically
the control plane subscription. Enter the service principal details. Select Verify to
validate the credentials. For more information on how to create a service principal, see
Create a service principal.

Enter a Service connection name, for instance, use Connection to MGMT subscription .
Ensure that the Grant access permission to all pipelines checkbox is selected. Select
Verify and save to save the service connection.

Permissions
Most of the pipelines add files to the Azure Repos and therefore require push
permissions. On Project Settings, under the Repositories section, select the Security tab
of the source code repository and assign Contribute permissions to the Build Service .

Deploy the control plane


Newly created pipelines might not be visible in the default view. Select the Recent tab
and go back to All tabs to view the new pipelines.

Select the Control plane deployment pipeline and enter the configuration names for
the deployer and the SAP library. Select Run to deploy the control plane. Make sure to
select the Deploy the configuration web application checkbox if you want to set up the
configuration web app.

Configure the Azure DevOps Services self-hosted agent manually
Manual configuration is only needed if the Azure DevOps Services agent isn't
automatically configured. Check that the agent pool is empty before you proceed.

To connect to the deployer:

1. Sign in to the Azure portal .


2. Go to the resource group that contains the deployer virtual machine.

3. Connect to the virtual machine by using Azure Bastion.

4. The default username is azureadm.

5. Select SSH Private Key from Azure Key Vault.

6. Select the subscription that contains the control plane.

7. Select the deployer key vault.

8. From the list of secrets, select the secret that ends with -sshkey.

9. Connect to the virtual machine.

Run the following script to configure the deployer:

Bash

mkdir -p ~/Azure_SAP_Automated_Deployment

cd ~/Azure_SAP_Automated_Deployment

git clone https://github.com/Azure/sap-automation.git

cd sap-automation/deploy/scripts

./configure_deployer.sh

Reboot the deployer, reconnect, and run the following script to set up the Azure DevOps
agent:

Bash

cd ~/Azure_SAP_Automated_Deployment/

$DEPLOYMENT_REPO_PATH/deploy/scripts/setup_ado.sh

Accept the license and, when you're prompted for the server URL, enter the URL you
captured when you created the Azure DevOps project. For authentication, select PAT
and enter the token value from the previous step.

When prompted, enter the agent pool name that you created in the previous step.
Accept the default agent name and the default work folder name. The agent is now
configured and starts.
Deploy the control plane web application
Selecting the deploy the web app infrastructure parameter when you run the control
plane deployment pipeline provisions the infrastructure necessary for hosting the web
app. The Deploy web app pipeline publishes the application's software to that
infrastructure.

Wait for the deployment to finish. Select the Extensions tab and follow the instructions
to finalize the configuration. Update the reply-url values for the app registration.

As a result of running the control plane pipeline, part of the web app URL that is needed
is stored in a variable named WEBAPP_URL_BASE in your environment-specific variable
group. At any time, you can update the URLs of the registered application web app by
using the following command.

Windows

PowerShell

$webapp_url_base = "<WEBAPP_URL_BASE>"
az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback

You also need to grant reader permissions to the app service system-assigned managed
identity. Go to the app service resource. On the left side, select Identity. On the System
assigned tab, select Azure role assignments > Add role assignment. Select
Subscription as the scope and Reader as the role. Then select Save. Without this step,
the web app dropdown functionality will not work.

You should now be able to visit the web app and use it to deploy SAP workload zones
and SAP system infrastructure.

Next step
Azure DevOps hands-on lab
Configure the control plane
Article • 12/12/2023

The control plane for SAP Deployment Automation Framework consists of the following
components:

Deployer
SAP Library

Deployer
The deployer is the execution engine of SAP Deployment Automation Framework. It's a
preconfigured virtual machine (VM) that's used for running Terraform and Ansible
commands. When you use Azure DevOps, the deployer is a self-hosted agent.

The configuration of the deployer is performed in a Terraform tfvars variable file.

If you want to use an existing resource group for the deployer, provide the Azure
resource ID for the resource group using the resource_group_arm_id parameter in the
deployer's tfvars file. If the parameter isn't defined, the resource group is created using
the default naming. You can change the default name using the resource_group_name
parameter.

Terraform parameters
This table shows the Terraform parameters. These parameters need to be entered
manually if you aren't using the deployment scripts.

| Variable | Description | Type |
| --- | --- | --- |
| tfstate_resource_id | Azure resource identifier for the storage account in the SAP library that contains the Terraform state files | Required |

Environment parameters
This table shows the parameters that define the resource naming.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| environment | Identifier for the control plane (max 5 characters) | Mandatory | For example, PROD for a production environment and NP for a nonproduction environment. |
| location | Azure region in which to deploy | Required | Use lowercase. |
| name_override_file | Name override file | Optional | See Custom naming. |
| place_delete_lock_on_resources | Place a delete lock on the key resources | Optional | |

Resource group
This table shows the parameters that define the resource group.

| Variable | Description | Type |
| --- | --- | --- |
| resource_group_name | Name of the resource group to be created | Optional |
| resource_group_arm_id | Azure resource identifier for an existing resource group | Optional |
| resourcegroup_tags | Tags to be associated with the resource group | Optional |

Network parameters
The automation framework supports both creating the virtual network and the subnets
(green field) or using an existing virtual network and existing subnets (brown field) or a
combination of green field and brown field:

Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.

The recommended CIDR of the virtual network address space is /27, which allows space
for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to
include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range
of /26.

The recommended CIDR value for the management subnet is /28, which allows 16 IP
addresses. The recommended CIDR value for the firewall subnet is /26, which allows 64
IP addresses.

This table shows the networking parameters.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| management_network_name | The name of the virtual network into which the deployer will be deployed | Optional | For green-field deployments |
| management_network_logical_name | The logical name of the network (DEV-WEEU-MGMT01-INFRASTRUCTURE) | Required | |
| management_network_arm_id | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
| management_network_address_space | The address range for the virtual network | Mandatory | For green-field deployments |
| management_subnet_name | The name of the subnet | Optional | |
| management_subnet_address_prefix | The address range for the subnet | Mandatory | For green-field deployments |
| management_subnet_arm_id | The Azure resource identifier for the subnet | Mandatory | For brown-field deployments |
| management_subnet_nsg_name | The name of the network security group | Optional | |
| management_subnet_nsg_arm_id | The Azure resource identifier for the network security group | Mandatory | For brown-field deployments |
| management_subnet_nsg_allowed_ips | Range of allowed IP addresses to add to Azure Firewall | Optional | |
| management_firewall_subnet_arm_id | The Azure resource identifier for the Azure Firewall subnet | Mandatory | For brown-field deployments |
| management_firewall_subnet_address_prefix | The address range for the subnet | Mandatory | For green-field deployments |
| management_bastion_subnet_arm_id | The Azure resource identifier for the Azure Bastion subnet | Mandatory | For brown-field deployments |
| management_bastion_subnet_address_prefix | The address range for the subnet | Mandatory | For green-field deployments |
| webapp_subnet_arm_id | The Azure resource identifier for the web app subnet | Mandatory | For brown-field deployments |
| webapp_subnet_address_prefix | The address range for the subnet | Mandatory | For green-field deployments |
| use_private_endpoint | Use private endpoints. | Optional | |
| use_service_endpoint | Use service endpoints for subnets. | Optional | |

Note

When you use an existing subnet for the web app, the subnet must be empty, in
the same region as the resource group being deployed, and delegated to
Microsoft.Web/serverFarms.

Deployer virtual machine parameters


This table shows the parameters related to the deployer VM.

| Variable | Description | Type |
| --- | --- | --- |
| deployer_size | Defines the VM SKU to use, default: Standard_D4ds_v4 | Optional |
| deployer_count | Defines the number of deployers | Optional |
| deployer_image | Defines the VM image to use, default: Ubuntu 22.04 | Optional |
| plan | Defines the plan associated to the VM image | Optional |
| deployer_disk_type | Defines the disk type, default: Premium_LRS | Optional |
| deployer_use_DHCP | Controls if the Azure subnet-provided IP addresses should be used (dynamic), default: true | Optional |
| deployer_private_ip_address | Defines the private IP address to use | Optional |
| deployer_enable_public_ip | Defines if the deployer has a public IP | Optional |
| auto_configure_deployer | Defines if the deployer is configured with the required software (Terraform and Ansible) | Optional |
| add_system_assigned_identity | Defines if the deployer is assigned a system identity | Optional |
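
For example, a minimal sketch that overrides the deployer VM sizing; the values shown are illustrative:

Terraform

deployer_count = 2
deployer_size  = "Standard_D8ds_v4"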

The VM image is defined by using the following structure:

Terraform

xxx_vm_image = {
os_type = ""
source_image_id = ""
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
type = "marketplace"
}

Note

The type can be marketplace/marketplace_with_plan/custom . Using an image of


type marketplace_with_plan requires that the image in question was used at least
once in the subscription. The first usage prompts the user to accept the license
terms and the automation has no means to approve it.
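
As an illustration, a deployer_image override that uses this structure might look like the following sketch. The image values shown mirror the defaults listed earlier and are assumptions, not requirements:

Terraform

deployer_image = {
  os_type         = "LINUX"
  source_image_id = ""
  publisher       = "Canonical"
  offer           = "0001-com-ubuntu-server-jammy"
  sku             = "22_04-lts"
  version         = "latest"
  type            = "marketplace"
}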

Authentication parameters
This section defines the parameters used for defining the VM authentication.

| Variable | Description | Type |
| --- | --- | --- |
| deployer_vm_authentication_type | Defines the default authentication for the deployer | Optional |
| deployer_authentication_username | Administrator account name | Optional |
| deployer_authentication_password | Administrator password | Optional |
| deployer_authentication_path_to_public_key | Path to the public key used for authentication | Optional |
| deployer_authentication_path_to_private_key | Path to the private key used for authentication | Optional |
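
A minimal sketch, assuming SSH key-based authentication with locally stored keys; the key paths are illustrative:

Terraform

deployer_vm_authentication_type             = "key"
deployer_authentication_username            = "azureadm"
deployer_authentication_path_to_public_key  = "~/.ssh/id_rsa.pub"
deployer_authentication_path_to_private_key = "~/.ssh/id_rsa"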

Key vault parameters


This section defines the parameters used for defining the Azure Key Vault information.

| Variable | Description | Type |
| --- | --- | --- |
| user_keyvault_id | Azure resource identifier for the user key vault. | Optional |
| spn_keyvault_id | Azure resource identifier for the key vault that contains the deployment credentials. | Optional |
| deployer_private_key_secret_name | The key vault secret name for the deployer private key. | Optional |
| deployer_public_key_secret_name | The key vault secret name for the deployer public key. | Optional |
| deployer_username_secret_name | The key vault secret name for the deployer username. | Optional |
| deployer_password_secret_name | The key vault secret name for the deployer password. | Optional |
| additional_users_to_add_to_keyvault_policies | A list of user object IDs to add to the deployment key vault access policies. | Optional |
| set_secret_expiry | Set expiry of 12 months for key vault secrets. | Optional |
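
For example, a sketch that reuses an existing deployment-credentials key vault and adds one user to the access policies; both identifiers are placeholders:

Terraform

spn_keyvault_id                              = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MGMT-WEEU-DEP00-INFRASTRUCTURE/providers/Microsoft.KeyVault/vaults/<vault name>"
additional_users_to_add_to_keyvault_policies = ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"]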

DNS support

| Variable | Description | Type |
| --- | --- | --- |
| dns_label | DNS name of the Private DNS zone. | Optional |
| use_custom_dns_a_registration | Uses an external system for DNS, set to false for Azure native. | Optional |
| management_dns_subscription_id | Subscription ID for the subscription that contains the Private DNS zone. | Optional |
| management_dns_resourcegroup_name | Resource group that contains the Private DNS zone. | Optional |
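
A minimal sketch, assuming an Azure-native Private DNS zone; the zone name is illustrative:

Terraform

dns_label                     = "azure.contoso.net"
use_custom_dns_a_registration = false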

Other parameters

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| firewall_deployment | Boolean flag that controls if an Azure firewall is to be deployed. | Optional | |
| bastion_deployment | Boolean flag that controls if Azure Bastion host is to be deployed. | Optional | |
| bastion_sku | SKU for Azure Bastion host to be deployed (Basic/Standard). | Optional | |
| enable_purge_control_for_keyvaults | Boolean flag that controls if purge control is enabled on the key vault. | Optional | Use only for test deployments. |
| enable_firewall_for_keyvaults_and_storage | Restrict access to selected subnets. | Optional | |

Web App parameters

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| use_webapp | Boolean value indicating if a webapp should be deployed. | Optional | |
| app_service_SKU_name | The SKU of the App Service Plan. | Optional | |
| app_registration_app_id | The app registration ID to be used for the webapp. | Optional | |
| webapp_client_secret | The client secret for the webapp's app registration. | Optional | Will be persisted in Key Vault. |
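
For example, a sketch that enables the web app and reuses the app registration created in the earlier setup step; the ID is a placeholder:

Terraform

use_webapp              = true
app_registration_app_id = "<TF_VAR_app_registration_app_id>"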

Example parameters file for deployer (required parameters only)
Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment = "MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "westeurope"

# management_network_address_space is the address space for the management virtual network
management_network_address_space = "10.10.20.0/25"

# management_subnet_address_prefix is the address prefix for the management subnet
management_subnet_address_prefix = "10.10.20.64/28"

# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet
management_firewall_subnet_address_prefix = "10.10.20.0/26"

# management_bastion_subnet_address_prefix is a mandatory parameter if bastion is deployed and if the subnets are not defined in the workload or if existing subnets are not used
management_bastion_subnet_address_prefix = "10.10.20.128/26"

deployer_enable_public_ip = false

firewall_deployment = true

bastion_deployment = true
SAP library
The SAP library provides the persistent storage of the Terraform state files and the
downloaded SAP installation media for the control plane.

The configuration of the SAP library is performed in a Terraform tfvars variable file.

If you want to use an existing resource group for the SAP library, provide the Azure
resource ID for the resource group using the resource_group_arm_id parameter in the
deployer's tfvars file. If the parameter isn't defined, the resource group is created using
the default naming. You can change the default name using the resource_group_name
parameter.

Terraform parameters
This table shows the Terraform parameters. These parameters need to be entered
manually if you aren't using the deployment scripts or Azure Pipelines.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| deployer_tfstate_key | State file name for the deployer | Required | |

Environment parameters
This table shows the parameters that define the resource naming.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| environment | Identifier for the control plane (maximum of five characters) | Mandatory | For example, PROD for a production environment and NP for a nonproduction environment. |
| location | Azure region in which to deploy | Required | Use lowercase. |
| name_override_file | Name override file | Optional | See Custom naming. |

Resource group
This table shows the parameters that define the resource group.

| Variable | Description | Type |
| --- | --- | --- |
| resource_group_name | Name of the resource group to be created | Optional |
| resource_group_arm_id | Azure resource identifier for an existing resource group | Optional |
| resourcegroup_tags | Tags to be associated with the resource group | Optional |

SAP installation media storage account

| Variable | Description | Type |
| --- | --- | --- |
| library_sapmedia_arm_id | Azure resource identifier | Optional |

Terraform remote state storage account

| Variable | Description | Type |
| --- | --- | --- |
| library_terraform_state_arm_id | Azure resource identifier | Optional |

DNS support

| Variable | Description | Type |
| --- | --- | --- |
| dns_label | DNS name of the Private DNS zone. | Optional |
| use_custom_dns_a_registration | Use an existing Private DNS zone. | Optional |
| management_dns_subscription_id | Subscription ID for the subscription that contains the Private DNS zone. | Optional |
| management_dns_resourcegroup_name | Resource group that contains the Private DNS zone. | Optional |
Extra parameters

| Variable | Description | Type |
| --- | --- | --- |
| use_private_endpoint | Use private endpoints. | Optional |
| use_service_endpoint | Use service endpoints for subnets. | Optional |
| enable_firewall_for_keyvaults_and_storage | Restrict access to selected subnets. | Optional |
| subnets_to_add_to_firewall_for_keyvaults_and_storage | Subnets that need access to key vaults and storage accounts. | Optional |

Example parameters file for the SAP library (required parameters only)
Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment = "MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "westeurope"

Next step
Configure SAP system
Workload zone configuration in the SAP
automation framework
Article • 03/05/2024

An SAP application typically has multiple development tiers. For example, you might
have development, quality assurance, and production tiers. SAP Deployment
Automation Framework calls these tiers workload zones. See the following diagram for
an example of a workload zone with two SAP systems.

The workload zone provides shared services to all of the SAP Systems in the workload
zone. These shared services include:

Azure Virtual Network


Azure Key Vault
Shared Azure Storage Account for installation media
Azure NetApp Files account and capacity pool (optional)

The workload zone is typically deployed in a spoke subscription, and the deployment of
all the artifacts in the workload zone is done by using a unique service principal.
Workload zone deployment configuration
The configuration of the SAP workload zone is done via a Terraform tfvars variable file.
You can find examples of the variable file in the samples/WORKSPACES/LANDSCAPE folder.

The following sections show the different sections of the variable file.

Environment parameters
This table contains the parameters that define the environment settings.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| environment | Identifier for the workload zone (max five characters) | Mandatory | For example, PROD for a production environment and QA for a Quality Assurance environment. |
| location | The Azure region in which to deploy | Required | |
| name_override_file | Name override file | Optional | See Custom naming. |
| tags | A dictionary of tags to associate with all resources. | Optional | |

Resource group parameters


This table contains the parameters that define the resource group.

| Variable | Description | Type |
| --- | --- | --- |
| resource_group_name | Name of the resource group to be created | Optional |
| resource_group_arm_id | Azure resource identifier for an existing resource group | Optional |

Network parameters
The automation framework supports both creating the virtual network and the subnets
(green field) or using an existing virtual network and existing subnets (brown field) or a
combination of green field and brown field:

Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.

Ensure that the virtual network address space is large enough to host all the resources.

This table contains the networking parameters.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| network_logical_name | The logical name of the network, for example, SAP01 | Required | Used for resource naming |
| network_name | The name of the network | Optional | |
| network_arm_id | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
| network_address_space | The address range for the virtual network | Mandatory | For green-field deployments |
| admin_subnet_address_prefix | The address range for the admin subnet | Mandatory | For green-field deployments |
| admin_subnet_arm_id | The Azure resource identifier for the admin subnet | Mandatory | For brown-field deployments |
| admin_subnet_name | The name of the admin subnet | Optional | |
| admin_subnet_nsg_name | The name of the admin network security group | Optional | |
| admin_subnet_nsg_arm_id | The Azure resource identifier for the admin network security group | Mandatory | For brown-field deployments |
| db_subnet_address_prefix | The address range for the db subnet | Mandatory | For green-field deployments |
| db_subnet_arm_id | The Azure resource identifier for the db subnet | Mandatory | For brown-field deployments |
| db_subnet_name | The name of the db subnet | Optional | |
| db_subnet_nsg_name | The name of the db network security group | Optional | |
| db_subnet_nsg_arm_id | The Azure resource identifier for the db network security group | Mandatory | For brown-field deployments |
| app_subnet_address_prefix | The address range for the app subnet | Mandatory | For green-field deployments |
| app_subnet_arm_id | The Azure resource identifier for the app subnet | Mandatory | For brown-field deployments |
| app_subnet_name | The name of the app subnet | Optional | |
| app_subnet_nsg_name | The name of the app network security group | Optional | |
| app_subnet_nsg_arm_id | The Azure resource identifier for the app network security group | Mandatory | For brown-field deployments |
| web_subnet_address_prefix | The address range for the web subnet | Mandatory | For green-field deployments |
| web_subnet_arm_id | The Azure resource identifier for the web subnet | Mandatory | For brown-field deployments |
| web_subnet_name | The name of the web subnet | Optional | |
| web_subnet_nsg_name | The name of the web network security group | Optional | |
| web_subnet_nsg_arm_id | The Azure resource identifier for the web network security group | Mandatory | For brown-field deployments |

This table contains the networking parameters if Azure NetApp Files is used.
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| anf_subnet_arm_id | The Azure resource identifier for the ANF subnet | Required | When using existing subnets |
| anf_subnet_address_prefix | The address range for the ANF subnet | Required | When using ANF for deployments |
| anf_subnet_name | The name of the ANF subnet | Optional | |

This table contains the networking parameters if iSCSI devices are hosted from this
workload zone.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| iscsi_subnet_address_prefix | The address range for the iscsi subnet | Mandatory | For green-field deployments |
| iscsi_subnet_arm_id | The Azure resource identifier for the iscsi subnet | Mandatory | For brown-field deployments |
| iscsi_subnet_name | The name of the iscsi subnet | Optional | |
| iscsi_subnet_nsg_arm_id | The Azure resource identifier for the iscsi network security group | Mandatory | For brown-field deployments |
| iscsi_subnet_nsg_name | The name of the iscsi network security group | Optional | |

This table contains the networking parameters if Azure Monitor for SAP is hosted from
this workload zone.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| ams_subnet_address_prefix | The address range for the AMS subnet | Mandatory | For green-field deployments |
| ams_subnet_arm_id | The Azure resource identifier for the AMS subnet | Mandatory | For brown-field deployments |
| ams_subnet_name | The name of the AMS subnet | Optional | |
| ams_subnet_nsg_arm_id | The Azure resource identifier for the AMS network security group | Mandatory | For brown-field deployments |
| ams_subnet_nsg_name | The name of the AMS network security group | Optional | |

This table contains additional networking parameters.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| use_private_endpoint | Are private endpoints created for storage accounts and key vaults. | Optional | |
| use_service_endpoint | Are service endpoints defined for the subnets. | Optional | |
| peer_with_control_plane_vnet | Are virtual networks peered with the control plane virtual network. | Optional | Required for the SAP installation |
| public_network_access_enabled | Is public access enabled on the storage accounts and key vaults | Optional | |

Minimum required network definition

Terraform

network_logical_name = "SAP01"
network_address_space = "10.110.0.0/16"

db_subnet_address_prefix = "10.110.96.0/19"
app_subnet_address_prefix = "10.110.32.0/19"

Authentication parameters
This table defines the credentials used for defining the virtual machine authentication.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| automation_username | Administrator account name | Optional | Default: azureadm |
| automation_password | Administrator password | Optional | |
| automation_path_to_public_key | Path to existing public key | Optional | |
| automation_path_to_private_key | Path to existing private key | Optional | |

Minimum required authentication definition

Terraform

automation_username = "azureadm"

Key vault parameters


This table defines the parameters used for defining the key vault information.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| additional_users_to_add_to_keyvault_policies | A list of user object IDs to add to the deployment key vault access policies | Optional | |
| enable_purge_control_for_keyvaults | Disables the purge protection for Azure key vaults | Optional | Use only for test environments. |
| spn_keyvault_id | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
| user_keyvault_id | Azure resource identifier for existing system credentials key vault | Optional | |

Private DNS
| Variable | Description | Type |
| --- | --- | --- |
| dns_label | If specified, is the DNS name of the private DNS zone | Optional |
| dns_resource_group_name | The name of the resource group that contains the private DNS zone | Optional |
| register_virtual_network_to_dns | Controls if the SAP Virtual Network is registered with the private DNS zone | Optional |
| dns_server_list | If specified, a list of DNS Server IP addresses | Optional |
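
A minimal sketch, assuming the SAP virtual network should be registered with an existing private DNS zone; the zone name and resource group name are illustrative:

Terraform

dns_label                       = "azure.contoso.net"
dns_resource_group_name         = "MGMT-DNS-RG"
register_virtual_network_to_dns = true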

NFS support
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| create_transport_storage | If defined, create storage for the transport directories. | Optional | |
| export_install_path | If provided, export mount path for the installation media. | Optional | |
| export_transport_path | If provided, export mount path for the transport share. | Optional | |
| install_private_endpoint_id | Azure resource ID for the install private endpoint. | Optional | For existing endpoints |
| install_volume_size | Defines the size (in GB) for the install volume. | Optional | |
| NFS_provider | Defines what NFS back end to use. The options are AFS for Azure Files NFS or ANF for Azure NetApp Files, NONE for NFS from the SCS server, or NFS for an external NFS solution. | Optional | |
| transport_volume_size | Defines the size (in GB) for the transport volume. | Optional | |
| use_AFS_for_installation_media | If provided, uses AFS for the installation media. | Optional | |

Azure Files NFS support

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| install_storage_account_id | Azure resource identifier for the install storage account | Optional | For brown-field deployments |
| transport_storage_account_id | Azure resource identifier for the transport storage account | Optional | For brown-field deployments |

Minimum required Azure Files NFS definition

Terraform

NFS_provider = "AFS"
use_private_endpoint = true

Azure NetApp Files support

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| ANF_account_name | Name for the Azure NetApp Files account | Optional | |
| ANF_service_level | Service level for the Azure NetApp Files capacity pool | Optional | |
| ANF_pool_size | The size (in GB) of the Azure NetApp Files capacity pool | Optional | |
| ANF_qos_type | The quality of service type of the pool (auto or manual) | Optional | |
| ANF_use_existing_pool | Use existing for the Azure NetApp Files capacity pool | Optional | |
| ANF_pool_name | The name of the Azure NetApp Files capacity pool | Optional | |
| ANF_account_arm_id | Azure resource identifier for the Azure NetApp Files account | Optional | For brown-field deployments |
| ANF_transport_volume_use_existing | Defines if an existing transport volume is used | Optional | |
| ANF_transport_volume_name | Defines the transport volume name | Optional | For brown-field deployments |
| ANF_transport_volume_size | Defines the size of the transport volume in GB | Optional | |
| ANF_transport_volume_throughput | Defines the throughput of the transport volume | Optional | |
| ANF_install_volume_use_existing | Defines if an existing install volume is used | Optional | |
| ANF_install_volume_name | Defines the install volume name | Optional | For brown-field deployments |
| ANF_install_volume_size | Defines the size of the install volume in GB | Optional | |
| ANF_install_volume_throughput | Defines the throughput of the install volume | Optional | |

Minimum required ANF definition

Terraform

NFS_provider = "ANF"
anf_subnet_address_prefix = "10.110.64.0/27"
ANF_service_level = "Ultra"

DNS support

| Variable | Description | Type |
| --- | --- | --- |
| dns_label | DNS name of the private DNS zone | Optional |
| management_dns_resourcegroup_name | Resource group that contains the private DNS zone | Optional |
| management_dns_subscription_id | Subscription ID for the subscription that contains the private DNS zone | Optional |
| use_custom_dns_a_registration | Use an existing private DNS zone | Optional |

Other parameters
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| diagnostics_storage_account_arm_id | The Azure resource identifier for the diagnostics storage account | Required | For brown-field deployments. |
| enable_purge_control_for_keyvaults | If purge control is enabled on the key vault | Optional | Use only for test deployments. |
| place_delete_lock_on_resources | Places delete locks on the key vaults and the virtual network | Optional | |
| witness_storage_account_arm_id | The Azure resource identifier for the witness storage account | Required | For brown-field deployments. |

iSCSI parameters
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| iscsi_authentication_type | Defines the default authentication for the iSCSI virtual machines | Optional | |
| iscsi_authentication_username | Administrator account name | Optional | |
| iscsi_count | The number of iSCSI virtual machines | Optional | |
| iscsi_image | Defines the virtual machine image to use | Optional | |
| iscsi_nic_ips | IP addresses for the iSCSI virtual machines | Optional | Ignored if iscsi_use_DHCP is defined |
| iscsi_use_DHCP | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
| iscsi_vm_zones | Availability zones for the iSCSI virtual machines | Optional | |
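As an illustration, the following tfvars sketch deploys three iSCSI virtual machines across availability zones with subnet-provided IP addresses; the values are examples only:

Terraform

iscsi_count    = 3
iscsi_use_DHCP = true
iscsi_vm_zones = ["1", "2", "3"]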

Utility VM parameters
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| utility_vm_count | Defines the number of utility virtual machines to deploy | Optional | Use the utility virtual machine to host SAPGui |
| utility_vm_image | Defines the virtual machine image to use | Optional | Default: Windows Server 2019 |
| utility_vm_nic_ips | Defines the IP addresses for the virtual machines | Optional | |
| utility_vm_os_disk_size | Defines the size of the OS disk for the virtual machine | Optional | Default: 128 |
| utility_vm_os_disk_type | Defines the type of the OS disk for the virtual machine | Optional | Default: Premium_LRS |
| utility_vm_size | Defines the SKU for the utility virtual machines | Optional | Default: Standard_D4ds_v4 |
| utility_vm_useDHCP | Defines if Azure subnet provided IPs should be used | Optional | |
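For example, a single utility virtual machine for hosting SAPGui, keeping the default image, size, and disk settings, could be requested with this sketch (illustrative values):

Terraform

utility_vm_count   = 1
utility_vm_useDHCP = true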

Azure Monitor parameters


| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| create_ams_instance | Defines if an Azure Monitor for SAP instance should be created | Optional | |
| ams_instance_name | Defines the name of the instance | Optional | |
| ams_laws_arm_id | Defines the ARM resource ID for the Log Analytics workspace | Optional | |
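A minimal sketch that creates an Azure Monitor for SAP instance attached to an existing Log Analytics workspace might look like the following; the resource ID is a placeholder you would replace with your own:

Terraform

create_ams_instance = true
ams_laws_arm_id     = "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"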

Terraform parameters
This table contains the Terraform parameters. These parameters need to be entered
manually if you're not using the deployment scripts.

| Variable | Description | Type |
| --- | --- | --- |
| tfstate_resource_id | The Azure resource identifier for the storage account in the SAP library that contains the Terraform state files | Required |
| deployer_tfstate_key | The name of the state file for the deployer | Required |

Next step
About SAP system deployment in automation framework
Configure SAP system parameters
Article • 03/10/2024

Configuration for SAP Deployment Automation Framework happens through parameters files. You provide information about your SAP system infrastructure in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the samples repository.

The automation supports creating resources (green-field deployment) or using existing resources (brown-field deployment):

Green-field scenario: The automation defines default names for resources, but
some resource names might be defined in the tfvars file.
Brown-field scenario: The Azure resource identifiers for the resources must be
specified.

Deployment topologies
You can use the automation framework to deploy the following SAP architectures:

Standalone
Distributed
Distributed (highly available)

Standalone
In the standalone architecture, all the SAP roles are installed on a single server.

To configure this topology, define the database tier values and set
enable_app_tier_deployment to false.

Distributed
The distributed architecture has a separate database server and application tier. The
application tier can further be separated by having SAP central services on a virtual
machine and one or more application servers.

To configure this topology, define the database tier values and define scs_server_count
= 1, application_server_count >= 1.
High availability
The distributed (highly available) deployment is similar to the distributed architecture. In
this deployment, the database and/or SAP central services can both be configured by
using a highly available configuration that uses two virtual machines, each with
Pacemaker clusters or Windows failover clustering.

To configure this topology, define the database tier values and set
database_high_availability to true. Set scs_server_count = 1 and
scs_high_availability = true and application_server_count >= 1.
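For example, a distributed (highly available) system with two application servers could be described with the following tfvars sketch; the counts and flags are illustrative:

Terraform

database_high_availability = true
scs_server_count           = 1
scs_high_availability      = true
application_server_count   = 2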

Environment parameters
This section contains the parameters that define the environment settings.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| environment | Identifier for the workload zone (max five characters) | Mandatory | For example, PROD for a production environment and NP for a nonproduction environment. |
| location | The Azure region in which to deploy | Required | |
| custom_prefix | Specifies the custom prefix used in the resource naming | Optional | |
| use_prefix | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
| name_override_file | Name override file | Optional | See Custom naming. |
| save_naming_information | Creates a sample naming JSON file | Optional | See Custom naming. |
| tags | A dictionary of tags to associate with all resources | Optional | |
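As an illustration, a production system in West Europe might start from the following values; the tag names and values are examples only:

Terraform

environment = "PROD"
location    = "westeurope"

tags = {
  "DeployedBy" = "SDAF"
  "CostCenter" = "1234"
}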

Resource group parameters


This section contains the parameters that define the resource group.

| Variable | Description | Type |
| --- | --- | --- |
| resourcegroup_name | Name of the resource group to be created | Optional |
| resourcegroup_arm_id | Azure resource identifier for an existing resource group | Optional |
| resourcegroup_tags | Tags to be associated with the resource group | Optional |

Infrastructure parameters
This section contains the parameters related to the Azure infrastructure.

| Variable | Description | Type |
| --- | --- | --- |
| custom_disk_sizes_filename | Defines the disk sizing file name. See Custom sizing. | Optional |
| resource_offset | Provides an offset for resource naming. | Optional |
| use_loadbalancers_for_standalone_deployments | Controls if load balancers are deployed for standalone installations | Optional |
| user_assigned_identity_id | User assigned identity to assign to the virtual machines | Optional |
| vm_disk_encryption_set_id | The disk encryption key to use for encrypting managed disks by using customer-provided keys | Optional |
| use_random_id_for_storageaccounts | If defined, appends a random string to the storage account name | Optional |
| use_scalesets_for_deployment | Use Flexible Virtual Machine Scale Sets for the deployment | Optional |
| scaleset_id | Azure resource identifier for the virtual machine scale set | Optional |
| proximityplacementgroup_arm_ids | Specifies the Azure resource identifiers of existing proximity placement groups | |
| proximityplacementgroup_names | Specifies the names of the proximity placement groups | |
| use_app_proximityplacementgroups | Controls if the app tier virtual machines are placed in a different proximity placement group from the database | Optional |
| app_proximityplacementgroup_arm_ids | Specifies the Azure resource identifiers of existing proximity placement groups for the app tier | |
| app_proximityplacementgroup_names | Specifies the names of the proximity placement groups for the app tier | |
| use_spn | If defined, the deployment is performed by using a service principal; otherwise, a managed identity (MSI) | Optional |
| use_private_endpoint | Use private endpoints | Optional |

The resource_offset parameter controls the naming of resources. For example, if you
set the resource_offset to 1, the first disk will be named disk1 . The default value is 0.
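For instance, a sketch that offsets resource numbering and enables private endpoints might look like this (illustrative values):

Terraform

resource_offset      = 1
use_private_endpoint = true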

SAP Application parameters


This section contains the parameters related to the SAP Application.

| Variable | Description | Type |
| --- | --- | --- |
| sid | Defines the SAP application SID | Required |
| database_sid | Defines the database SID | Required |
| web_sid | Defines the Web Dispatcher SID | Required |
| scs_instance_number | The instance number of SCS | Optional |
| ers_instance_number | The instance number of ERS | Optional |
| pas_instance_number | The instance number of the Primary Application Server | Optional |
| app_instance_number | The instance number of the Application Server | Optional |
| database_instance_number | The instance number of the database | Optional |
| web_instance_number | The instance number of the Web Dispatcher | Optional |
| bom_name | Defines the name of the Bill of Materials file | Optional |
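A sketch of typical application identifiers follows; the SIDs and instance numbers are examples, not defaults:

Terraform

sid                 = "X00"
database_sid        = "XDB"
web_sid             = "W00"
scs_instance_number = "00"
ers_instance_number = "02"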

SAP virtual hostname parameters


In SAP Deployment Automation Framework, the SAP virtual hostname is defined by
specifying the use_secondary_ips parameter.

| Variable | Description | Type |
| --- | --- | --- |
| use_secondary_ips | Boolean flag that indicates if SAP should be installed by using virtual hostnames | Optional |

Database tier parameters


The database tier defines the infrastructure for the database tier. Supported database
back ends are:

HANA
DB2

ORACLE
ORACLE-ASM

ASE
SQLSERVER

NONE (in this case, no database tier is deployed)

See High-availability configuration for information on how to configure high availability.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| database_platform | Defines the database back end | Required | |
| database_vm_image | Defines the virtual machine image to use | Optional | |
| database_vm_sku | Defines the virtual machine SKU to use | Optional | |
| database_server_count | Defines the number of database servers | Optional | |
| database_high_availability | Defines if the database tier is deployed highly available | Optional | |
| database_vm_zones | Defines the availability zones for the database servers | Optional | |
| db_sizing_dictionary_key | Defines the database sizing information | Required | See Custom sizing. |
| database_vm_use_DHCP | Controls if Azure subnet-provided IP addresses should be used | Optional | |
| database_vm_db_nic_ips | Defines the IP addresses for the database servers (database subnet) | Optional | |
| database_vm_db_nic_secondary_ips | Defines the secondary IP addresses for the database servers (database subnet) | Optional | |
| database_vm_admin_nic_ips | Defines the IP addresses for the database servers (admin subnet) | Optional | |
| database_loadbalancer_ips | List of IP addresses for the database load balancer (db subnet) | Optional | |
| database_vm_authentication_type | Defines the authentication type (key/password) | Optional | |
| database_use_avset | Controls if the database servers are placed in availability sets | Optional | |
| database_use_ppg | Controls if the database servers are placed in proximity placement groups | Optional | |
| database_vm_avset_arm_ids | Defines the existing availability sets' Azure resource IDs | Optional | Primarily used with ANF pinning. |
| database_use_premium_v2_storage | Controls if the database tier uses premium storage v2 (HANA) | Optional | |
| database_dual_nics | Controls if the HANA database servers have dual network interfaces | Optional | |
| database_tags | Defines a list of tags to be applied to the database servers | Optional | |
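For example, a zone-resilient, highly available HANA database tier could be sketched as follows; the sizing key and zones are illustrative:

Terraform

database_platform          = "HANA"
database_server_count      = 2
database_high_availability = true
database_vm_zones          = ["1", "2"]
db_sizing_dictionary_key   = "M32ts"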

The virtual machine and the operating system image are defined by using the following
structure:

Terraform

{
  os_type         = "linux"
  type            = "marketplace"
  source_image_id = ""
  publisher       = "SUSE"
  offer           = "sles-sap-15-sp3"
  sku             = "gen2"
  version         = "latest"
}

Common application tier parameters


The application tier defines the infrastructure for the application tier, which can consist
of application servers, central services servers, and web dispatch servers.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| enable_app_tier_deployment | Defines if the application tier is deployed | Optional | |
| app_tier_sizing_dictionary_key | Lookup value that defines the VM SKU and the disk layout for the application tier servers | Optional | |
| app_disk_sizes_filename | Defines the custom disk size file for the application tier servers | Optional | See Custom sizing. |
| app_tier_authentication_type | Defines the authentication type for the application tier virtual machines | Optional | |
| app_tier_use_DHCP | Controls if Azure subnet-provided IP addresses should be used (dynamic) | Optional | |
| app_tier_dual_nics | Defines if the application tier servers have two network interfaces | Optional | |

SAP central services parameters


| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| scs_server_count | Defines the number of SCS servers | Required | |
| scs_high_availability | Defines if the central services is highly available | Optional | See High availability configuration. |
| scs_server_sku | Defines the virtual machine SKU to use | Optional | |
| scs_server_image | Defines the virtual machine image to use | Required | |
| scs_server_zones | Defines the availability zones of the SCS servers | Optional | |
| scs_server_app_nic_ips | List of IP addresses for the SCS servers (app subnet) | Optional | |
| scs_server_app_nic_secondary_ips | List of secondary IP addresses for the SCS servers (app subnet) | Optional | |
| scs_server_app_admin_nic_ips | List of IP addresses for the SCS servers (admin subnet) | Optional | |
| scs_server_loadbalancer_ips | List of IP addresses for the SCS load balancer (app subnet) | Optional | |
| scs_server_use_ppg | Controls if the SCS servers are placed in proximity placement groups | Optional | |
| scs_server_use_avset | Controls if the SCS servers are placed in availability sets | Optional | |
| scs_server_tags | Defines a list of tags to be applied to the SCS servers | Optional | |

Application server parameters


| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| application_server_count | Defines the number of application servers | Required | |
| application_server_sku | Defines the virtual machine SKU to use | Optional | |
| application_server_image | Defines the virtual machine image to use | Required | |
| application_server_zones | Defines the availability zones to which the application servers are deployed | Optional | |
| application_server_admin_nic_ips | List of IP addresses for the application servers (admin subnet) | Optional | |
| application_server_app_nic_ips[] | List of IP addresses for the application servers (app subnet) | Optional | |
| application_server_nic_secondary_ips[] | List of secondary IP addresses for the application servers (app subnet) | Optional | |
| application_server_use_ppg | Controls if application servers are placed in proximity placement groups | Optional | |
| application_server_use_avset | Controls if application servers are placed in availability sets | Optional | |
| application_server_tags | Defines a list of tags to be applied to the application servers | Optional | |
| application_server_vm_avset_arm_ids[] | List of availability set resource IDs for the application servers | Optional | |

Web dispatcher parameters


| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| webdispatcher_server_count | Defines the number of web dispatcher servers | Required | |
| webdispatcher_server_sku | Defines the virtual machine SKU to use | Optional | |
| webdispatcher_server_image | Defines the virtual machine image to use | Optional | |
| webdispatcher_server_zones | Defines the availability zones to which the web dispatchers are deployed | Optional | |
| webdispatcher_server_app_nic_ips[] | List of IP addresses for the web dispatcher servers (app/web subnet) | Optional | |
| webdispatcher_server_nic_secondary_ips[] | List of secondary IP addresses for the web dispatcher servers (app/web subnet) | Optional | |
| webdispatcher_server_app_admin_nic_ips | List of IP addresses for the web dispatcher servers (admin subnet) | Optional | |
| webdispatcher_server_use_ppg | Controls if web dispatchers are placed in proximity placement groups | Optional | |
| webdispatcher_server_use_avset | Controls if web dispatchers are placed in availability sets | Optional | |
| webdispatcher_server_tags | Defines a list of tags to be applied to the web dispatcher servers | Optional | |
| webdispatcher_server_loadbalancer_ips | List of IP addresses for the web load balancer (web/app subnet) | Optional | |

Network parameters
If the subnets aren't deployed using the workload zone deployment, they can be added
in the system's tfvars file.

The automation framework can either deploy the virtual network and the subnets
(green-field deployment) or use an existing virtual network and existing subnets (brown-
field deployments):

Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.

Ensure that the virtual network address space is large enough to host all the resources.

This section contains the networking parameters.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| network_logical_name | The logical name of the network | Required | |
| admin_subnet_name | The name of the admin subnet | Optional | |
| admin_subnet_address_prefix | The address range for the admin subnet | Mandatory | For green-field deployments |
| admin_subnet_arm_id * | The Azure resource identifier for the admin subnet | Mandatory | For brown-field deployments |
| admin_subnet_nsg_name | The name of the admin network security group | Optional | |
| admin_subnet_nsg_arm_id * | The Azure resource identifier for the admin network security group | Mandatory | For brown-field deployments |
| db_subnet_name | The name of the db subnet | Optional | |
| db_subnet_address_prefix | The address range for the db subnet | Mandatory | For green-field deployments |
| db_subnet_arm_id * | The Azure resource identifier for the db subnet | Mandatory | For brown-field deployments |
| db_subnet_nsg_name | The name of the db network security group | Optional | |
| db_subnet_nsg_arm_id * | The Azure resource identifier for the db network security group | Mandatory | For brown-field deployments |
| app_subnet_name | The name of the app subnet | Optional | |
| app_subnet_address_prefix | The address range for the app subnet | Mandatory | For green-field deployments |
| app_subnet_arm_id * | The Azure resource identifier for the app subnet | Mandatory | For brown-field deployments |
| app_subnet_nsg_name | The name of the app network security group | Optional | |
| app_subnet_nsg_arm_id * | The Azure resource identifier for the app network security group | Mandatory | For brown-field deployments |
| web_subnet_name | The name of the web subnet | Optional | |
| web_subnet_address_prefix | The address range for the web subnet | Mandatory | For green-field deployments |
| web_subnet_arm_id * | The Azure resource identifier for the web subnet | Mandatory | For brown-field deployments |
| web_subnet_nsg_name | The name of the web network security group | Optional | |
| web_subnet_nsg_arm_id * | The Azure resource identifier for the web network security group | Mandatory | For brown-field deployments |
| deploy_application_security_groups | Controls application security group deployments | Optional | |
| nsg_asg_with_vnet | If true, the network security group is placed with the VNet | Optional | |

* = Required for brown-field deployments
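For a green-field deployment, the subnet definitions might be sketched like this; the logical name and address ranges are examples only:

Terraform

network_logical_name        = "SAP01"
admin_subnet_address_prefix = "10.110.0.0/27"
db_subnet_address_prefix    = "10.110.0.64/26"
app_subnet_address_prefix   = "10.110.0.128/26"
web_subnet_address_prefix   = "10.110.0.192/27"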

Key vault parameters


If you don't want to use the workload zone key vault but another one, you can define
the key vault's Azure resource identifier in the system's tfvar file.

This section defines the parameters used for defining the key vault information.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| user_keyvault_id | Azure resource identifier for the existing system credentials key vault | Optional | |
| spn_keyvault_id | Azure resource identifier for the existing deployment credentials (SPNs) key vault | Optional | |
| enable_purge_control_for_keyvaults | Disables the purge protection for Azure key vaults | Optional | Only use for test environments. |

Anchor virtual machine parameters


SAP Deployment Automation Framework supports having an anchor virtual machine.
The anchor virtual machine is the first virtual machine to be deployed. It's used to
anchor the proximity placement group.
This section contains the parameters related to the anchor virtual machine.

| Variable | Description | Type |
| --- | --- | --- |
| deploy_anchor_vm | Defines if the anchor virtual machine is used | Optional |
| anchor_vm_accelerated_networking | Defines if the anchor VM is configured to use accelerated networking | Optional |
| anchor_vm_authentication_type | Defines the authentication type for the anchor VM (key or password) | Optional |
| anchor_vm_authentication_username | Defines the username for the anchor VM | Optional |
| anchor_vm_image | Defines the VM image to use (as shown in the following code sample) | Optional |
| anchor_vm_nic_ips[] | List of IP addresses for the anchor VMs (app subnet) | Optional |
| anchor_vm_sku | Defines the VM SKU to use, for example, Standard_D4s_v3 | Optional |
| anchor_vm_use_DHCP | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional |

The virtual machine and the operating system image are defined by using the following
structure:

Terraform

{
  os_type         = "linux"
  type            = "marketplace"
  source_image_id = ""
  publisher       = "SUSE"
  offer           = "sles-sap-15-sp5"
  sku             = "gen2"
  version         = "latest"
}

Authentication parameters
By default, the SAP system deployment uses the credentials from the SAP workload
zone. If the SAP system needs unique credentials, you can provide them by using these
parameters.
| Variable | Description | Type |
| --- | --- | --- |
| automation_username | Administrator account name | Optional |
| automation_password | Administrator password | Optional |
| automation_path_to_public_key | Path to existing public key | Optional |
| automation_path_to_private_key | Path to existing private key | Optional |

Miscellaneous parameters
| Variable | Description |
| --- | --- |
| license_type | Specifies the license type for the virtual machines. Possible values are RHEL_BYOS and SLES_BYOS. For Windows, the possible values are None, Windows_Client, and Windows_Server. |
| use_zonal_markers | Specifies if zonal virtual machines include a zonal identifier: xooscs_z1_00l### versus xooscs00l### |
| deploy_v1_monitoring_extension | Defines if the Microsoft.AzureCAT.AzureEnhancedMonitoring extension is deployed |
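For example, a SUSE bring-your-own-subscription system that skips the legacy monitoring extension could set the following (illustrative):

Terraform

license_type                   = "SLES_BYOS"
deploy_v1_monitoring_extension = false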

NFS support
| Variable | Description | Type |
| --- | --- | --- |
| NFS_provider | Defines what NFS back end to use. The options are AFS for Azure Files NFS or ANF for Azure NetApp Files. | Optional |
| sapmnt_volume_size | Defines the size (in GB) for the sapmnt volume | Optional |

Azure files NFS support

| Variable | Description | Type |
| --- | --- | --- |
| azure_files_sapmnt_id | If provided, the Azure resource ID of the storage account used for sapmnt | Optional |
| sapmnt_private_endpoint_id | If provided, the Azure resource ID of the sapmnt private endpoint | Optional |

HANA Scaleout support

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| database_HANA_use_ANF_scaleout_scenario | Defines if HANA scaleout is used | Optional | |
| stand_by_node_count | The number of standby nodes | Optional | |

Azure NetApp Files support

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| ANF_HANA_use_AVG | Use Application Volume Group for the volumes | Optional | |
| ANF_HANA_use_Zones | Deploy the Azure NetApp Files volume zonally | Optional | |
| ANF_HANA_data | Create Azure NetApp Files volume for HANA data | Optional | |
| ANF_HANA_data_use_existing_volume | Use existing Azure NetApp Files volume for HANA data | Optional | Use for pre-created volumes. |
| ANF_HANA_data_volume_count | Number of HANA data volumes | Optional | |
| ANF_HANA_data_volume_name | Azure NetApp Files volume name for HANA data | Optional | |
| ANF_HANA_data_volume_size | Azure NetApp Files volume size in GB for HANA data | Optional | Default size is 256. |
| ANF_HANA_data_volume_throughput | Azure NetApp Files volume throughput for HANA data | Optional | Default is 128 MBs/s. |
| ANF_HANA_log | Create Azure NetApp Files volume for HANA log | Optional | |
| ANF_HANA_log_use_existing | Use existing Azure NetApp Files volume for HANA log | Optional | Use for pre-created volumes. |
| ANF_HANA_log_volume_count | Number of HANA log volumes | Optional | |
| ANF_HANA_log_volume_name | Azure NetApp Files volume name for HANA log | Optional | |
| ANF_HANA_log_volume_size | Azure NetApp Files volume size in GB for HANA log | Optional | Default size is 128. |
| ANF_HANA_log_volume_throughput | Azure NetApp Files volume throughput for HANA log | Optional | Default is 128 MBs/s. |
| ANF_HANA_shared | Create Azure NetApp Files volume for HANA shared | Optional | |
| ANF_HANA_shared_use_existing | Use existing Azure NetApp Files volume for HANA shared | Optional | Use for pre-created volumes. |
| ANF_HANA_shared_volume_name | Azure NetApp Files volume name for HANA shared | Optional | |
| ANF_HANA_shared_volume_size | Azure NetApp Files volume size in GB for HANA shared | Optional | Default size is 128. |
| ANF_HANA_shared_volume_throughput | Azure NetApp Files volume throughput for HANA shared | Optional | Default is 128 MBs/s. |
| ANF_sapmnt | Create Azure NetApp Files volume for sapmnt | Optional | |
| ANF_sapmnt_use_existing_volume | Use existing Azure NetApp Files volume for sapmnt | Optional | Use for pre-created volumes. |
| ANF_sapmnt_volume_name | Azure NetApp Files volume name for sapmnt | Optional | |
| ANF_sapmnt_volume_size | Azure NetApp Files volume size in GB for sapmnt | Optional | Default size is 128. |
| ANF_sapmnt_throughput | Azure NetApp Files volume throughput for sapmnt | Optional | Default is 128 MBs/s. |
| ANF_sapmnt_use_clone_in_secondary_zone | Create the secondary sapmnt volume as a clone | Optional | |
| ANF_usr_sap | Create Azure NetApp Files volume for usrsap | Optional | |
| ANF_usr_sap_use_existing | Use existing Azure NetApp Files volume for usrsap | Optional | Use for pre-created volumes. |
| ANF_usr_sap_volume_name | Azure NetApp Files volume name for usrsap | Optional | |
| ANF_usr_sap_volume_size | Azure NetApp Files volume size in GB for usrsap | Optional | Default size is 128. |
| ANF_usr_sap_throughput | Azure NetApp Files volume throughput for usrsap | Optional | Default is 128 MBs/s. |
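As an illustration, the following sketch creates the HANA data, log, and shared volumes and overrides the data volume size and throughput; the numbers are examples, not sizing recommendations:

Terraform

ANF_HANA_data                   = true
ANF_HANA_data_volume_size       = 512
ANF_HANA_data_volume_throughput = 250
ANF_HANA_log                    = true
ANF_HANA_shared                 = true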

Oracle parameters
These parameters need to be updated in the sap-parameters.yaml file when you deploy
Oracle-based systems.

| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| ora_release | Release of Oracle, for example, 19 | Mandatory | |
| ora_version | Version of Oracle, for example, 19.0.0 | Mandatory | |
| oracle_sbp_patch | Oracle SBP patch file name, for example, SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials |
| use_observer | Defines if an observer will be used | Optional | |

You can use the configuration_settings variable to let Terraform add them to sap-
parameters.yaml file.

Terraform

configuration_settings = {
  ora_release          = "19",
  ora_version          = "19.0.0",
  oracle_sbp_patch     = "SAP19P_2202-70004508.ZIP",
  oraclegrid_sbp_patch = "GIRU19P_2202-70004508.ZIP",
}

DNS support

| Variable | Description | Type |
| --- | --- | --- |
| management_dns_resourcegroup_name | Resource group that contains the private DNS zone | Optional |
| management_dns_subscription_id | Subscription ID for the subscription that contains the private DNS zone | Optional |
| use_custom_dns_a_registration | Use an existing private DNS zone | Optional |
| dns_a_records_for_secondary_names | Registers A records for the secondary IP addresses | Optional |

Azure Monitor for SAP parameters


| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| ams_resource_id | Defines the ARM resource ID for Azure Monitor for SAP | Optional | |
| enable_ha_monitoring | Defines if Prometheus high availability cluster monitoring is enabled | Optional | |
| enable_os_monitoring | Defines if Prometheus high availability OS monitoring is enabled | Optional | |

Other parameters
| Variable | Description | Type | Notes |
| --- | --- | --- | --- |
| Agent_IP | IP address of the agent | Optional | |
| add_Agent_IP | Controls if the agent IP is added to the key vault and storage account firewalls | Optional | |

Terraform parameters
This section contains the Terraform parameters. These parameters need to be entered
manually if you're not using the deployment scripts.

| Variable | Description | Type |
| --- | --- | --- |
| tfstate_resource_id | Azure resource identifier for the storage account in the SAP library that contains the Terraform state files | Required * |
| deployer_tfstate_key | The name of the state file for the deployer | Required * |
| landscaper_tfstate_key | The name of the state file for the workload zone | Required * |

* = Required for manual deployments

High-availability configuration
The high-availability configuration for the database tier and the SCS tier is configured by
using the database_high_availability and scs_high_availability flags. Red Hat and
SUSE should use the appropriate HA version of the virtual machine images (RHEL-SAP-
HA, sles-sap-15-sp?).

High-availability configurations use Pacemaker with Azure fencing agents.

Cluster parameters
This section contains the parameters related to the cluster configuration.

| Variable | Description | Type |
| --- | --- | --- |
| database_cluster_disk_lun | Specifies the LUN of the shared disk for the database cluster | Optional |
| database_cluster_disk_size | The size of the shared disk for the database cluster | Optional |
| database_cluster_type | Cluster quorum type: AFA (Azure Fencing Agent), ASD (Azure Shared Disk), or ISCSI | Optional |
| fencing_role_name | Specifies the Azure role assignment to assign to enable fencing | Optional |
| idle_timeout_scs_ers | Sets the idle timeout setting for the SCS and ERS load balancer | Optional |
| scs_cluster_disk_lun | Specifies the LUN of the shared disk for the central services cluster | Optional |
| scs_cluster_disk_size | The size of the shared disk for the central services cluster | Optional |
| scs_cluster_type | Cluster quorum type: AFA (Azure Fencing Agent), ASD (Azure Shared Disk), or ISCSI | Optional |
| use_msi_for_clusters | If defined, configures the Pacemaker cluster by using managed identities | Optional |
| use_simple_mount | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer) | Optional |
| use_fence_kdump | Configure fencing device based on the fence agent fence_kdump | Optional |
| use_fence_kdump_lun_db | Default LUN number of the kdump disk (database) | Optional |
| use_fence_kdump_lun_scs | Default LUN number of the kdump disk (central services) | Optional |
| use_fence_kdump_size_gb_db | Default size of the kdump disk (database) | Optional |
| use_fence_kdump_size_gb_scs | Default size of the kdump disk (central services) | Optional |
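For example, a configuration that uses an Azure shared disk for the central services cluster, the Azure fencing agent for the database, and managed identities for fencing might be sketched as follows (illustrative):

Terraform

scs_cluster_type      = "ASD"
database_cluster_type = "AFA"
use_msi_for_clusters  = true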

Note

The highly available central services deployment requires using a shared file system for sap_mnt. You can use Azure Files or Azure NetApp Files by using the NFS_provider attribute. The default is Azure Files. To use Azure NetApp Files, set the NFS_provider attribute to ANF.

Fencing agent configuration


SAP Deployment Automation Framework supports using either managed identities or
service principals for fencing agents. The following section describes how to configure
each option.

If you set the variable use_msi_for_clusters to true , the fencing agent uses managed
identities.

If you want to use a service principal for the fencing agent, set that variable to false.

The fencing agents should be configured to use a unique service principal with
permissions to stop and start virtual machines. For more information, see Create a
fencing agent.

Azure CLI

az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"

Replace <prefix> with the name prefix of your environment, such as DEV-WEEU-SAP01 .
Replace <subscriptionID> with the workload zone subscription ID.

Important

The name of the fencing agent service principal must be unique in the tenant. The script assumes that a role Linux Fence Agent Role was already created.

Record the values from the fencing agent SPN:

appId
password
tenant

The fencing agent details must be stored in the workload zone key vault by using a
predefined naming convention. Replace <prefix> with the name prefix of your
environment, such as DEV-WEEU-SAP01 . Replace <workload_kv_name> with the name of the
key vault from the workload zone resource group. For the other values, use the values
recorded from the previous step and run the script.

Azure CLI

az keyvault secret set --name "<prefix>-fencing-spn-id" --vault-name "<workload_kv_name>" --value "<appId>";
az keyvault secret set --name "<prefix>-fencing-spn-pwd" --vault-name "<workload_kv_name>" --value "<password>";
az keyvault secret set --name "<prefix>-fencing-spn-tenant" --vault-name "<workload_kv_name>" --value "<tenant>";

Next steps
Deploy SAP system
Configure SAP installation parameters
Article • 08/29/2023

The Ansible playbooks use a combination of default parameters and parameters defined
by the Terraform deployment for the SAP installation.

Default parameters
The following tables contain the default parameters defined by the framework.

User IDs
This table contains the IDs for the SAP users and groups for the different platforms.

HANA

| Parameter | Description | Default value |
| --- | --- | --- |
| sapadm_uid | The UID for the sapadm account | 2100 |
| sidadm_uid | The UID for the sidadm account | 2003 |
| hdbadm_uid | The UID for the hdbadm account | 2200 |
| sapinst_gid | The GID for the sapinst group | 2001 |
| sapsys_gid | The GID for the sapsys group | 2000 |
| hdbshm_gid | The GID for the hdbshm group | 2002 |

DB2

| Parameter | Description | Default value |
| --- | --- | --- |
| db2sidadm_uid | The UID for the db2sidadm account | 3004 |
| db2sapsid_uid | The UID for the db2sapsid account | 3005 |
| db2sysadm_gid | The GID for the db2sysadm group | 3000 |
| db2sysctrl_gid | The GID for the db2sysctrl group | 3001 |
| db2sysmaint_gid | The GID for the db2sysmaint group | 3002 |
| db2sysmon_gid | The GID for the db2sysmon group | 2003 |

ORACLE

| Parameter | Description | Default value |
| --- | --- | --- |
| orasid_uid | The UID for the orasid account | 3100 |
| oracle_uid | The UID for the oracle account | 3101 |
| observer_uid | The UID for the observer account | 4000 |
| dba_gid | The GID for the dba group | 3100 |
| oper_gid | The GID for the oper group | 3101 |
| asmoper_gid | The GID for the asmoper group | 3102 |
| asmadmin_gid | The GID for the asmadmin group | 3103 |
| asmdba_gid | The GID for the asmdba group | 3104 |
| oinstall_gid | The GID for the oinstall group | 3105 |
| backupdba_gid | The GID for the backupdba group | 3106 |
| dgdba_gid | The GID for the dgdba group | 3107 |
| kmdba_gid | The GID for the kmdba group | 3108 |
| racdba_gid | The GID for the racdba group | 3108 |

Windows parameters
This table contains the information pertinent to Windows deployments.

| Parameter | Description | Default value |
| --- | --- | --- |
| mssserver_version | SQL Server version | mssserver2019 |

Parameters
The following tables contain the parameters stored in the sap-parameters.yaml file. Most
of the values are prepopulated via the Terraform deployment.

Infrastructure

| Parameter | Description | Type |
| --- | --- | --- |
| sap_fqdn | The FQDN suffix for the virtual machines to be added to the local hosts file | Required |
Application tier

| Parameter | Description | Type |
| --- | --- | --- |
| bom_base_name | The name of the SAP Application Bill of Materials file | Required |
| sap_sid | The SID of the SAP application | Required |
| scs_high_availability | Defines if the central services is deployed highly available | Required |
| scs_instance_number | Defines the instance number for ASCS | Optional |
| scs_lb_ip | IP address of the ASCS instance | Optional |
| scs_virtual_hostname | The host name of the ASCS instance | Optional |
| ers_instance_number | Defines the instance number for ERS | Optional |
| ers_lb_ip | IP address of the ERS instance | Optional |
| ers_virtual_hostname | The host name of the ERS instance | Optional |
| pas_instance_number | Defines the instance number for PAS | Optional |
| web_sid | The SID for the web dispatcher | Required if web dispatchers are deployed |
| scs_clst_lb_ip | IP address of the Windows cluster service | Optional |
Database tier

| Parameter | Description | Type |
| --- | --- | --- |
| db_sid | The SID of the SAP database | Required |
| db_instance_number | Defines the instance number for the database | Required |
| db_high_availability | Defines if the database is deployed highly available | Required |
| db_lb_ip | IP address of the database load balancer | Optional |
| platform | The database platform. Valid values are ASE, DB2, HANA, ORACLE, and SQLSERVER. | Required |
| db_clst_lb_ip | IP address of the database cluster for Windows | Optional |

NFS
| Parameter | Description | Type |
| --- | --- | --- |
| NFS_provider | Defines what NFS back end to use. The options are AFS for Azure Files NFS, ANF for Azure NetApp Files, NONE for NFS from the SCS server, or NFS for an external NFS solution. | Optional |
| sap_mnt | The NFS path for sap_mnt | Required |
| sap_trans | The NFS path for sap_trans | Required |
| usr_sap_install_mountpoint | The NFS path for usr/sap/install | Required |

Azure NetApp Files

| Parameter | Description | Type |
| --- | --- | --- |
| hana_data | The NFS path for hana_data volumes | Required |
| hana_log | The NFS path for hana_log volumes | Required |
| hana_shared | The NFS path for hana_shared volumes | Required |
| usr_sap | The NFS path for /usr/sap volumes | Required |

Windows support

| Parameter | Description | Type |
| --- | --- | --- |
| domain_name | Defines the Windows domain name, for example, sap.contoso.net | Required |
| domain | Defines the Windows domain Netbios name, for example, sap | Optional |

SQL

| Parameter | Description | Type |
| --- | --- | --- |
| use_sql_for_SAP | Uses the SAP-defined SQL Server media, defaults to true | Optional |
| win_cluster_share_type | Defines the cluster type (CSD/FS), defaults to CSD | Optional |

Miscellaneous
| Parameter | Description | Type |
| --- | --- | --- |
| kv_name | The name of the Azure key vault that contains the system credentials | Required |
| secret_prefix | The prefix for the name of the secrets for the SID stored in the key vault | Required |
| upgrade_packages | Updates all installed packages on the virtual machines | Required |
| use_msi_for_clusters | Uses managed identities for fencing | Required |

Disks
The disks parameter defines a dictionary with information about the disks of all the virtual machines in the SAP system.

| Attribute | Description | Type |
| --- | --- | --- |
| host | The computer name of the virtual machine | Required |
| LUN | Defines the LUN number that the disk is attached to | Required |
| type | This attribute is used to group the disks. Each disk of the same type is added to the LVM on the virtual machine. | Required |

Example of the disks dictionary:

YAML

disks:
- { host: 'rh8dxdb00l084', LUN: 0, type: 'sap' }
- { host: 'rh8dxdb00l084', LUN: 10, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 11, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 12, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 13, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 20, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 21, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 22, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 2, type: 'backup' }
- { host: 'rh8dxdb00l184', LUN: 0, type: 'sap' }
- { host: 'rh8dxdb00l184', LUN: 10, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 11, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 12, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 13, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 20, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 21, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 22, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 2, type: 'backup' }
- { host: 'rh8app00l84f', LUN: 0, type: 'sap' }
- { host: 'rh8app01l84f', LUN: 0, type: 'sap' }
- { host: 'rh8scs00l84f', LUN: 0, type: 'sap' }
- { host: 'rh8scs01l84f', LUN: 0, type: 'sap' }

Oracle support
From the v3.4 release, it's possible to deploy SAP on Azure systems in a shared home
configuration by using an Oracle database back end. For more information on running
SAP on Oracle in Azure, see Azure Virtual Machines Oracle DBMS deployment for SAP
workload.

To install the Oracle back end by using SAP Deployment Automation Framework, you
need to provide the following parameters:

| Parameter | Description | Type |
| --- | --- | --- |
| platform | The database back end, ORACLE | Required |
| ora_release | The Oracle release version, for example, 19 | Required |
| ora_version | The Oracle version, for example, 19.0.0 | Required |
| oracle_sbp_patch | The Oracle SBP patch file name | Required |

Shared home support


To configure shared home support for Oracle, you need to add a dictionary that defines
the SIDs to be deployed. You can do that by adding the parameter MULTI_SIDS that
contains a list of the SIDs and the SID details.

YAML

MULTI_SIDS:
  - {sid: 'DE1', dbsid_uid: '3005', sidadm_uid: '2001', ascs_inst_no: '00', pas_inst_no: '00', app_inst_no: '00'}
  - {sid: 'QE1', dbsid_uid: '3006', sidadm_uid: '2002', ascs_inst_no: '01', pas_inst_no: '01', app_inst_no: '01'}

Each row must specify the following parameters:

| Parameter | Description | Type |
| --- | --- | --- |
| sid | The SID for the instance | Required |
| dbsid_uid | The UID for the DB admin user for the instance | Required |
| sidadm_uid | The UID for the SID admin user for the instance | Required |
| ascs_inst_no | The ASCS instance number for the instance | Required |
| pas_inst_no | The PAS instance number for the instance | Required |
| app_inst_no | The APP instance number for the instance | Required |

Override the default parameters


You can override the default parameters by either specifying them in the sap-parameters.yaml file or by passing them as command-line parameters to the Ansible playbooks.

For example, if you want to override the default value of the group ID for the sapinst
group ( sapinst_gid ) parameter, add the following line to the sap-parameters.yaml file:

YAML

sapinst_gid: 1000

If you want to provide them as parameters for the Ansible playbooks, add the following
parameter to the command line:

Bash

ansible-playbook -i hosts SID_hosts.yaml --extra-vars "sapinst_gid=1000" .....

You can also override the default parameters by specifying them in the configuration_settings variable in your tfvars file. For example, if you want to override sapinst_gid, your tfvars file should contain the following line:

Terraform

configuration_settings = {
sapinst_gid = "1000"
}
Next step
Deploy the SAP system
Change the disk configuration for SAP
Deployment Automation Framework
Article • 08/29/2023

By default, SAP Deployment Automation Framework defines the disk configuration for
SAP systems. As needed, you can change the default configuration by providing a
custom disk configuration JSON file.

Tip

When possible, it's a best practice to increase the disk size instead of adding more disks.

HANA databases
The table shows the default disk configuration for HANA systems.

| Size | VM SKU | OS disk | Data disks | Log disks | HANA shared | User SAP | Backup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Default | Standard_D8s_v3 | E6 (64 GB) | P20 (512 GB) | P20 (512 GB) | E20 (512 GB) | E6 (64 GB) | E20 (512 GB) |
| S4DEMO | Standard_E32ds_v4 | P10 (128 GB) | P10x4 (128 GB) | P10x3 (128 GB) | P20 (512 GB) | | P20 (512 GB) |
| M32ts | Standard_M32ts | P6 (64 GB) | P6x4 (64 GB) | P10x3 (128 GB) | P20 (512 GB) | P6 (64 GB) | P20 (512 GB) |
| M32ls | Standard_M32ls | P6 (64 GB) | P6x4 (64 GB) | P10x3 (128 GB) | P20 (512 GB) | P6 (64 GB) | P20 (512 GB) |
| M64ls | Standard_M64ls | P6 (64 GB) | P10x4 (128 GB) | P10x3 (128 GB) | P20 (512 GB) | P6 (64 GB) | P30 (1024 GB) |
| M64s | Standard_M64s | P10 (128 GB) | P15x4 (256 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P30 (1024 GB) |
| M64ms | Standard_M64ms | P6 (64 GB) | P20x4 (512 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P30x2 (1024 GB) |
| M128s | Standard_M128s | P10 (128 GB) | P20x4 (512 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P30x2 (1024 GB) |
| M128ms | Standard_M128ms | P10 (128 GB) | P30x4 (1024 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P30x4 (1024 GB) |
| M208s_v2 | Standard_M208s_v2 | P10 (128 GB) | P30x4 (1024 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P40x3 (2048 GB) |
| M208ms_v2 | Standard_M208ms_v2 | P10 (128 GB) | P40x4 (2048 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P40x3 (2048 GB) |
| M416s_v2 | Standard_M416s_v2 | P10 (128 GB) | P40x4 (2048 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P40x3 (2048 GB) |
| M416ms_v2 | Standard_M416ms_v2 | P10 (128 GB) | P50x4 (4096 GB) | P15x3 (256 GB) | P30 (1024 GB) | P6 (64 GB) | P50x4 (4096 GB) |
| E20ds_v4 | Standard_E20ds_v4 | P6 (64 GB) | P10x3 (128 GB) | Ultra (80 GB) | P15 (256 GB) | P6 (64 GB) | P15 (256 GB) |
| E20ds_v5 | Standard_E20ds_v5 | P6 (64 GB) | P10x3 (128 GB) | Ultra (80 GB) | P15 (256 GB) | P6 (64 GB) | P15 (256 GB) |
| E32ds_v4 | Standard_E32ds_v4 | P6 (64 GB) | P10x3 (128 GB) | Ultra (128 GB) | P15 (256 GB) | P6 (64 GB) | P15 (256 GB) |
| E32ds_v5 | Standard_E32ds_v5 | P6 (64 GB) | P10x3 (128 GB) | Ultra (128 GB) | P15 (256 GB) | P6 (64 GB) | P15 (256 GB) |
| E48ds_v4 | Standard_E48ds_v4 | P6 (64 GB) | P15x3 (256 GB) | Ultra (192 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |
| E48ds_v5 | Standard_E48ds_v5 | P6 (64 GB) | P15x3 (256 GB) | Ultra (192 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |
| E64ds_v3 | Standard_E64ds_v3 | P6 (64 GB) | P15x3 (256 GB) | Ultra (220 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |
| E64ds_v4 | Standard_E64ds_v4 | P6 (64 GB) | P15x3 (256 GB) | Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |
| E64ds_v5 | Standard_E64ds_v5 | P6 (64 GB) | P15x3 (256 GB) | Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |
| E96ds_v5 | Standard_E96ds_v5 | P6 (64 GB) | P15x3 (256 GB) | Ultra (256 GB) | P20 (512 GB) | P6 (64 GB) | P15 (256 GB) |

AnyDB databases
The table shows the default disk configuration for AnyDB systems.

| Size | VM SKU | OS disk | Data disks | Log disks |
| --- | --- | --- | --- | --- |
| Default | Standard_E4s_v3 | P6 (64 GB) | P15 (256 GB) | P10 (128 GB) |
| 200 GB | Standard_E4s_v3 | P6 (64 GB) | P15 (256 GB) | P10 (128 GB) |
| 500 GB | Standard_E8s_v3 | P6 (64 GB) | P20 (512 GB) | P15 (256 GB) |
| 1 TB | Standard_E16s_v3 | P10 (128 GB) | P20x2 (512 GB) | P15x2 (256 GB) |
| 2 TB | Standard_E32s_v3 | P10 (128 GB) | P30x2 (1024 GB) | P20x2 (512 GB) |
| 5 TB | Standard_M64ls | P10 (128 GB) | P30x5 (1024 GB) | P20x2 (512 GB) |
| 10 TB | Standard_M64s | P10 (128 GB) | P40x5 (2048 GB) | P20x2 (512 GB) |
| 15 TB | Standard_M64s | P10 (128 GB) | P50x4 (4096 GB) | P20x2 (512 GB) |
| 20 TB | Standard_M64s | P10 (128 GB) | P50x5 (4096 GB) | P20x2 (512 GB) |
| 30 TB | Standard_M128s | P10 (128 GB) | P50x8 (4096 GB) | P40x2 (2048 GB) |
| 40 TB | Standard_M128s | P10 (128 GB) | P50x10 (4096 GB) | P40x2 (2048 GB) |
| 50 TB | Standard_M128s | P10 (128 GB) | P50x13 (4096 GB) | P40x2 (2048 GB) |

Custom sizing file


You can define the disk sizing for an SAP system by using a custom sizing JSON file. The
file is grouped in four sections: db , app , scs , and web . Each section contains a list of
disk configuration names. For example, for the database tier, the names might be M32ts
or M64s .

These sections contain the information for the default virtual machine size and the list of
disks to be deployed for each tier.

Create a file by using the structure shown in the following code sample. Save the file in the same folder as the parameter file for the system. For instance, use XO1_sizes.json. Then define the parameter custom_disk_sizes_filename in the parameter file. For example, use custom_disk_sizes_filename = "XO1_sizes.json".

 Tip

The path to the disk configuration needs to be relative to the folder that contains
the tfvars file.

The following sample code is an example configuration file. It defines three data disks
(LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU), and a backup disk (LUN 13).
The application tier servers (application, central services, and web dispatchers) are
deployed with just a single sap data disk.

The three data disks are striped by using LVM. The log disk and the backup disk are
each mounted as a single disk.

JSON

{
"db" : {
"Default": {
"compute": {
"vm_size" : "Standard_E20ds_v4",
"swap_size_gb" : 2
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "data",
"count" : 3,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
},
{
"name" : "log",
"count" : 1,
"disk_type" : "UltraSSD_LRS",
"size_gb": 512,
"disk-iops-read-write" : 2048,
"disk-mbps-read-write" : 8,
"caching" : "None",
"write_accelerator" : false,
"lun_start" : 9
},
{
"name" : "backup",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 13
}

]
}
},
"app" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}

]
}
},
"scs" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}

]
}
},
"web" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}

]
}
}
}

Add extra disks to an existing system


If you need to add disks to an already deployed system, you can add a new block to
your JSON structure. Include the attribute append in this block, and set the value to
true . For example, in the following sample code, the last block contains the attribute

"append" : true, . The last block adds a new disk to the database tier, which is already

configured in the first "data" block in the code.

JSON

{
"db" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3",
"swap_size_gb" : 2
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "data",
"count" : 3,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"start_lun" : 0
},
{
"name" : "log",
"count" : 1,
"disk_type" : "UltraSSD_LRS",
"size_gb": 512,
"disk-iops-read-write" : 2048,
"disk-mbps-read-write" : 8,
"caching" : "None",
"write_accelerator" : false,
"start_lun" : 9
},
{
"name" : "backup",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"start_lun" : 13
}
,
{
"name" : "data",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"append" : true,
"start_lun" : 4
}

]
}
}
}

Next step
Configure custom naming
Extending the SAP Deployment Automation Framework
Article • 02/26/2024

Within the SAP Deployment Automation Framework (SDAF), we recognize the importance of adaptability and customization to meet the
unique needs of various deployments. This document describes the ways to extend the framework's capabilities, ensuring that it aligns with
your specific requirements.

Some of the common scenarios for extending the framework include:

Forking the Source Code Repository: One method of extending SDAF is by forking the source code repository. This approach grants
you the flexibility to make tailored modifications within your own forked version of the code. By doing so, you gain control over the
framework's core functionality, enabling you to tailor it precisely to your deployment objectives.

Adding Stages to the SAP Configuration Pipeline: Another way to customize is by adding stages to the SAP configuration pipeline. This approach allows you to integrate specific processes or steps that are integral to your deployment workflows into the automation pipeline.

Streamlined Extensibility: This capability allows you to effortlessly incorporate your existing Ansible playbooks directly into the SDAF.
By using this feature, you can seamlessly integrate your Ansible automation scripts with the framework, further enhancing its
versatility.

Configuration extensibility: This feature allows you to extend the framework's configuration capabilities by adding custom
repositories, packages, kernel parameters, logical volumes, mounts, and exports without the need to write any code.

Throughout this documentation, we provide comprehensive guidance on each of these extensibility options, ensuring that you have the
knowledge and tools needed to tailor the SAP Deployment Automation Framework to your specific deployment needs.

Note

If you fork the source code repository, you must maintain your fork of the code. You must also merge the changes from the source code repository into your fork of the code whenever there is a new release of the SDAF codebase.

Executing your own Ansible playbooks as part of the Azure DevOps orchestration

You can implement your own Ansible playbooks, which are automatically called as part of the Azure DevOps 'OS Configuration and SAP Installation' pipeline.

The Ansible playbooks must be located in a folder called 'Ansible' in the root folder of your configuration repository. They're called with the same parameter files as the SDAF playbooks, so you have access to all the configuration.

The Ansible playbooks must be named according to the following naming convention:

'Playbook name_pre' for playbooks to be run before the SDAF playbook and 'Playbook name_post' for playbooks to be run after the SDAF
playbook.

| Playbook name | Playbook name for 'pre' tasks | Playbook name for 'post' tasks |
| --- | --- | --- |
| playbook_01_os_base_config.yaml | playbook_01_os_base_config_pre.yaml | playbook_01_os_base_config_post.yaml |
| playbook_02_os_sap_specific_config.yaml | playbook_02_os_sap_specific_config_pre.yaml | playbook_02_os_sap_specific_config_post.yaml |
| playbook_03_bom_processing.yaml | playbook_03_bom_processing_pre.yaml | playbook_03_bom_processing_post.yaml |
| playbook_04_00_00_db_install.yaml | playbook_04_00_00_db_install_pre.yaml | playbook_04_00_00_db_install_post.yaml |
| playbook_04_00_01_db_ha.yaml | playbook_04_00_01_db_ha_pre.yaml | playbook_04_00_01_db_ha_post.yaml |
| playbook_05_00_00_sap_scs_install.yaml | playbook_05_00_00_sap_scs_install_pre.yaml | playbook_05_00_00_sap_scs_install_post.yaml |
| playbook_05_01_sap_dbload.yaml | playbook_05_01_sap_dbload_pre.yaml | playbook_05_01_sap_dbload_post.yaml |
| playbook_05_02_sap_pas_install.yaml | playbook_05_02_sap_pas_install_pre.yaml | playbook_05_02_sap_pas_install_post.yaml |
| playbook_05_03_sap_app_install.yaml | playbook_05_03_sap_app_install_pre.yaml | playbook_05_03_sap_app_install_post.yaml |
| playbook_05_04_sap_web_install.yaml | playbook_05_04_sap_web_install_pre.yaml | playbook_05_04_sap_web_install_post.yaml |
| playbook_08_00_00_post_configuration_actions.yaml | playbook_08_00_00_post_configuration_actions_pre.yml | playbook_08_00_00_post_configuration_actions_post.yml |

Note

The playbook_08_00_00_post_configuration_actions.yaml step has no SDAF-provided roles/tasks; it's only there to facilitate _pre and _post hooks after SDAF has completed the installation and configuration.

Sample Ansible playbook


YAML

---
# /*---------------------------------------------------------------------------8
# |                                                                            |
# |                     Run commands on all remote hosts                       |
# |                                                                            |
# +------------------------------------4--------------------------------------*/

- hosts: "{{ sap_sid | upper }}_DB :
          {{ sap_sid | upper }}_SCS :
          {{ sap_sid | upper }}_ERS :
          {{ sap_sid | upper }}_PAS :
          {{ sap_sid | upper }}_APP :
          {{ sap_sid | upper }}_WEB"

  name: "Examples on how to run commands on remote hosts"
  gather_facts: true
  tasks:

    - name: "Calculate information about the OS distribution"
      ansible.builtin.set_fact:
        distro_family: "{{ ansible_os_family | upper }}"
        distribution_id: "{{ ansible_distribution | lower ~ ansible_distribution_major_version }}"
        distribution_full_id: "{{ ansible_distribution | lower ~ ansible_distribution_version }}"

    - name: "Show information"
      ansible.builtin.debug:
        msg:
          - "Distro family: {{ distro_family }}"
          - "Distribution id: {{ distribution_id }}"
          - "Distribution full id: {{ distribution_full_id }}"

    - name: "Show how to run a command on all remote hosts"
      ansible.builtin.command: "whoami"
      register: whoami_results

    - name: "Show results"
      ansible.builtin.debug:
        var: whoami_results
        verbosity: 0

    - name: "Show how to run a command on just the 'SCS' and 'ERS' hosts"
      ansible.builtin.command: "whoami"
      register: whoami_results
      when:
        - "'scs' in supported_tiers or 'ers' in supported_tiers"
...

Updating the user and group IDs (Linux)


If you want to change the user and group IDs used by the framework, you can add the following section to the sap-parameters.yaml file.

YAML

# User and group IDs


sapadm_uid: "3000"
sidadm_uid: "3100"
sapinst_gid: "300"
sapsys_gid: "400"

You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.

Terraform

configuration_settings = {
sapadm_uid = "3000",
sidadm_uid = "3100",
sapinst_gid = "300",
sapsys_gid = "400"
}

Adding custom host names for instances (Linux)


In addition to the host names generated by the framework, you can add custom host names for the instances in your SAP deployment. To
do so, add the following section to the sap-parameters.yaml file.

YAML

custom_scs_virtual_hostname: "myscshostname"
custom_ers_virtual_hostname: "myershostname"
custom_db_virtual_hostname: "mydbhostname"
custom_pas_virtual_hostname: "mypashostname"
custom_app_virtual_hostname: "myapphostname"

You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.

Terraform

configuration_settings = {
custom_scs_virtual_hostname = "myscshostname",
custom_ers_virtual_hostname = "myershostname",
custom_db_virtual_hostname = "mydbhostname",
custom_pas_virtual_hostname = "mypashostname",
custom_app_virtual_hostname = "myapphostname"
}

Adding custom repositories (Linux)


If you need to register extra Linux package repositories to the Virtual Machines deployed by the framework, you can add the following
section to the sap-parameters.yaml file.

In this example, the repository 'epel' is registered on all the hosts in your SAP deployment that are running RedHat 8.2.

YAML

custom_repos:
  redhat8.2:
    - { tier: 'ha', repo: 'epel', url: 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm', state: 'present' }
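
After the playbooks run, you can verify the registration on one of the target hosts. This is a minimal check, assuming the RedHat 8.2 'epel' example above:

Bash

# List the enabled repositories and confirm 'epel' is among them
sudo yum repolist enabled | grep -i epel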

Adding custom packages (Linux)


If you need to install more Linux packages to the Virtual Machines deployed by the framework, you can add the following section to the
sap-parameters.yaml file.

In this example, the package 'openssl' is installed on all the hosts in your SAP deployment that are running SUSE Enterprise Linux for SAP
Applications version 15.3.

YAML

custom_packages:
sles_sap15.3:
- { tier: 'os', package: 'openssl', node_tier: 'all', state: 'present' }

If you want to install a package on a specific server type (app, ers, pas, scs, hana), you can add the following section to the
sap-parameters.yaml file.

YAML

custom_packages:
sles_sap15.3:
- { tier: 'ha', package: 'pacemaker', node_tier: 'hana', state: 'present' }
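
To confirm a package was installed on the relevant hosts, you can query the package manager directly. A minimal sketch for the two examples above; adjust the package names to your configuration:

Bash

# SUSE: confirm 'openssl' is installed
zypper se -i openssl

# HANA nodes: confirm 'pacemaker' is installed
rpm -q pacemaker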

Adding custom kernel parameters (Linux)


You can extend the SAP Deployment Automation Framework by adding custom kernel parameters to the SDAF installation.

When you add the following section to the sap-parameters.yaml file, the parameter 'fs.suid_dumpable' is set to 0 on all the hosts in your
SAP deployment.

YAML

custom_parameters:
common:
- { tier: 'os', node_tier: 'all', name: 'fs.suid_dumpable', value: '0', state: 'present' }
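
You can verify that the kernel parameter was applied on a host by querying it with sysctl:

Bash

# 0 is expected after applying the example above
sysctl fs.suid_dumpable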

Adding custom services (Linux)


If you need to manage other services on the Virtual Machines deployed by the framework, you can add the following section to the sap-
parameters.yaml file.

In this example, the 'firewalld' service is stopped and disabled on all the hosts in your SAP deployment that are running RedHat 7.x.

YAML

custom_services:
redhat7:
- { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'stopped' }
- { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'disabled' }
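
To confirm the service state on a RedHat 7.x host, you can query systemd:

Bash

# 'inactive' and 'disabled' are expected after applying the example above
systemctl is-active firewalld
systemctl is-enabled firewalld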

Adding custom logical volumes (Linux)


You can extend the SAP Deployment Automation Framework by adding logical volumes based on extra disks in your SDAF installation.

When you add the following section to the sap-parameters.yaml file, a logical volume 'lv_custom' is created on all virtual machines
that have a disk named 'custom' in your SAP deployment. A filesystem is created on the logical volume and mounted at '/custompath'.

YAML

custom_logical_volumes:
- tier: 'sapos'
node_tier: 'all'
vg: 'vg_custom'
lv: 'lv_custom'
size: '100%FREE'
fstype: 'xfs'
path: '/custompath'

Note

To use this functionality, you need to add an extra disk named 'custom' to one or more of your virtual machines. For more
information, see Custom disk sizing.

You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.

Terraform

configuration_settings = {
  custom_logical_volumes = [
    {
      tier      = "sapos"
      node_tier = "all"
      vg        = "vg_custom"
      lv        = "lv_custom"
      size      = "100%FREE"
      fstype    = "xfs"
      path      = "/custompath"
    }
  ]
}
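
On a virtual machine that has the 'custom' disk, you can verify the result with standard LVM tooling:

Bash

# Show the logical volumes in the 'vg_custom' volume group
sudo lvs vg_custom

# Confirm the filesystem is mounted at the expected path
df -h /custompath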

Adding custom mount (Linux)


You can extend the SAP Deployment Automation Framework by mounting extra mount points in your installation.

When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is mounted from an NFS share on
'xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom'.

YAML

custom_mounts:
- path: "/usr/custom"
opts: "vers=4,minorversion=1,sec=sys"
mount: "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom"
target_nodes: "scs,pas,app"

The target_nodes attribute defines which nodes have the mount defined. Use 'all' if you want all nodes to have the mount defined.

You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.

Terraform
configuration_settings = {
custom_mounts = [
{
path = "/usr/custom",
opts = "vers=4,minorversion=1,sec=sys",
mount = "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom",
target_nodes = "scs,pas,app"
}
]
}
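
On the SCS, PAS, and app servers from the example, you can verify the NFS mount and its options:

Bash

# Show the mount source, filesystem type, and mount options
findmnt /usr/custom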

Adding custom export (Linux)


You can extend the SAP Deployment Automation Framework by adding extra folders to be exported from the Central Services virtual
machine.

When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is exported from the Central Services virtual
machine and available via NFS.

YAML

custom_exports:
  path: "/usr/custom"

You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.

Terraform

configuration_settings = {
  custom_exports = [
    {
      path = "/usr/custom"
    }
  ]
}

Note

This applies only to deployments with NFS_Provider set to 'NONE', because in that configuration the Central Services server acts as
the NFS server.

Custom stripe sizes (Linux)

If you want to change the stripe sizes used by the framework when creating the disks, you can add the following section to the
sap-parameters.yaml file with the values you want.

YAML

# Stripe sizes

hana_data_stripe_size: 256
hana_log_stripe_size: 64

db2_log_stripe_size: 64
db2_data_stripe_size: 256
db2_temp_stripe_size: 128

sybase_data_stripe_size: 256
sybase_log_stripe_size: 64
sybase_temp_stripe_size: 128

oracle_data_stripe_size: 256
oracle_log_stripe_size: 128
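
To check the stripe settings that were actually applied, you can add the stripe columns to the lvs output. This is a generic check; the volume group and logical volume names depend on your database type:

Bash

# Show stripe count and stripe size for all logical volumes
sudo lvs -o +stripes,stripe_size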

Custom volume sizes (Linux)

If you want to change the default volume sizes used by the framework, you can add the following section to the sap-parameters.yaml
file with the values you want.

YAML

sapmnt_volume_size: 32g
usrsap_volume_size: 32g
hanashared_volume_size: 32g

Next step
Configure custom naming
Configure the Control Plane Web Application credentials
Article • 12/07/2023

As a part of the SAP automation framework control plane, you can optionally create an
interactive web application that assists you in creating the required configuration files
and deploying SAP workload zones and systems using Azure Pipelines.

Create an app registration


If you would like to use the web app, you must first create an app registration for
authentication purposes. Open the Azure Cloud Shell and execute the following
commands:

Windows

Replace MGMT with your environment as necessary.

PowerShell

Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'

$TF_VAR_app_registration_app_id=(az ad app create `
    --display-name $region_code-webapp-registration `
    --enable-id-token-issuance true `
    --sign-in-audience AzureADMyOrg `
    --required-resource-accesses ./manifest.json `
    --query "appId").Replace('"',"")

$TF_VAR_webapp_client_secret=(az ad app credential reset `
    --id $TF_VAR_app_registration_app_id --append `
    --query "password").Replace('"',"")

Write-Host "App registration ID: $TF_VAR_app_registration_app_id"
Write-Host "App registration password: $TF_VAR_webapp_client_secret"

rm ./manifest.json

Persist the values in the control plane variable group for later use.


Variable name Value Note

APP_REGISTRATION_APP_ID App registration ID from last step

WEB_APP_CLIENT_SECRET App registration password from last step Mark as secret

Deploy via Azure Pipelines


For full instructions on setting up the web app using Azure DevOps, see Use SAP on
Azure Deployment Automation Framework from Azure DevOps Services

Summary of steps required to access the web app after deploying the control plane:
1. Update the app registration reply URLs.
2. Assign the reader role with the subscription scope to the app service system
assigned managed identity.
3. Run the web app deployment pipeline.
4. (Optionally) add another access policy to the app service.

Deploy via Azure CLI (Cloud Shell)


For full instructions on setting up the web app using the Azure CLI, see Deploy the
control plane
Accessing the web app
By default, there's no inbound public internet access to the web app apart from the deployer virtual network. To allow other access
to the web app, navigate to the Azure portal. In the deployer resource group, find the web app. Then, under Settings on the
left-hand side, select Networking. From there, select Access restriction and add any allow or deny rules you would like. For more
information on configuring access restrictions, see Set up Azure App Service access restrictions.
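
If you prefer scripting over the portal, the same kind of access restriction can be added with the Azure CLI. The resource group, app name, and IP range below are placeholders:

Azure CLI

az webapp config access-restriction add --resource-group <DeployerResourceGroup> \
    --name <webAppName> --rule-name AllowMyIP --action Allow \
    --ip-address 198.51.100.0/24 --priority 100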

You'll also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service
resource. On the left-hand side, select "Identity". In the "System assigned" tab, select "Azure role assignments" > "Add role
assignment". Select "Subscription" as the scope and "Reader" as the role. Then select "Save". Without this step, the web app
dropdown functionality won't work.

You can sign in and visit the web app by following the URL from earlier or selecting
browse inside the app service resource. With the web app, you're able to configure SAP
workload zones and system infrastructure. Select download to obtain a parameter file of
the workload zone or system you specified, for use in the later deployment steps.

Using the web app


The web app allows you to create SAP workload zone objects and system infrastructure objects. These objects are essentially another
representation of the Terraform configuration file. If you deploy using Azure Pipelines, you can deploy these workload zones and
system infrastructures right from the web app. If you deploy using the Azure CLI, you can download the parameter file for any
landscape or system object you create and use it in your command-line deployments.

Creating a landscape or system object from scratch


1. Navigate to the "Workload zones" or "Systems" tab at the top of the website.
2. Select "Create New" in the bottom left corner.
3. Fill out the required parameters in the "Basic" and "Advanced" tabs, and any other
parameters you desire.
4. Certain parameters are dropdowns populated with existing Azure resources.

   If no results are shown for a dropdown, you probably need to specify another dropdown before you can see any options. Or, see
   step 2 above regarding the system-assigned managed identity.
   The subscription parameter must be specified before any other dropdown functionality is enabled.
   The network_arm_id parameter must be specified before any subnet dropdown functionality is enabled.

5. Select Submit in the bottom-left corner.

Creating a workload zone or system object from a file


1. Navigate to the "File" tab at the top of the website.
2. Your options are

Create a new file from scratch there in browser.


Import an existing.tfvars file, and (optionally) edit it before saving.
Use an existing template, and (optionally) edit it before saving.

3. Make sure your file conforms to the correct naming conventions.


4. Next to the file you would like to convert to a workload zone or system object,
select "Convert."
5. The workload zone or system object appears in its respective tab.

Deploying a workload zone or system object (Azure Pipelines deployment)
1. Navigate to the Workload zones or Systems tab.
2. Next to the workload zone or system you would like to deploy, select "Deploy."

If you would like to deploy a file, first convert it to a workload zone or system
object.

3. Specify the necessary parameters, and confirm it's the correct object.
4. Select deploy.
5. The web app generates a 'tfvars' file from the object, updates your Azure DevOps
repository, and kicks off the workload zone or system (infrastructure) pipeline. You
can monitor the deployment in the Azure DevOps Portal.
Configure external tools to use with SAP Deployment Automation Framework
Article • 09/03/2023

This article describes how to configure external tools to use SAP Deployment
Automation Framework.

Configure Visual Studio Code


Follow these steps to configure Visual Studio Code.

Copy the SSH key from the key vault


1. Sign in to the Azure portal .

2. Select or search for Key vaults.

3. On the Key vault page, find the deployer key vault. The name starts with
MGMT[REGION]DEP00user . Filter by Resource group or Location, if necessary.

4. On the Settings section in the left pane, select Secrets.

5. Find and select the secret that contains sshkey. It might look like MGMT-[REGION]-
DEP00-sshkey .

6. On the secret's page, select the current version. Copy the Secret value.

7. Create a new file in Visual Studio Code and copy in the secret value.

8. Save the file where you keep SSH keys. For example, use C:\Users\<your-username>\.ssh\weeu_deployer.ssh. Make sure that you
   save the file without an extension.
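
As an alternative to the portal steps, you can download the secret with the Azure CLI. The key vault and secret names are placeholders for the names found in the previous steps:

Azure CLI

# Write the SSH key secret to a local file
az keyvault secret show --vault-name <deployerKeyVaultName> --name <sshKeySecretName> \
    --query value --output tsv > weeu_deployer.ssh

# On Linux or macOS, restrict the key file permissions
chmod 600 weeu_deployer.ssh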

After you've downloaded the SSH key for the deployer, you can use it to connect to the
deployer virtual machine.

Get the public IP of the deployer


1. Sign in to the Azure portal .
2. Find the resource group for the deployer. The name starts with MGMT-[REGION_CODE]-DEP00 unless you've deployed the control plane
   by using a custom naming convention.

3. Find the public IP for the deployer. The name should end with -pip . Filter by type,
if necessary.

4. Copy the IP address.
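
You can also look up the public IP with the Azure CLI; the resource group name is a placeholder:

Azure CLI

# List public IP addresses whose names end with '-pip'
az network public-ip list --resource-group <DeployerResourceGroup> \
    --query "[?ends_with(name, '-pip')].{name:name, address:ipAddress}" --output table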

Install the Remote Development extension


1. Open the Extensions window by selecting View > Extensions or by selecting
Ctrl+Shift+X.

2. Ensure that the Remote Development extension is installed.

Connect to the deployer


1. Open the command palette by selecting View > Command Palette or by selecting
Ctrl+Shift+P. Enter Connect to host. You can also select the icon in the lower-left
corner of Visual Studio Code and select Connect to host.

2. Select Add New SSH Host.

Bash

ssh -i "C:\Users\<your-username>\.ssh\weeu_deployer.ssh" azureadm@<IP_Address>

Note
Change <IP_Address> to reflect the deployer IP.

3. Select Connect. Select Linux when you're prompted for the target operating
system, and accept the remaining dialogs (such as key and trust).

4. When connected, select Open Folder and open the /Azure_SAP_Automated_Deployment folder.

Next step
Configure the SAP workload zone
Get started with SAP Deployment Automation Framework
Article • 03/12/2024

Get started quickly with SAP Deployment Automation Framework.

Prerequisites
To get started with SAP Deployment Automation Framework, you need:

An Azure subscription. If you don't have an Azure subscription, you can create a
free account .
An SAP User account with permissions to download the SAP software in your
Azure environment. For more information on S-User, see SAP S-User .
An Azure CLI installation.
A user-assigned managed identity (MSI) or a service principal to use for the control plane deployment.
A user-assigned managed identity (MSI) or a service principal to use for the workload zone deployment.
An ability to create an Azure DevOps project if you want to use Azure DevOps for
deployment.

Some of the prerequisites might already be installed in your deployment environment.


Both Azure Cloud Shell and the deployer come with Terraform and the Azure CLI
installed.

Create a service principal


The SAP automation deployment framework uses service principals for deployment.

When you choose a name for your service principal, make sure that the name is unique within your Azure tenant. Make sure to use an
account with permissions to create service principals when running the script.

1. Create the service principal with Contributor permissions.

cloudshell

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$ARM_SUBSCRIPTION_ID" --name="$control_plane_env_code-Deployment-Account"

Review the output. For example:

JSON

{
"appId": "<AppId>",
"displayName": "<environment>-Deployment-Account ",
"name": "<AppId>",
"password": "<AppSecret>",
"tenant": "<TenantId>"
}

2. Copy the output details. Make sure to save the values for appId, password, and tenant.

The output maps to the following parameters. You use these parameters in later
steps, with automation commands.


Parameter input name Output name

spn_id appId

spn_secret password

tenant_id tenant

3. Optionally, assign the User Access Administrator role to the service principal.

cloudshell

export appId="<appId>"

az role assignment create --assignee $appId --role "User Access Administrator" --scope /subscriptions/$ARM_SUBSCRIPTION_ID

Important

If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions using the automation framework.
Create a user-assigned identity
The SAP automation deployment framework can also use a user-assigned managed identity (MSI) for the deployment. Make sure to use an
account with permissions to create managed identities when running the script that creates the identity.

1. Create the managed identity.

cloudshell

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"

az identity create --name ${control_plane_env_code}-Deployment-Identity --resource-group <ExistingResourceGroup>

Review the output. For example:

JSON

{
"clientId": "<appId>",
"id": "<armId>",
"location": "<location>",
"name": "${control_plane_env_code}-Deployment-Identity",
"principalId": "<objectId>",
"resourceGroup": "<ExistingResourceGroup>",
"systemData": null,
"tags": {},
"tenantId": "<TenantId>",
"type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}

2. Copy the output details.

The output maps to the following parameters. You use these parameters in later
steps, with automation commands.


Parameter input name Output name

app_id appId

msi_id armId

3. Assign the Contributor role to the identity.

cloudshell

export appId="<appId>"

az role assignment create --assignee $appId --role "Contributor" --scope /subscriptions/$ARM_SUBSCRIPTION_ID

4. Optionally, assign the User Access Administrator role to the identity.

cloudshell

export appId="<appId>"

az role assignment create --assignee $appId --role "User Access Administrator" --scope /subscriptions/$ARM_SUBSCRIPTION_ID

Important

If you don't assign the User Access Administrator role to the managed identity, you
can't assign permissions using the automation framework.

Pre-flight checks
You can use the following script to perform pre-flight checks. The script performs the
following checks and tests:

Checks if the service principal has the correct permissions to create resources in the subscription.
Checks if the service principal has User Access Administrator permissions.
Creates an Azure virtual network.
Creates an Azure key vault with a private endpoint.
Creates an Azure Files NFS share.
Creates an Azure virtual machine with a data disk that uses Premium SSD v2 storage.
Checks access to the required URLs by using the deployed virtual machine.

PowerShell

$sdaf_path = Get-Location
if ( $PSVersionTable.Platform -eq "Unix") {
if ( -Not (Test-Path "SDAF") ) {
$sdaf_path = New-Item -Path "SDAF" -Type Directory
}
}
else {
$sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
if ( -not (Test-Path $sdaf_path)) {
New-Item -Path $sdaf_path -Type Directory
}
}

Set-Location -Path $sdaf_path

git clone https://github.com/Azure/sap-automation.git

cd sap-automation
cd deploy
cd scripts

if ( $PSVersionTable.Platform -eq "Unix") {
    ./Test-SDAFReadiness.ps1
}
else {
    .\Test-SDAFReadiness.ps1
}

Use SAP Deployment Automation Framework from Azure DevOps Services
Using Azure DevOps streamlines the deployment process. Azure DevOps provides
pipelines that you can run to perform the infrastructure deployment and the
configuration and SAP installation activities.

You can use Azure Repos to store your configuration files. Azure Pipelines provides
pipelines, which can be used to deploy and configure the infrastructure and the SAP
application.

Sign up for Azure DevOps Services


To use Azure DevOps Services, you need an Azure DevOps organization. An organization
is used to connect groups of related projects. Use your work or school account to
automatically connect your organization to your Microsoft Entra ID. To create an
account, open Azure DevOps and either sign in or create a new account.

Create the SAP Deployment Automation Framework environment with Azure DevOps
You can use the following script to do a basic installation of Azure DevOps Services for
SAP Deployment Automation Framework.

Open PowerShell ISE and copy the following script and update the parameters to match
your environment.

PowerShell

$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_CONTROL_PLANE_CODE = "MGMT"
$Env:SDAF_WORKLOAD_ZONE_CODE = "DEV"
$Env:SDAF_ControlPlaneSubscriptionID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID="zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"

$UniqueIdentifier = Read-Host "Please provide an identifier that makes the service principal names unique, for instance a project code"

$confirmation = Read-Host "Do you want to create a new Application registration (needed for the Web Application) y/n?"
if ($confirmation -eq 'y') {
    $Env:SDAF_APP_NAME = $UniqueIdentifier + " SDAF Control Plane"
}
else {
    $Env:SDAF_APP_NAME = Read-Host "Please provide the Application registration name"
}

$confirmation = Read-Host "Do you want to create a new Service Principal


for the Control plane y/n?"
if ($confirmation -eq 'y') {
$Env:SDAF_MGMT_SPN_NAME = $UniqueIdentifier + " SDAF " +
$Env:SDAF_CONTROL_PLANE_CODE + " SPN"
}
else {
$Env:SDAF_MGMT_SPN_NAME = Read-Host "Please provide the Control Plane
Service Principal Name"
}

$confirmation = Read-Host "Do you want to create a new Service Principal


for the Workload zone y/n?"
if ($confirmation -eq 'y') {
$Env:SDAF_WorkloadZone_SPN_NAME = $UniqueIdentifier + " SDAF " +
$Env:SDAF_WORKLOAD_ZONE_CODE + " SPN"
}
else {
$Env:SDAF_WorkloadZone_SPN_NAME = Read-Host "Please provide the
Workload Zone Service Principal Name"
}
if ( $PSVersionTable.Platform -eq "Unix") {
    if ( -not (Test-Path "SDAF") ) {
        $sdaf_path = New-Item -Path "SDAF" -Type Directory
    }
}
else {
    $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
    if ( -not (Test-Path $sdaf_path) ) {
        New-Item -Path $sdaf_path -Type Directory
    }
}

Set-Location -Path $sdaf_path

if ( Test-Path "New-SDAFDevopsProject.ps1") {
remove-item .\New-SDAFDevopsProject.ps1
}

Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsProject.ps1 -OutFile .\New-SDAFDevopsProject.ps1 ; .\New-SDAFDevopsProject.ps1

Run the script and follow the instructions. The script opens browser windows for
authentication and for performing tasks in the Azure DevOps project.

You can choose to either run the code directly from GitHub or you can import a copy of
the code into your Azure DevOps project.

To confirm that the project was created, go to the Azure DevOps portal and select the
project. Ensure that the repo was populated and that the pipelines were created.

Important

Run the following steps on your local workstation. Also ensure that you have the
latest Azure CLI installed by running the az upgrade command.

For more information on how to configure Azure DevOps for SAP Deployment
Automation Framework, see Configure Azure DevOps for SAP Deployment Automation
Framework.
Create the SAP Deployment Automation Framework environment without Azure DevOps
You can run SAP Deployment Automation Framework from a virtual machine in Azure.
The following steps describe how to create the environment.

Important

Ensure that the virtual machine is using either a system-assigned or user-assigned identity with permissions on the subscription to
create resources.

Ensure the virtual machine has the following prerequisites installed:

git
jq
unzip
virtualenv (if running on Ubuntu)

You can install the prerequisites on an Ubuntu virtual machine by using the following
command:

Bash

sudo apt-get install -y git jq unzip virtualenv

You can then install the deployer components by using the following commands:

Bash

wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh

# Source the new variables

. /etc/profile.d/deploy_server.sh

Samples
The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample
configuration files to start testing the deployment automation framework. You can copy
them by using the following commands:

Bash

cd ~/Azure_SAP_Automated_Deployment

cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment

Next step
Plan the deployment
Upgrade SAP Deployment Automation Framework
Article • 12/21/2023

SAP Deployment Automation Framework is updated regularly. This article describes how
to update the framework.

Prerequisites
Before you upgrade the framework, make sure that you back up the remote state files
from the tfstate storage account in the SAP library.

Upgrade the pipelines


You can upgrade the pipeline definitions by running the Upgrade Pipelines pipeline.

Create the Upgrade Pipelines pipeline manually


If you don't have the Upgrade Pipelines pipeline, you can create it manually.

Go to the pipelines folder in your repository and create the pipeline definition by
selecting the file from the New menu. Name the file 21-update-pipelines.yml and paste
the following content into the file.

YAML

---
# /*---------------------------------------------------------------------------8
# |                                                                            |
# |                  This pipeline updates the ADO repository                  |
# |                                                                            |
# +------------------------------------4--------------------------------------*/

name: Update Azure DevOps repository from GitHub $(branch) branch

parameters:
  - name: repository
    displayName: Source repository
    type: string
    default: https://github.com/Azure/sap-automation-bootstrap.git

  - name: branch
    displayName: Source branch to update from
    type: string
    default: main

  - name: force
    displayName: Force the update
    type: boolean
    default: false

trigger: none

pool:
  vmImage: ubuntu-latest

variables:
  - name: repository
    value: ${{ parameters.repository }}
  - name: branch
    value: ${{ parameters.branch }}
  - name: force
    value: ${{ parameters.force }}
  - name: log
    value: logfile_$(Build.BuildId)

stages:
  - stage: Update_DEVOPS_repository
    displayName: Update DevOps pipelines
    jobs:
      - job: Update_DEVOPS_repository
        displayName: Update DevOps pipelines
        steps:
          - checkout: self
            persistCredentials: true
          - bash: |
              #!/bin/bash
              green="\e[1;32m" ; reset="\e[0m" ; red="\e[1;31m"

              git config --global user.email "$(Build.RequestedForEmail)"
              git config --global user.name "$(Build.RequestedFor)"
              git config --global pull.ff false
              git config --global pull.rebase false

              git remote add remote-repo $(repository) >> /tmp/$(log) 2>&1
              git fetch --all --tags >> /tmp/$(log) 2>&1
              git checkout --quiet origin/main

              git checkout --quiet remote-repo/main ./pipelines/01-deploy-control-plane.yml
              git checkout --quiet remote-repo/main ./pipelines/02-sap-workload-zone.yml
              git checkout --quiet remote-repo/main ./pipelines/03-sap-system-deployment.yml
              git checkout --quiet remote-repo/main ./pipelines/04-sap-software-download.yml
              git checkout --quiet remote-repo/main ./pipelines/05-DB-and-SAP-installation.yml
              git checkout --quiet remote-repo/main ./pipelines/10-remover-terraform.yml
              git checkout --quiet remote-repo/main ./pipelines/11-remover-arm-fallback.yml
              git checkout --quiet remote-repo/main ./pipelines/12-remove-control-plane.yml
              git checkout --quiet remote-repo/main ./pipelines/20-update-repositories.yml
              git checkout --quiet remote-repo/main ./pipelines/22-sample-deployer-configuration.yml
              git checkout --quiet remote-repo/main ./pipelines/21-update-pipelines.yml
              return_code=$?

              if [[ "$(force)" == "True" ]]; then
                echo "running git push to ADO with force option"
                if ! git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push --force origin HEAD:$(branch) >> /tmp/$(log) 2>&1
                then
                  echo -e "$red--- Failed to push ---$reset"
                  exit 1
                fi
              else
                git commit -m "Update ADO repository from GitHub $(branch) branch" -a
                echo "running git push to ADO"
                if ! git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin HEAD:$(branch) >> /tmp/$(log) 2>&1
                then
                  echo -e "$red--- Failed to push ---$reset"
                  exit 1
                fi
              fi

              # If the pull already failed, keep that error code
              if [ 0 != $return_code ]; then
                return_code=$?
              fi

              exit $return_code
            displayName: Update DevOps pipelines
            env:
              SYSTEM_ACCESSTOKEN: $(System.AccessToken)
            failOnStderr: true
...
Commit the changes to save the file to the repository and create the pipeline in Azure
DevOps.

Create the Upgrade Pipelines pipeline by selecting New Pipeline from the Pipelines
section. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings.


Setting Value

Branch Main

Path deploy/pipelines/21-update-pipelines.yml

Name Upgrade pipelines

Save the pipeline. To see the Save option, select the chevron next to Run. Go to the
Pipelines section and select the pipeline. Rename the pipeline to Upgrade pipelines by
selecting Rename/Move from the ellipsis menu on the right.

Run the pipeline to upgrade all pipeline definitions.

Upgrade the control plane


The control plane is the first component you need to upgrade. To upgrade the control
plane, rerun the Deploy Control Plane pipeline or rerun the deploy_controlplane.sh
script.

Upgrade to version 3.8.1


Run the following commands before you perform the upgrade of the control plane.

Azure CLI

az login
az account set --subscription <subscription id>

az vm run-command invoke -g <DeployerResourceGroup> -n <deployerVMName> --command-id RunShellScript --scripts "sudo rm /etc/profile.d/deploy_server.sh"

az vm extension delete -g <DeployerResourceGroup> --vm-name <deployerVMName> -n configure_deployer

These commands remove the old deployer configuration and allow the new configuration to be applied.

Private DNS considerations


If you're using Private DNS zones from the control plane, run the following command
before you perform the upgrade.

Azure CLI

az network private-dns zone create --name privatelink.vaultcore.azure.net --resource-group <SAPLibraryResourceGroup>

Agent sign-in
You can also configure the Azure DevOps agent to perform the sign-in to Azure by
using the service principal. Add the following variable to the variable group that's used
by the control plane pipeline, which is typically SDAF-MGMT .


Name Value

Logon_Using_SPN true

Upgrading Terraform on the agents


You can upgrade Terraform on the agents by running the following script:

Bash

tfversion="1.6.5"

# Terraform installation directories
tf_base=/opt/terraform
tf_dir="${tf_base}/terraform_${tfversion}"
tf_bin="${tf_base}/bin"
tf_zip="terraform_${tfversion}_linux_amd64.zip"

#
# Install terraform for all users
#
sudo mkdir -p \
    "${tf_dir}" \
    "${tf_bin}"
wget -nv -O /tmp/"${tf_zip}" "https://releases.hashicorp.com/terraform/${tfversion}/${tf_zip}"
sudo unzip -o /tmp/"${tf_zip}" -d "${tf_dir}"
sudo ln -vfs "../$(basename "${tf_dir}")/terraform" "${tf_bin}/terraform"

Upgrading the SAP Automation code base on the deployer
You can upgrade the SAP Automation code base on the deployer virtual machines by
running the following script:

Bash

cd ~/Azure_SAP_Automated_Deployment/sap-automation

git pull

cd ~/Azure_SAP_Automated_Deployment/sap-automation-samples

git pull

Upgrade the workload zone


The workload zone is the second component you need to upgrade. To upgrade the workload zone, rerun the SAP workload zone
deployment pipeline or rerun the install_workloadzone.sh script.

Upgrade to version 3.8.1


Prepare for the upgrade by first retrieving the Private DNS zone resource ID and the key
vault private endpoint name by running the following commands:
Azure CLI

az network private-dns zone show --name privatelink.vaultcore.azure.net --resource-group <SAPLibraryResourceGroup> --query id --output tsv

az network private-endpoint list --resource-group <WorkloadZoneResourceGroup> --query "[?contains(name,'keyvault')].{Name:name} | [0] | Name" --output tsv

If you're using private endpoints, run the following command before you perform the
upgrade to update the DNS settings for the private endpoint. Replace the
privateDNSzoneResourceId and keyvaultEndpointName placeholders with the values

retrieved in the previous step.

Azure CLI

az network private-endpoint dns-zone-group create --resource-group <WorkloadZoneResourceGroup> --endpoint-name <keyvaultEndpointName> --name privatelink.vaultcore.azure.net --private-dns-zone <privateDNSzoneResourceId> --zone-name privatelink.vaultcore.azure.net

Agent sign-in for workload zone and system deployments


You can also configure the Azure DevOps agent to perform the sign-in to Azure by using the service principal. Add the following
variable to the variable group that's used by the workload zone pipeline, which is typically SDAF-DEV.


Name Value

Logon_Using_SPN true

Next step
Configure the control plane
Troubleshooting the SAP Deployment Automation Framework
Article • 01/08/2024

Within the SAP Deployment Automation Framework (SDAF), we recognize that there are
many moving parts. This article is intended to help you troubleshoot issues that you can
encounter.

Control plane deployment


The control plane deployment consists of the following steps:

1. Deploy the deployer infrastructure.


2. Add the Service Principal details to the Deployer key vault.
3. Deploy the SAP Library infrastructure
4. Migrate the Terraform state for the Deployer to the SAP Library.
5. Migrate the Terraform state for the SAP Library to the SAP Library.

To track the progress of the deployment, the state is persisted in a file in the
.sap_deployment_automation folder in the WORKSPACES directory.


Step  What is being deployed  State file location

0  Deployment infrastructure (virtual machine, key vault, Firewall, Bastion)  local

1  Service Principal details persisted in the deployer's key vault  local

2  SAP Library infrastructure (storage accounts, Private DNS)  local

3  Deployer Terraform state migrated to remote storage  SAP Library

4  SAP Library Terraform state migrated to remote storage  SAP Library

Deployment
This section describes how to troubleshoot issues that you can encounter when
performing deployments using the SAP Deployment Automation Framework.
Unable to access keyvault: XXXXX error
If you see an error similar to the following error when running the deployment:

text

Unable to access keyvault: XXXXYYYYDEP00userBEB


Please ensure the key vault exists.

This error indicates that the specified key vault doesn't exist or that the deployment
environment is unable to access it.

Depending on the deployment stage, you can resolve this issue in the following ways:

You can either add the IP of the environment from which you're executing the
deployment (recommended) or you can allow public access to the key vault. For more
information about controlling access to the key vault, see Allow public access to a key
vault.

The following variables are used to configure the key vault access:

tfvars

Agent_IP = "10.0.0.5"
public_network_access_enabled = true

Failed to get existing workspaces error


If you see an error similar to the following error when running the deployment:

text

Error: : Error retrieving keys for Storage Account "mgmtweeutfstate###":


azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for
request to
https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/MGMT-WEEU-
SAP_LIBRARY/providers/Microsoft.Storage/storageAccounts/mgmtweeutfstate###/l
istKeys?api-version=2021-01-01
: StatusCode=400 -- Original Error: adal: Refresh request failed. Status
Code = '400'. Response body:
{"error":"invalid_request","error_description":"Identity not found"}
Endpoint
http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-
01&client_id=yyyyyyyy-yyyy-yyyy-yyyy-
yyyyyyyyyyyy&resource=https%3A%2F%2Fmanagement.azure.com%2F
This error indicates that the credentials used for the deployment don't have access to the storage account. To resolve this issue,
assign the 'Storage Account Contributor' role to the deployment credential on the Terraform state storage account, the resource
group, or the subscription (if feasible).

You can verify if the deployment is being performed using a service principal or a
managed identity by checking the output of the deployment. If the deployment is using
a service principal, the output contains the following section:

text

[set_executing_user_environment_variables]: Identifying the executing


user and client
[set_azure_cloud_environment]: Identifying the executing cloud
environment
[set_azure_cloud_environment]: Azure cloud environment: public
[set_executing_user_environment_variables]: User type:
servicePrincipal
[set_executing_user_environment_variables]: client id: yyyyyyyy-
yyyy-yyyy-yyyy-yyyyyyyyyyyy
[set_executing_user_environment_variables]: Identified login type as
'service principal'
[set_executing_user_environment_variables]: Initializing state with SPN
named: <SPN Name>
[set_executing_user_environment_variables]: exporting environment
variables
[set_executing_user_environment_variables]: ARM environment variables:
ARM_CLIENT_ID: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
ARM_SUBSCRIPTION_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
ARM_USE_MSI: false

Look for the following line in the output: "ARM_USE_MSI: false"

If the deployment is using a managed identity, the output contains the following
section:

text

[set_executing_user_environment_variables]: Identifying the executing


user and client
[set_azure_cloud_environment]: Identifying the executing cloud
environment
[set_azure_cloud_environment]: Azure cloud environment: public
[set_executing_user_environment_variables]: User type:
servicePrincipal
[set_executing_user_environment_variables]: client id:
systemAssignedIdentity
[set_executing_user_environment_variables]: logged in using
'servicePrincipal'
[set_executing_user_environment_variables]: unset ARM_CLIENT_SECRET
[set_executing_user_environment_variables]: ARM environment variables:
ARM_CLIENT_ID: zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz
ARM_SUBSCRIPTION_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
ARM_USE_MSI: true

Look for the following line in the output: "ARM_USE_MSI: true"

You can assign the 'Storage Account Contributor' role to the deployment credential on
the terraform state storage account, the resource group or the subscription (if feasible).
Use the ARM_CLIENT_ID from the deployment output.

cloudshell

export appId="<ARM_CLIENT_ID>"

az role assignment create --assignee ${appId} \
    --role "Storage Account Contributor" \
    --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MGMT-WEEU-SAP_LIBRARY/providers/Microsoft.Storage/storageAccounts/mgmtweeutfstate###

You may also need to assign the reader role to the deployment credential on the
subscription containing the resource group with the Terraform state file. You can do that
with the following command:

cloudshell

export appId="<ARM_CLIENT_ID>"

az role assignment create --assignee ${appId} \
    --role "Reader" \
    --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Private DNS Zone Name 'xxx' wasn't found


If you see an error similar to the following error when running the deployment:

text

Private DNS Zone Name: "privatelink.file.core.windows.net" was not found

or

Private DNS Zone Name: "privatelink.blob.core.windows.net" was not found

or
Private DNS Zone Name: "privatelink.vaultcore.azure.net" was not found

This error indicates that the Private DNS zone listed in the error isn't available. You can
resolve this issue by either creating the Private DNS or providing the configuration for
an existing private DNS Zone. For more information on how to create the Private DNS
Zone, see Create a private DNS zone.

You can specify the details for an existing private DNS zone by using the following
variables:

Terraform

# Resource group name for resource group that contains the private DNS zone
management_dns_resourcegroup_name="<resource group name for the Private DNS
Zone>"

# Subscription ID name for resource group that contains the private DNS zone
management_dns_subscription_id="<subscription id for resource group name for
the Private DNS Zone>"

use_custom_dns_a_registration=false

Rerun the deployment after you made these changes.

OverconstrainedAllocationRequest error
If you see an error similar to the following error when running the deployment:

text

Virtual Machine Name: "devsap01app01":


Code="OverconstrainedAllocationRequest" Message="Allocation failed. VM(s)
with the following constraints cannot be allocated, because the condition is
too restrictive. Please remove some constraints and try again. Constraints
applied are:
- Networking Constraints (such as Accelerated Networking or IPv6)
- VM Size

This error indicates that the selected VM size isn't available using the provided
constraints. To resolve this issue, select a different VM size or a different availability
zone.
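
Before retrying, you can check whether a VM size is restricted for your subscription in the region. The location and size below are examples:

Azure CLI

# The Restrictions column shows any subscription or zone restrictions for the size
az vm list-skus --location westeurope --size Standard_E16ds_v5 --output table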
The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id error
If you see an error similar to the following message when running the deployment:

text

The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id 'yyyyyyyy-


yyyy-yyyy-yyyy-yyyyyyyyyyyy' does not have
authorization or an ABAC condition not fulfilled to perform action
'Microsoft.Authorization/roleAssignments/write' over scope
'/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/DEV-
WEEU-SAP01-X00/providers/Microsoft.Storage/storageAccounts/....

The error indicates that the deployment credential doesn't have 'User Access
Administrator' role on the resource group. To resolve this issue, assign the 'User Access
Administrator' role to the deployment credential on the resource group or the
subscription (if feasible).
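
For example, you could assign the role at the resource group scope with the Azure CLI; the values are placeholders:

Azure CLI

az role assignment create --assignee <appId> \
    --role "User Access Administrator" \
    --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>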

Configuration
This section describes how to troubleshoot issues that you can encounter when
performing configuration using the SAP Deployment Automation Framework.

Task 'ansible.builtin.XXX' has extra params


If you see an error similar to the following message when running the deployment:

text

ERROR! this task 'ansible.builtin.command' has extra params, which is only


allowed in the following modules: set_fact, shell, include_tasks, win_shell,
import_tasks, import_role, include, win_command, command, include_role,
meta, add_host, script, group_by, raw, include_vars

This error indicates that the version of Ansible installed on the agent doesn't support
this task. To resolve this issue, upgrade to the latest version of Ansible on the agent
virtual machine.
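
You can check the installed version first. How you upgrade depends on how Ansible was installed on the agent; for a pip-based installation (an assumption to adapt to your setup), the upgrade might look like this:

Bash

# Show the currently installed Ansible version
ansible --version

# Upgrade a pip-based installation (assumption; adjust to your install method)
pip3 install --upgrade ansible-core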

Software download
This section describes how to troubleshoot issues that you can encounter when
downloading the SAP software using the SAP Deployment Automation Framework.

"HTTP Error 404: Not Found"


This error indicates that the software version is no longer available for download. Open a new GitHub issue to request an update to
the Bill of Materials file, or update the Bill of Materials file yourself and submit a pull request.

Azure DevOps
This section describes how to troubleshoot issues that you can encounter when using
Azure DevOps with the SAP Deployment Automation Framework.

Issues with the Azure Pipelines


If you see an error similar to the following message when running the Azure Pipelines:

text

##[error]Variable group SDAF-MGMT could not be found.


##[error]Bash exited with code '2'.

This error indicates that the configured personal access token doesn't have permissions
to access the variable group. Ensure that the personal access token has the Read &
manage permission for the variable group and that it's still valid. The personal access
token is configured in the Azure DevOps pipeline variable groups either as 'PAT' in the
control plane variable group or as 'WZ_PAT' in the workload zone variable group.

Next step
Configure custom naming
Deploy the control plane
Article • 12/15/2023

The control plane deployment for SAP Deployment Automation Framework consists of the following components:

Deployer
SAP library

Prepare the deployment credentials


SAP Deployment Automation Framework uses service principals for deployments. To
create a service principal for the control plane deployment, use an account that has
permissions to create service principals:

Azure CLI

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"

Important

The name of the service principal must be unique.

Record the output values from the command:

appId
password
tenant
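
If you're scripting the deployment, you can capture these values directly instead of copying them manually. This sketch uses jq, which is listed as a prerequisite elsewhere in this guide:

Azure CLI

spn_json=$(az ad sp create-for-rbac --role="Contributor" \
    --scopes="/subscriptions/<subscriptionID>" \
    --name="<environment>-Deployment-Account" --output json)

appId=$(echo "$spn_json" | jq -r .appId)
password=$(echo "$spn_json" | jq -r .password)
tenant=$(echo "$spn_json" | jq -r .tenant)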

Optionally, assign the following permissions to the service principal:

Azure CLI

az role assignment create --assignee <appId> --role "User Access


Administrator" --scope /subscriptions/<subscriptionID>

If you want to provide the User Access Administrator role scoped to the resource group
only, use the following command:

Azure CLI

az role assignment create --assignee <appId> --role "User Access


Administrator" --scope
/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>

Deploy the control plane


All the artifacts that are required to deploy the control plane are located in GitHub
repositories.

Prepare for the control plane deployment by cloning the repositories using the
following commands:

Bash

mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_

git clone https://github.com/Azure/sap-automation.git sap-automation

git clone https://github.com/Azure/sap-automation-samples.git samples


The sample deployer configuration file MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars is located in the
~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE folder.

The sample SAP library configuration file MGMT-WEEU-SAP_LIBRARY.tfvars is located in the
~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY folder.

You can copy the sample configuration files to start testing the deployment automation
framework.

A minimal Terraform file for the DEPLOYER might look like this example:

Terraform

# The environment value is a mandatory field, it is used for partitioning the environments.
environment = "MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "westeurope"

# management_network_address_space is the address space for the management virtual network
management_network_address_space = "10.10.20.0/25"

# management_subnet_address_prefix is the address prefix for the management subnet
management_subnet_address_prefix = "10.10.20.64/28"

# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet
management_firewall_subnet_address_prefix = "10.10.20.0/26"
firewall_deployment = false

# management_bastion_subnet_address_prefix is the address prefix for the bastion subnet
management_bastion_subnet_address_prefix = "10.10.20.128/26"
bastion_deployment = true

# deployer_enable_public_ip controls if the deployer virtual machines will have public IPs
deployer_enable_public_ip = false

# deployer_count defines how many deployer VMs will be deployed
deployer_count = 1

# use_service_endpoint defines that the management subnets have service endpoints enabled
use_service_endpoint = true

# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = false

# enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled
enable_firewall_for_keyvaults_and_storage = false

# public_network_access_enabled controls if storage account and key vaults have public network access enabled
public_network_access_enabled = true

Note the Terraform variable file locations for future edits during deployment.

A minimal Terraform file for the LIBRARY might look like this example:

Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP.
environment = "MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "westeurope"

# Defines the DNS suffix for the resources
dns_label = "azure.contoso.net"

# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = false

Note the Terraform variable file locations for future edits during deployment.

Run the following command to create the deployer and the SAP library. The command
adds the service principal details to the deployment key vault.

Windows

You can't perform a control plane deployment from Windows.

Manually configure a virtual machine as an SDAF deployer using Azure Bastion
To connect to the deployer:
1. Sign in to the Azure portal .

2. Go to the resource group that contains the deployer virtual machine (VM).

3. Connect to the VM by using Azure Bastion.

4. The default username is azureadm.

5. Select SSH Private Key from Azure Key Vault.

6. Select the subscription that contains the control plane.

7. Select the deployer key vault.

8. From the list of secrets, choose the secret that ends with -sshkey.

9. Connect to the VM.

Run the following script to configure the deployer:

Bash

mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_

wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh

# Source the new variables

. /etc/profile.d/deploy_server.sh

The script installs Terraform and Ansible and configures the deployer.

Manually configure a virtual machine as an SDAF deployer


Connect to the deployer VM from a computer that can reach the Azure virtual network.

To connect to the deployer:

1. Sign in to the Azure portal .

2. Select or search for Key vaults.


3. On the Key vault page, find the deployer key vault. The name starts with
MGMT[REGION]DEP00user . Filter by the Resource group or Location, if necessary.

4. On the Settings section in the left pane, select Secrets.

5. Find and select the secret that contains sshkey. It might look like MGMT-[REGION]-
DEP00-sshkey .

6. On the secret's page, select the current version. Then copy the Secret value.

7. Open a plain text editor. Copy the secret value.

8. Save the file where you keep SSH keys. An example is C:\Users\<your-
username>\.ssh .

9. Save the file. If you're prompted to Save as type, select All files if SSH isn't an
option. For example, use deployer.ssh .

10. Connect to the deployer VM through any SSH client, such as Visual Studio Code.
Use the private IP address of the deployer and the SSH key you downloaded. For
instructions on how to connect to the deployer by using Visual Studio Code, see
Connect to the deployer by using Visual Studio Code. If you're using PuTTY,
convert the SSH key file first by using PuTTYGen.

Note

The default username is azureadm.

Configure the deployer by using the following script:

Bash

mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_

wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh

# Source the new variables

. /etc/profile.d/deploy_server.sh

The script installs Terraform and Ansible and configures the deployer.
Securing the control plane
The control plane is the most critical part of the SAP automation framework, so it's important to secure it. If you created your
control plane by using an external virtual machine or the cloud shell, secure it by implementing private endpoints for the storage
accounts and key vaults.

You can use the sync_deployer.sh script to copy the control plane configuration files to
the deployer VM. Sign in to the deployer VM and run the following commands:

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname mgmtweeutfstate### --state_subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Ensure that the use_private_endpoint variable is set to true in the DEPLOYER and
LIBRARY configuration files. Also ensure that public_network_access_enabled is set to
false in the DEPLOYER configuration files.

Terraform

# use_private_endpoint defines that the storage accounts and key vaults have
private endpoints enabled
use_private_endpoint = true

# public_network_access_enabled controls if storage account and key vaults


have public network access enabled
public_network_access_enabled = false

Rerun the control plane deployment to enable private endpoints for the storage
accounts and key vaults.

Bash

export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP00"
export storageaccountname=<storageaccountname>

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}"
--tenant "${ARM_TENANT_ID}"

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--deployer_parameter_file "${deployer_parameter_file}" \
--library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}" \
--storageaccountname "${storageaccountname}" \
--recover

Prepare the web app


This step is optional. If you want a browser-based UX to help the configuration of SAP
workload zones and systems, run the following commands before you deploy the
control plane.

Windows

PowerShell

Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'

$region_code="WEEU"

$env:TF_VAR_app_registration_app_id = (az ad app create `
    --display-name $region_code-webapp-registration `
    --required-resource-accesses ./manifest.json `
    --query "appId").Replace('"',"")

$env:TF_VAR_webapp_client_secret=(az ad app credential reset `
    --id $env:TF_VAR_app_registration_app_id --append `
    --query "password").Replace('"',"")

$env:TF_VAR_use_webapp="true"

del manifest.json

Next step
Configure SAP workload zone
Workload zone deployment in the SAP automation framework
Article • 02/27/2024

An SAP application typically has multiple development tiers. For example, you might
have development, quality assurance, and production tiers. SAP Deployment
Automation Framework calls these tiers workload zones.

You can use workload zones in multiple Azure regions. Each workload zone then has its
own instance of Azure Virtual Network.

The following services are provided by the SAP workload zone:

A virtual network, including subnets and network security groups
An Azure Key Vault instance, for system credentials
An Azure Storage account for boot diagnostics
A Storage account for cloud witnesses
An Azure NetApp Files account and capacity pools (optional)
Azure Files NFS shares (optional)
Azure Monitor for SAP (optional)

The workload zones are typically deployed in spokes in a hub-and-spoke architecture.
They can be in their own subscriptions.

The private DNS is supported from the control plane or from a configurable source.

Core configuration
The following example parameter file shows only required parameters.

Bash
# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment="DEV"

# The location value is a mandatory field, it is used to control where the resources are deployed
location="westeurope"

# The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name
network_name="SAP01"

# network_address_space is a mandatory parameter when an existing virtual network is not used
network_address_space="10.110.0.0/16"

# admin_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used
admin_subnet_address_prefix="10.110.0.0/19"

# db_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used
db_subnet_address_prefix="10.110.96.0/19"

# app_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used
app_subnet_address_prefix="10.110.32.0/19"

# The automation_username defines the user account used by the automation
automation_username="azureadm"

Prepare the workload zone deployment credentials

SAP Deployment Automation Framework uses service principals for deployment. To
create the service principal for the workload zone deployment, use an account with
permissions to create service principals.

Azure CLI

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"

Important
The name of the service principal must be unique.

Record the output values from the command:

appId
password
tenant
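
For example, you can export these values as the environment variables that the deployment scripts in this documentation expect. The following is a minimal sketch; the placeholder values are assumptions that you replace with your own output:

Bash

# Values from the service principal creation output (placeholders)
export ARM_SUBSCRIPTION_ID="<subscriptionID>"
export ARM_CLIENT_ID="<appId>"          # appId
export ARM_CLIENT_SECRET="<password>"   # password
export ARM_TENANT_ID="<tenant>"         # tenant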

Assign the correct permissions to the service principal.

Azure CLI

az role assignment create --assignee <appId> \
    --scope /subscriptions/<subscriptionID> \
    --role "User Access Administrator"

Deploy the SAP workload zone

The sample workload zone configuration file DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars is
located in the ~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE folder.

Run the following command to deploy the SAP workload zone.

Windows

It isn't possible to perform the deployment from Windows.

To begin, be sure to replace:

The sample value <subscriptionID> with your subscription ID.
The <appID> , <password> , and <tenant> values with the output values of the SPN creation.
The <keyvault> value with the deployer key vault name.
The <storageaccount> value with the name of the storage account that contains the Terraform state files.
The <statefile_subscription> value with the subscription ID for the storage account that contains the Terraform state files.
Tip

If the scripts fail to run, it can sometimes help to clear the local cache files by
removing the ~/.sap_deployment_automation/ and ~/.terraform.d/ directories
before you run the scripts again.

Next step
SAP system deployment with the automation framework
SAP system deployment for the
automation framework
Article • 08/24/2023

The creation of the SAP system is part of the SAP Deployment Automation Framework
process. The SAP system deployment creates your virtual machines (VMs) and
supporting components for your SAP application.

The SAP system deploys:

The database tier, which deploys database VMs, their disks, and a Standard
instance of Azure Load Balancer. You can run HANA databases or AnyDB databases
in this tier.
The SAP central services tier, which deploys a customer-defined number of VMs
and a Standard instance of Load Balancer.
The application tier, which deploys the VMs and their disks.
The web dispatcher tier.

Application tier
The application tier deploys a customer-defined number of VMs. These VMs are size
Standard_D4s_v3 with a 30-GB operating system (OS) disk and a 512-GB data disk.

To set the application server count, define the parameter application_server_count for
this tier in your parameter file. For example, use application_server_count = 3 .

Central services tier
The SAP central services (SCS) tier deploys a customer-defined number of VMs. These
VMs are size Standard_D4s_v3 with a 30-GB OS disk and a 512-GB data disk. This tier
also deploys a Standard instance of Load Balancer.

To set the SCS server count, define the parameter scs_server_count for this tier in your
parameter file. For example, use scs_server_count=1 .

Web dispatcher tier
The web dispatcher tier deploys a customer-defined number of VMs. This tier also
deploys a Standard instance of Load Balancer.
To set the web server count, define the parameter web_server_count for this tier in your
parameter file. For example, use web_server_count = 2 .

Database tier
The database tier deploys the VMs and their disks and also deploys a Standard instance
of Load Balancer. You can use either HANA databases or AnyDB databases as your
database VMs.

You can set the size of database VMs with the parameter size for this tier. For example,
use "size": "S4Demo" for HANA databases or "size": "1 TB" for AnyDB databases. For
possible values, see the Size parameter in the tables of HANA database VM options and
AnyDB database VM options.

By default, the automation framework calculates the disk configuration for you: for
HANA database deployments, the default disk configuration is based on the VM size;
for AnyDB database deployments, it's based on the database size. If you need different
disk sizes, create a custom JSON file in your deployment and replace the values as
necessary for your configuration. Then, define the parameter db_disk_sizes_filename in
the parameter file for the database tier. An example is db_disk_sizes_filename =
"path/to/JSON/file" .

You can also add extra disks to a new system or add extra disks to an existing system.

Core configuration
The following example parameter file shows only required parameters.

Bash

# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment="DEV"

# The location value is a mandatory field, it is used to control where the resources are deployed
location="westeurope"

# The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name
network_name="SAP01"
# sid is a mandatory field that defines the SAP Application SID
sid="S15"

app_tier_vm_sizing="Production"
app_tier_use_DHCP=true

database_platform="HANA"

database_size="S4Demo"
database_sid="XDB"

database_vm_use_DHCP=true

database_vm_image={
os_type="linux"
source_image_id=""
publisher="SUSE"
offer="sles-sap-15-sp2"
sku="gen2"
version="latest"
}

# application_server_count defines how many application servers to deploy
application_server_count=2

application_server_image= {
os_type=""
source_image_id=""
publisher="SUSE"
offer="sles-sap-15-sp2"
sku="gen2"
version="latest"
}

scs_server_count=1

# scs_instance_number
scs_instance_number="00"

# ers_instance_number
ers_instance_number="02"

# webdispatcher_server_count defines how many web dispatchers to deploy
webdispatcher_server_count=0

Deploy the SAP system
The sample SAP system configuration file DEV-WEEU-SAP01-X01.tfvars is located in the
~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01 folder.

Run the following command to deploy the SAP system.

Windows

You can copy the sample configuration files to start testing the deployment
automation framework.

PowerShell

cd C:\Azure_SAP_Automated_Deployment

xcopy sap-automation\deploy\samples\WORKSPACES WORKSPACES

PowerShell

cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X01

New-SAPSystem -Parameterfile DEV-WEEU-SAP01-X01.tfvars -Type sap_system

Output files
The deployment creates an Ansible hosts file ( SID_hosts.yaml ) and an Ansible
parameter file ( sap-parameters.yaml ). These files are required input for the Ansible
playbooks.

Next step
Workload zone deployment with automation framework
Get started with Ansible configuration
Article • 03/12/2024

When you use SAP Deployment Automation Framework, you can perform an automated
infrastructure deployment. You can also do the required operating system
configurations and install SAP by using Ansible playbooks provided in the repository.
These playbooks are located in the automation framework repository in the /sap-
automation/deploy/ansible folder.


Filename Description

playbook_01_os_base_config.yaml Base operating system configuration

playbook_02_os_sap_specific_config.yaml SAP-specific operating system configuration

playbook_03_bom_processing.yaml SAP Bill of Materials processing

playbook_04_00_00_hana_db_install.yaml SAP HANA database installation

playbook_05_00_00_sap_scs_install.yaml SAP central services installation

playbook_05_01_sap_dbload.yaml Database loader

playbook_04_00_01_hana_hsr.yaml SAP HANA high-availability configuration

playbook_05_02_sap_pas_install.yaml SAP primary application server installation

playbook_05_03_sap_app_install.yaml SAP application server installation

playbook_05_04_sap_web_install.yaml SAP Web Dispatcher installation

Prerequisites
The Ansible playbooks require the sap-parameters.yaml and SID_hosts.yaml files in the
current directory.

Configuration files
The sap-parameters.yaml file contains information that Ansible uses for configuration of
the SAP infrastructure.

YAML
---

# bom_base_name is the name of the SAP Application Bill of Materials file
bom_base_name: S41909SPS03_v0010ms
# Set to true to instruct Ansible to update all the packages on the virtual machines
upgrade_packages: false

# TERRAFORM CREATED
sap_fqdn: sap.contoso.net
# kv_name is the name of the key vault containing the system credentials
kv_name: LABSECESAP01user###
# secret_prefix is the prefix for the name of the secret stored in key vault
secret_prefix: LAB-SECE-SAP01

# sap_sid is the application SID
sap_sid: L00
# scs_high_availability is a boolean flag indicating
# if the SAP Central Services are deployed using high availability
scs_high_availability: false
# SCS Instance Number
scs_instance_number: "00"
# scs_lb_ip is the SCS IP address of the load balancer in
# front of the SAP Central Services virtual machines
scs_lb_ip: 10.110.32.26
# ERS Instance Number
ers_instance_number: "02"
# ers_lb_ip is the ERS IP address of the load balancer in
# front of the SAP Central Services virtual machines
ers_lb_ip:

# db_sid is the database SID
db_sid: XDB
# platform
platform: HANA

# db_high_availability is a boolean flag indicating if the
# SAP database servers are deployed using high availability
db_high_availability: false
# db_lb_ip is the IP address of the load balancer in front of the database virtual machines
db_lb_ip: 10.110.96.13

disks:
  - { host: 'l00dxdb00l0538', LUN: 0, type: 'sap' }
  - { host: 'l00dxdb00l0538', LUN: 10, type: 'data' }
  - { host: 'l00dxdb00l0538', LUN: 11, type: 'data' }
  - { host: 'l00dxdb00l0538', LUN: 12, type: 'data' }
  - { host: 'l00dxdb00l0538', LUN: 13, type: 'data' }
  - { host: 'l00dxdb00l0538', LUN: 20, type: 'log' }
  - { host: 'l00dxdb00l0538', LUN: 21, type: 'log' }
  - { host: 'l00dxdb00l0538', LUN: 22, type: 'log' }
  - { host: 'l00dxdb00l0538', LUN: 2, type: 'backup' }
  - { host: 'l00app00l538', LUN: 0, type: 'sap' }
  - { host: 'l00app01l538', LUN: 0, type: 'sap' }
  - { host: 'l00scs00l538', LUN: 0, type: 'sap' }

...

The L00_hosts.yaml file is the inventory file that Ansible uses for configuration of the
SAP infrastructure. The L00 label might differ for your deployments.

YAML

L00_DB:
  hosts:
    l00dxdb00l0538:
      ansible_host: 10.110.96.12
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: hana

L00_SCS:
  hosts:
    l00scs00l538:
      ansible_host: 10.110.32.25
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: scs

L00_ERS:
  hosts:
  vars:
    node_tier: ers

L00_PAS:
  hosts:
    l00app00l538:
      ansible_host: 10.110.32.24
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: pas

L00_APP:
  hosts:
    l00app01l538:
      ansible_host: 10.110.32.15
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: app

L00_WEB:
  hosts:
  vars:
    node_tier: web

Run a playbook
Make sure that you download the SAP software to your Azure environment before you
run this step.

One way you can run the playbooks is to use the configuration menu.

Run the configuration_menu script.

Bash

${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh

To run a playbook or multiple playbooks, use the following ansible-playbook command.
This example runs the operating system configuration playbook.

Bash

sap_params_file=sap-parameters.yaml

if [[ ! -e "${sap_params_file}" ]]; then
    echo "Error: '${sap_params_file}' file not found!"
    exit 1
fi

# Extract the sap_sid from the sap_params_file, so that we can determine
# the inventory file name to use.
sap_sid="$(awk '$1 == "sap_sid:" {print $2}' ${sap_params_file})"

kv_name="$(awk '$1 == "kv_name:" {print $2}' ${sap_params_file})"

prefix="$(awk '$1 == "secret_prefix:" {print $2}' ${sap_params_file})"

password_secret_name=$prefix-sid-password

# Use tsv output so that only the secret value is captured
password_secret=$(az keyvault secret show --vault-name ${kv_name} --name ${password_secret_name} --query value --output tsv)

export ANSIBLE_PASSWORD=$password_secret
export ANSIBLE_INVENTORY="${sap_sid}_hosts.yaml"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
export ANSIBLE_COLLECTIONS_PATHS=/opt/ansible/collections${ANSIBLE_COLLECTIONS_PATHS:+${ANSIBLE_COLLECTIONS_PATHS}}
export ANSIBLE_REMOTE_USER=azureadm

export ANSIBLE_PYTHON_INTERPRETER=auto_silent

# Set of options that will be passed to the ansible-playbook command
playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@${sap_params_file}"
    -e ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    "${@}"
)

# Run the operating system configuration playbook
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml

Operating system configuration
The operating system configuration playbook is used to configure the operating system
of the SAP virtual machines.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Core Operating System Configuration .
The configuration menu script configuration_menu.sh .
The command line.

Windows
The following tasks are executed on Windows virtual machines:

Ensure that all the components are installed:
  StorageDsc
  NetworkingDsc
  ComputerManagementDsc
  PSDesiredStateConfiguration
  WindowsDefender
  ServerManager
  SecurityPolicyDsc
  Visual C++ runtime libraries
  ODBC drivers

Configure the swap file size.

Initialize the disks.

Configure Windows Firewall.

Join the virtual machine to the specified domain.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

# Retrieve the SID password so that the ANSIBLE_PASSWORD lookup below resolves
password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to perform the Operating System configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
SAP-specific operating system configuration
The SAP-specific operating system configuration playbook is used to configure the
operating system of the SAP virtual machines. The playbook performs the following
tasks.

Windows

The following tasks are executed on Windows virtual machines:

Add local groups and permissions.
Connect to the Windows file shares.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to perform the SAP-specific operating system configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_02_os_sap_specific_config.yaml
Local software download
This playbook downloads the installation media from the control plane to the
installation media source. The installation media can be shared out from the central
services instance or from Azure Files or Azure NetApp Files.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Local software download .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The following tasks are executed on the central services instance virtual machine:

Download the software from the storage account and make it available for the
other virtual machines.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to download the software from the SAP Library
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml

SAP Central Services and high-availability configuration
This playbook performs the Central Services installation. For high-availability scenarios,
the playbook also configures the Pacemaker cluster needed for SAP Central Services for
high availability on Linux and Windows Failover Clustering for Windows.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing SCS Installation & High Availability Configuration .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following tasks:

Central Services installation
Windows failover cluster configuration

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to perform the SCS installation and high-availability configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml

Database installation
This playbook performs the database server installation.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Database installation .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following task:

Database instance installation

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to install the database instance
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml

Database load
This playbook performs the Database load.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Database Load .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following task:

Database load

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to perform the database load
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml

Database high-availability configuration
This playbook performs the database server high-availability configuration.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Database High Availability Configuration .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following tasks:

Database high-availability configuration
SQL Server Always On availability group configuration

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to perform the database high-availability configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml

Primary application server installation
This playbook performs the installation of the primary application server.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Primary Application Server Installation .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following task:

Primary application server installation

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to install the primary application server
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml

Additional application server installation
This playbook performs the installation of the application servers.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Application Server Installation .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following task:

Application server installation

Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to install the additional application servers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_03_sap_app_install.yaml

Web Dispatcher installation
This playbook performs the installation of the Web Dispatchers.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Web Dispatcher Installation .
The configuration menu script configuration_menu.sh .
The command line.

Windows

The playbook performs the following task:

Web Dispatcher installation

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to install the Web Dispatchers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml

ACSS registration
This playbook performs the Azure Center for SAP Solutions (ACSS) registration.

You can run the playbook by using either:

The DevOps pipeline Configuration and SAP installation by choosing Register System in ACSS .
The configuration menu script configuration_menu.sh .
The command line.
Windows

The playbook performs the following task:

ACSS registration

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
prefix="LAB-SECE-SAP04"

password_secret_name=$prefix-sid-password

password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to register the system in Azure Center for SAP solutions
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_06_00_acss_registration.yaml
Tutorial: Deploy SAP Deployment
Automation Framework for enterprise
scale
Article • 03/12/2024

This tutorial shows you how to perform deployments by using SAP Deployment
Automation Framework. This example uses Azure Cloud Shell to deploy the control
plane infrastructure. The deployer virtual machine (VM) creates the remaining
infrastructure and SAP HANA configurations.

In this tutorial, you perform the following tasks:

" Deploy the control plane (deployer infrastructure and library).


" Deploy the workload zone (landscape and system).
" Download/Upload Bill of Materials.
" Configure standard and SAP-specific operating system settings.
" Install the HANA database.
" Install the SAP Central Services (SCS) server.
" Load the HANA database.
" Install the primary application server.

There are three main steps of an SAP deployment on Azure with the automation
framework:

1. Prepare the region. You deploy components to support the SAP automation
framework in a specified Azure region. In this step, you:
a. Create the deployment environment.
b. Create shared storage for Terraform state files.
c. Create shared storage for SAP installation media.

2. Prepare the workload zone. You deploy the workload zone components, such as
the virtual network and key vaults.

3. Deploy the system. You deploy the infrastructure for the SAP system.

There are several workflows in the deployment automation process. This tutorial focuses
on one workflow for ease of deployment. You can deploy this workflow, the SAP S/4HANA
standalone environment, by using Bash. This tutorial describes the general hierarchy
and different phases of the deployment.
Environment overview
SAP Deployment Automation Framework has two main components:

Deployment infrastructure (control plane)


SAP infrastructure (SAP workload)

The following diagram shows the dependency between the control plane and the
application plane.
The framework uses Terraform for infrastructure deployment and Ansible for the
operating system and application configuration. The following diagram shows the
logical separation of the control plane and workload zone.

Management zone
The management zone contains the control plane infrastructure from which other
environments are deployed. After the management zone is deployed, you rarely, if ever,
need to redeploy.
The deployer is the execution engine of the SAP automation framework. This
preconfigured VM is used for executing Terraform and Ansible commands.

The SAP Library provides the persistent storage for the Terraform state files and the
downloaded SAP installation media for the control plane.

You configure the deployer and the library in a Terraform .tfvars variable file. For more
information, see Configure the control plane.

Workload zone
An SAP application typically has multiple deployment tiers. For example, you might have
development, quality assurance, and production tiers. SAP Deployment Automation
Framework calls these tiers workload zones.
The SAP workload zone contains the networking and shared components for the SAP
VMs. These components include route tables, network security groups, and virtual
networks. The landscape provides the opportunity to divide deployments into different
environments. For more information, see Configure the workload zone.

The system deployment consists of the VMs to run the SAP application, including the
web, app, and database tiers. For more information, see Configure the SAP system.

Prerequisites
The SAP Deployment Automation Framework repository is available on GitHub.

You need to deploy Azure Bastion or use a Secure Shell (SSH) client to connect to the
deployer. Use any SSH client that you feel comfortable with.

Review the Azure subscription quota

Ensure that your Azure subscription has a sufficient core quota for the Ddsv4 and Edsv4
family SKUs in the selected region. About 50 cores available for each VM family should
suffice.
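
You can check the current vCPU usage and limits with the Azure CLI. The following is a sketch, assuming Sweden Central as the target region; adjust the region and the family filter as necessary:

cloudshell

# List vCPU usage and limits for the Ddsv4 and Edsv4 VM families
az vm list-usage --location "swedencentral" --output table | grep -E "DDSv4|EDSv4"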

S-User account for SAP software download
A valid SAP user account (SAP-User or S-User account) with software download
privileges is required to download the SAP software.

Set up Cloud Shell
1. Go to Azure Cloud Shell .

2. Sign in to your Azure account.

cloudshell
az login

Authenticate your sign-in. Don't close the window until you're prompted.

3. Validate your active subscription and record your subscription ID:

cloudshell

az account list --query "[?isDefault].{Name: name, CloudName: cloudName, SubscriptionId: id, State: state, IsDefault: isDefault}" --output=table

Or:

cloudshell

az account list --output=table | grep True

4. If necessary, change your active subscription.

cloudshell

az account set --subscription <Subscription ID>

Validate that your active subscription changed.

cloudshell

az account list --query "[?isDefault].{Name: name, CloudName: cloudName, SubscriptionId: id, State: state, IsDefault: isDefault}" --output=table

5. Optionally, remove all the deployment artifacts. Use this command when you want
to remove all remnants of previous deployment artifacts.

cloudshell

cd ~

rm -rf Azure_SAP_Automated_Deployment .sap_deployment_automation .terraform.d

6. Create the deployment folder and clone the repository.


cloudshell

mkdir -p ${HOME}/Azure_SAP_Automated_Deployment; cd $_

git clone https://github.com/Azure/sap-automation-bootstrap.git config

git clone https://github.com/Azure/sap-automation.git sap-automation

git clone https://github.com/Azure/sap-automation-samples.git samples

cp -Rp samples/Terraform/WORKSPACES ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES

7. Optionally, validate the versions of Terraform and the Azure CLI available on your
instance of Cloud Shell.

cloudshell

./sap-automation/deploy/scripts/helpers/check_workstation.sh

To run the automation framework, update to the following versions:

az version 2.5.0 or higher.
terraform version 1.5 or higher. Upgrade by using the Terraform instructions, as necessary.

Create a service principal
The SAP automation deployment framework uses service principals for deployment.
Create a service principal for your control plane deployment. Make sure to use an
account with permissions to create service principals.

When you choose a name for your service principal, make sure that the name is unique
within your Azure tenant.

1. Give the service principal Contributor and User Access Administrator permissions.

cloudshell

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"

az ad sp create-for-rbac --role="Contributor" \
--scopes="/subscriptions/${ARM_SUBSCRIPTION_ID}" \
--name="${control_plane_env_code}-Deployment-Account"

Review the output. For example:

JSON

{
"appId": "<AppId>",
"displayName": "<environment>-Deployment-Account ",
"name": "<AppId>",
"password": "<AppSecret>",
"tenant": "<TenantId>"
}

2. Copy down the output details. Make sure to save the values for appId , password ,
and tenant .

The output maps to the following parameters. You use these parameters in later
steps, with automation commands.


Parameter input name Output name

spn_id appId

spn_secret password

tenant_id tenant

3. Optionally, assign the User Access Administrator role to the service principal.

cloudshell

export appId="<appId>"

az role assignment create --assignee ${appId} \
    --role "User Access Administrator" \
    --scope /subscriptions/${ARM_SUBSCRIPTION_ID}

Important

If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions by using the automation.
Configure the control plane web application
credentials
As a part of the SAP automation framework control plane, you can optionally create an
interactive web application that assists you in creating the required configuration files.

Create an app registration
If you want to use the web app, you must first create an app registration for
authentication purposes. Open Cloud Shell and run the following commands:

Replace LAB with your environment, as necessary.

Bash

export env_code="LAB"

echo '[{"resourceAppId":"00000003-0000-0000-c000-
000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-
88639da4683d","type":"Scope"}]}]' >> manifest.json

export TF_VAR_app_registration_app_id=$(az ad app create \
    --display-name ${env_code}-webapp-registration \
    --enable-id-token-issuance true \
    --sign-in-audience AzureADMyOrg \
    --required-resource-access @manifest.json \
    --query "appId" --output tsv )

# remove the placeholder manifest.json
rm manifest.json

export TF_VAR_webapp_client_secret=$(az ad app credential reset \
    --id $TF_VAR_app_registration_app_id --append \
    --query "password" --output tsv )

export TF_use_webapp=true

echo "App registration ID: ${TF_VAR_app_registration_app_id}"


echo "App registration password: ${TF_VAR_webapp_client_secret}"

Note

Ensure that you're logged on by using a user account that has the required
permissions to create application registrations. For more information about app
registrations, see Create an app registration.

Copy down the output details. Make sure to save the values for App registration ID
and App registration password .

The output maps to the following parameters. You use these parameters in later steps,
with automation commands.


Parameter input name Output name

app_registration_app_id App registration ID

webapp_client_secret App registration password

View configuration files
1. Open Visual Studio Code from Cloud Shell.

cloudshell

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
code .
2. Expand the WORKSPACES directory. There are six subfolders: CONFIGURATION ,
DEPLOYER , LANDSCAPE , LIBRARY , SYSTEM , and BOMS . Expand each of these folders to
find regional deployment configuration files.
find regional deployment configuration files.

3. Find the Terraform variable files in the appropriate subfolder. For example, the
DEPLOYER Terraform variable file might look like this example:

Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP.
environment = "LAB"
# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "swedencentral"

# management_network_address_space is the address space for management virtual network
management_network_address_space = "10.10.20.0/25"
# management_subnet_address_prefix is the address prefix for the management subnet
management_subnet_address_prefix = "10.10.20.64/28"

# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet
management_firewall_subnet_address_prefix = "10.10.20.0/26"
firewall_deployment = true

# management_bastion_subnet_address_prefix is the address prefix for the bastion subnet
management_bastion_subnet_address_prefix = "10.10.20.128/26"
bastion_deployment = true

# deployer_enable_public_ip controls if the deployer Virtual machines will have Public IPs
deployer_enable_public_ip = true

# deployer_count defines how many deployer VMs will be deployed
deployer_count = 1

# use_service_endpoint defines that the management subnets have service endpoints enabled
use_service_endpoint = true

# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = false

# enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled
enable_firewall_for_keyvaults_and_storage = false
# public_network_access_enabled controls if storage account and key vaults have public network access enabled
public_network_access_enabled = true

Note the Terraform variable file locations for future edits during deployment.

4. Find the Terraform variable files for the SAP Library in the appropriate subfolder.
For example, the LIBRARY Terraform variable file might look like this example:

Terraform

# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP.
environment = "LAB"
# The location/region value is a mandatory field, it is used to control where the resources are deployed
location = "swedencentral"

# Defines the DNS suffix for the resources
dns_label = "lab.sdaf.contoso.net"

# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = false

Note the Terraform variable file locations for future edits during deployment.

Important

Ensure that the dns_label matches your instance of Azure Private DNS.
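
You can list the Private DNS zones in your subscription to verify the value. This is a sketch using the Azure CLI:

cloudshell

# List the Azure Private DNS zones visible to the current account
az network private-dns zone list --output table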

Deploy the control plane
Use the deploy_controlplane.sh script to deploy the deployer and library. These
deployment pieces make up the control plane for a chosen automation area.

The deployment goes through cycles of deploying the infrastructure, refreshing the
state, and uploading the Terraform state files to the library storage account. All of these
steps are packaged into a single deployment script. The script needs the location of the
configuration file for the deployer and library, and some other parameters.

For example, choose Sweden Central as the deployment location, with the four-character
name SECE , as previously described. The sample deployer configuration file LAB-SECE-
DEP05-INFRASTRUCTURE.tfvars is in the
${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-SECE-DEP05-INFRASTRUCTURE folder.

The sample SAP Library configuration file LAB-SECE-SAP_LIBRARY.tfvars is in the
${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-SECE-SAP_LIBRARY folder.

1. Set the environment variables for the service principal:

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"

If you're running the script from a workstation that isn't part of the deployment
network or from Cloud Shell, you can use the following command to set the
environment variable for allowing connectivity from your IP address:

Bash

export TF_VAR_Agent_IP=<your-public-ip-address>

If you're deploying the configuration web application, you need to also set the
following environment variables:

Bash

export TF_VAR_app_registration_app_id=<appRegistrationId>
export TF_VAR_webapp_client_secret=<appRegistrationPassword>
export TF_use_webapp=true

2. Create the deployer and the SAP Library and add the service principal details to the
deployment key vault by using this script:

Bash

export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd $CONFIG_REPO_PATH

az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--deployer_parameter_file "${deployer_parameter_file}" \
--library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}"

If you run into authentication issues, run az logout to sign out and clear the
token cache. Then run az login to reauthenticate.

Wait for the automation framework to run the Terraform plan and apply operations.

The deployment of the deployer might run for about 15 to 20 minutes.

You need to note some values for upcoming steps. Look for this text block in the
output:

text

#########################################################################################
#
# Please save these values:
#
# - Key Vault: LABSECEDEP05user39B
#
# - Deployer IP: x.x.x.x
#
# - Storage Account: labsecetfstate53e
#
# - Web Application Name: lab-sece-sapdeployment39B
#
# - App registration Id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#
#########################################################################################

3. Go to the Azure portal .

Select Resource groups. Look for new resource groups for the deployer
infrastructure and library. For example, you might see LAB-[region]-DEP05-
INFRASTRUCTURE and LAB-[region]-SAP_LIBRARY .

The contents of the deployer and SAP Library resource group are shown here.

The Terraform state file is now placed in the storage account whose name contains
tfstate . The storage account has a container named tfstate with the deployer and
library state files. The contents of the tfstate container after a successful control
plane deployment are shown here.
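
You can also verify the result from the command line. The following sketch assumes the resource group prefix and the storage account name from the preceding output; yours will differ:

cloudshell

# List the new control plane resource groups
az group list --query "[?contains(name, 'LAB-')].name" --output table

# List the state files in the tfstate container (requires a data-plane role such as Storage Blob Data Reader)
az storage blob list --account-name labsecetfstate53e --container-name tfstate --query "[].name" --output tsv --auth-mode login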

Common issues and solutions
Here are some troubleshooting tips:

If you get the following error for the deployer module creation, make sure that
you're in the WORKSPACES directory when you run the script:

text

Incorrect parameter file.
The file must contain the environment attribute!!

The following error is transient. Rerun the same command, deploy_controlplane.sh .

text

Error: file provisioner error
..
timeout - last error: dial tcp

If you have authentication issues directly after you run the script
deploy_controlplane.sh , run this command:

Azure CLI

az logout

az login

Connect to the deployer VM
After the control plane is deployed, the Terraform state is stored by using the remote
back-end azurerm . All secrets for connecting to the deployer VM are available in a key
vault in the deployer's resource group.

To connect to your deployer VM:

1. Sign in to the Azure portal .

2. Select or search for Key vaults.

3. On the Key vault page, find the deployer key vault. The name starts with
LAB[REGION]DEP05user . Filter by Resource group or Location, if necessary.

4. In the Settings section of the left pane, select Secrets.


5. Find and select the secret that contains sshkey. It might look like LAB-[REGION]-
DEP05-sshkey .

6. On the secret's page, select the current version. Then, copy the secret value.

7. Open a plain text editor. Copy in the secret value.

8. Save the file where you keep SSH keys. For example, use C:\Users\<your-username>\.ssh .

9. Save the file. If you're prompted to Save as type, select All files if SSH isn't an
option. For example, use deployer.ssh .

10. Connect to the deployer VM through any SSH client, such as Visual Studio Code.
Use the public IP address you noted earlier and the SSH key you downloaded. For
instructions on how to connect to the deployer by using Visual Studio Code, see
Connect to the deployer by using Visual Studio Code. If you're using PuTTY,
convert the SSH key file first by using PuTTYGen.
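
Alternatively, you can retrieve the key by using the Azure CLI instead of the portal. This is a sketch; the vault and secret names are the examples from the preceding steps and will differ in your deployment:

Bash

# Download the SSH key secret and connect to the deployer VM
az keyvault secret show --vault-name "LABSECEDEP05user39B" --name "LAB-SECE-DEP05-sshkey" --query value --output tsv > deployer.ssh
chmod 600 deployer.ssh
ssh -i deployer.ssh azureadm@<deployer-public-ip>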

Note

The default username is azureadm.

Ensure that you save the SSH key file in the correct format, that is, without
carriage return (CR) characters. Use Visual Studio Code or Notepad++.

After you're connected to the deployer VM, you can download the SAP software by
using the Bill of Materials (BOM).

Connect to the deployer VM when you're not using a public IP
For deployments without public IP connectivity, direct connectivity over the internet isn't
allowed. In these cases, you can use an Azure Bastion jump box or you can perform the
next step from a computer that has connectivity to the Azure virtual network.

The following example uses Azure Bastion.

To connect to the deployer:

1. Sign in to the Azure portal .


2. Go to the resource group that contains the deployer VM.

3. Connect to the VM by using Azure Bastion.

4. The default username is azureadm.

5. Select SSH Private Key from Azure Key Vault.

6. Select the subscription that contains the control plane.

7. Select the deployer key vault.

8. From the list of secrets, select the secret that ends with -sshkey.

9. Connect to the VM.

The rest of the tasks must be executed on the deployer.

Secure the control plane
The control plane is the most critical part of the SAP automation framework. It's
important to secure the control plane. The following steps help you secure the control
plane.

You should update the control plane tfvars file to enable private endpoints and to
block public access to the storage accounts and key vaults.

1. To copy the control plane configuration files to the deployer VM, you can use the
sync_deployer.sh script. Sign in to the deployer VM and update the following
command to use your Terraform state storage account name. Then, run the script:

Bash

terraform_state_storage_account=labsecetfstate###

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES

../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname $terraform_state_storage_account --state_subscription $ARM_SUBSCRIPTION_ID
This command copies the tfvars configuration files from the SAP Library's storage
account to the deployer VM.

2. Change the configuration files for the control plane to:

Terraform

# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = true

# enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled
enable_firewall_for_keyvaults_and_storage = true

# public_network_access_enabled controls if storage account and key vaults have public network access enabled
public_network_access_enabled = false

# if you want to use the webapp
use_webapp = true

3. Rerun the deployment to apply the changes. Update the storage account name
and key vault name in the script.

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"

4. Create the deployer and the SAP Library.

Bash

export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"

terraform_state_storage_account=labsecetfstate###
vault_name="LABSECEDEP05user###"

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd $CONFIG_REPO_PATH

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"

library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--deployer_parameter_file "${deployer_parameter_file}" \
--library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--storageaccountname "${terraform_state_storage_account}" \
--vault "${vault_name}"

Deploy the web application
You can deploy the web application by using the following script:

Bash

export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"
export webapp_name="<webAppName>"
export app_id="<appRegistrationId>"
export webapp_id="<webAppId>"

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-
automation"

cd $DEPLOYMENT_REPO_PATH
cd Webapp/SDAF

dotnet build SDAFWebApp.csproj
dotnet publish SDAFWebApp.csproj --output publish
cd publish

zip -r SDAF.zip .
az webapp deploy --resource-group ${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE --name $webapp_name --src-path SDAF.zip --type zip

az ad app update --id $app_id --web-home-page-url https://$webapp_name.azurewebsites.net --web-redirect-uris https://$webapp_name.azurewebsites.net/ https://$webapp_name.azurewebsites.net/.auth/login/aad/callback

az role assignment create --assignee $webapp_id --role reader --subscription $ARM_SUBSCRIPTION_ID --scope /subscriptions/$ARM_SUBSCRIPTION_ID

az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE --name $webapp_name

Collect workload zone information
1. Collect the following information in a text editor. This information was collected at
the end of the "Deploy the control plane" phase.

a. The name of the Terraform state file storage account in the library resource
group:

Following from the preceding example, the resource group is LAB-SECE-SAP_LIBRARY .

The name of the storage account contains labsecetfstate .

b. The name of the key vault in the deployer resource group:

Following from the preceding example, the resource group is LAB-SECE-DEP05-INFRASTRUCTURE .

The name of the key vault contains LABSECEDEP05user .

c. The public IP address of the deployer VM. Go to your deployer's resource
group, open the deployer VM, and copy the public IP address.

2. You need to collect the following piece of information:


a. The name of the deployer state file is found under the library resource group:

Select Library resource group > State storage account > Containers >
tfstate . Copy the name of the deployer state file.

Following from the preceding example, the name of the blob is LAB-SECE-
DEP05-INFRASTRUCTURE.terraform.tfstate .

3. If necessary, register the service principal. For this tutorial, this step isn't needed.
The first time an environment is instantiated, a service principal must be registered.
In this tutorial, the control plane is in the LAB environment and the workload zone
is also in LAB . For this reason, a service principal must be registered for the LAB
environment.

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appID>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenant>"
export key_vault="<vaultName>"
export env_code="LAB"
export region_code="SECE"

export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

Bash

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/set_secrets.sh \
--environment "${env_code}" \
--region "${region_code}" \
--vault "${key_vault}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}"
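If you prefer the command line to the portal for collecting the values from the
preceding steps, the following Azure CLI sketch retrieves them. It assumes the example
resource group names LAB-SECE-SAP_LIBRARY and LAB-SECE-DEP05-INFRASTRUCTURE used in this
tutorial; adjust them for your environment.

Azure CLI

# Name of the Terraform state storage account in the library resource group
az storage account list --resource-group LAB-SECE-SAP_LIBRARY \
    --query "[?contains(name,'tfstate')].name" --output tsv

# Name of the key vault in the deployer resource group
az keyvault list --resource-group LAB-SECE-DEP05-INFRASTRUCTURE \
    --query "[].name" --output tsv

# Public IP address of the deployer VM
az vm list-ip-addresses --resource-group LAB-SECE-DEP05-INFRASTRUCTURE \
    --query "[].virtualMachine.network.publicIpAddresses[0].ipAddress" --output tsv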

Prepare the workload zone deployment


Connect to your deployer VM for the following steps. A copy of the repo is now there.

Deploy the workload zone


Use the install_workloadzone script to deploy the SAP workload zone.

1. On the deployer VM, go to the Azure_SAP_Automated_Deployment folder.

Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/LAB-SECE-SAP04-INFRASTRUCTURE

2. Optionally, open the workload zone configuration file and, if needed, change the
network logical name to match the network name.

3. Start deployment of the workload zone. The details that you collected earlier are
needed here:

Name of the deployer tfstate file (found in the tfstate container)
Name of the tfstate storage account
Name of the deployer key vault

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"

Bash

export deployer_env_code="LAB"
export sap_env_code="LAB"
export region_code="SECE"

export deployer_vnet_code="DEP05"
export vnet_code="SAP04"

export tfstate_storage_account="<storageaccountName>"
export key_vault="<vaultName>"

export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"

parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate"

$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
    --parameterfile "${parameterFile}" \
    --deployer_environment "${deployer_env_code}" \
    --deployer_tfstate_key "${deployerState}" \
    --keyvault "${key_vault}" \
    --storageaccountname "${tfstate_storage_account}" \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
    --tenant_id "${ARM_TENANT_ID}"

The workload zone deployment should start automatically.

Wait for the deployment to finish. The new resource group appears in the Azure portal.
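To confirm the result from the command line instead, you can query the provisioning
state of the new resource group. A minimal check, assuming the example name
LAB-SECE-SAP04-INFRASTRUCTURE used in this tutorial:

Azure CLI

# Returns "Succeeded" when the workload zone resource group is fully provisioned
az group show --name LAB-SECE-SAP04-INFRASTRUCTURE \
    --query properties.provisioningState --output tsv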

Prepare to deploy the SAP system infrastructure

Connect to your deployer VM for the following steps. A copy of the repo is now there.

Go into the WORKSPACES/SYSTEM folder and copy the sample configuration files to use
from the repository.

Deploy the SAP system infrastructure


After the workload zone is finished, you can deploy the SAP system infrastructure
resources. The SAP system creates your VMs and supporting components for your SAP
application. Use the installer.sh script to deploy the SAP system.

The SAP system deploys:

The database tier, which deploys database VMs and their disks and an Azure
Standard Load Balancer instance. You can run HANA databases or AnyDB
databases in this tier.
The SCS tier, which deploys a customer-defined number of VMs and an Azure
Standard Load Balancer instance.
The application tier, which deploys the VMs and their disks.
The Web Dispatcher tier.

Deploy the SAP system.

Bash
export sap_env_code="LAB"
export region_code="SECE"
export vnet_code="SAP04"
export SID="L00"

export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-${SID}

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/installer.sh \
    --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-${SID}.tfvars" \
    --type sap_system

Check that the system resource group is now in the Azure portal.
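Besides the portal, you can list the deployed resources from the command line. A
minimal sketch, assuming the example system resource group name LAB-SECE-SAP04-L00:

Azure CLI

# List the VMs, disks, and load balancers created for the SAP system
az resource list --resource-group LAB-SECE-SAP04-L00 \
    --query "[].{name:name, type:type}" --output table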

Get SAP software by using the Bill of Materials


The automation framework gives you tools to download software from SAP by using the
SAP BOM. The software is downloaded to the SAP Library, which acts as the archive for
all media required to deploy SAP.

The SAP BOM mimics the SAP maintenance planner. There are relevant product
identifiers and a set of download URLs.

A sample extract of a BOM file looks like this example:

YAML

---
name:    'S41909SPS03_v0010'
target:  'S/4 HANA 1909 SPS 03'
version: 7

product_ids:
  dbl:    NW_ABAP_DB:S4HANA1909.CORE.HDB.ABAP
  scs:    NW_ABAP_ASCS:S4HANA1909.CORE.HDB.ABAP
  scs_ha: NW_ABAP_ASCS:S4HANA1909.CORE.HDB.ABAPHA
  pas:    NW_ABAP_CI:S4HANA1909.CORE.HDB.ABAP
  pas_ha: NW_ABAP_CI:S4HANA1909.CORE.HDB.ABAPHA
  app:    NW_DI:S4HANA1909.CORE.HDB.PD
  app_ha: NW_DI:S4HANA1909.CORE.HDB.ABAPHA
  web:    NW_Webdispatcher:NW750.IND.PD
  ers:    NW_ERS:S4HANA1909.CORE.HDB.ABAP
  ers_ha: NW_ERS:S4HANA1909.CORE.HDB.ABAPHA

materials:
  dependencies:
    - name: HANA_2_00_055_v0005ms

  media:
    # SAPCAR 7.22
    - name:        SAPCAR
      archive:     SAPCAR_1010-70006178.EXE
      checksum:    dff45f8df953ef09dc560ea2689e53d46a14788d5d184834bb56544d342d7b
      filename:    SAPCAR
      permissions: '0755'
      url:         https://softwaredownloads.sap.com/file/0020000002208852020

    # Kernel
    - name: "Kernel Part I ; OS: Linux on x86_64 64bit ; DB: Database independent"

For this example configuration, the resource group is LAB-SECE-DEP05-INFRASTRUCTURE .

The deployer key vault name contains LABSECEDEP05user . You use this information to
configure your deployer's key vault secrets.

1. Connect to your deployer VM for the following steps. A copy of the repo is now
there.

2. Add a secret with the username for your SAP user account. Replace <vaultName>
with the name of your deployer key vault. Also replace <sap-username> with your
SAP username.

Bash

export key_vault=<vaultName>
sap_username=<sap-username>

az keyvault secret set --name "S-Username" --vault-name $key_vault --value "${sap_username}";

3. Add a secret with the password for your SAP user account. Replace <vaultName>
with your deployer key vault name and replace <sap-password> with your SAP
password.

7 Note
The use of single quotation marks when you set sap_user_password is
important. The use of special characters in the password can otherwise cause
unpredictable results.

Azure CLI

sap_user_password='<sap-password>'

az keyvault secret set --name "S-Password" --vault-name "${key_vault}" --value="${sap_user_password}";

4. Configure your SAP parameters file for the download process. Then, download the
SAP software by using Ansible playbooks. Run the following commands:

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
mkdir BOMS
cd BOMS

vi sap-parameters.yaml

5. Update bom_base_name with the name of the BOM. Replace <vaultName> with the name of
the Azure key vault for the deployer resource group.

Your file should look similar to the following example configuration:

YAML

bom_base_name:    S42022SPS00_v0001ms
deployer_kv_name: <vaultName>
BOM_directory:    ${HOME}/Azure_SAP_Automated_Deployment/samples/SAP

6. Run the Ansible playbook to download the software. One way you can run the
playbooks is to use the Downloader menu. Run the download_menu script.

Bash

${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh

7. Select which playbooks to run.


Bash

1) BoM Downloader
2) Quit
Please select playbook:

Select the playbook 1) BoM Downloader to download the SAP software described in
the BOM file into the storage account. Check that the sapbits container has all
your media for installation.
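One way to check the container contents from the command line is the following sketch;
<storageAccountName> is a placeholder for the SAP library storage account that holds
the sapbits container.

Azure CLI

# List the downloaded media in the sapbits container
az storage blob list --account-name <storageAccountName> \
    --container-name sapbits --auth-mode login \
    --query "[].name" --output tsv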

You can run the playbook by using the configuration menu or directly from the
command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS/

export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    --extra-vars="bom_processing=true"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to download the software described in the BOM
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml

If you want, you can also pass the SAP user credentials as parameters.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS/

sap_username=<sap-username>
sap_user_password='<sap-password>'
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    --extra-vars="s_user=${sap_username}"
    --extra-vars="s_password=${sap_user_password}"
    --extra-vars="bom_processing=true"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to download the software described in the BOM
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml

SAP application installation


The SAP application installation happens through Ansible playbooks.

Go to the system deployment folder.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

Make sure you have the following files in the current folder: sap-parameters.yaml and
L00_hosts.yaml .

For a standalone SAP S/4HANA system, there are eight playbooks to run in sequence.
One way you can run the playbooks is to use the configuration menu.

Run the configuration_menu script.

Bash
${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh

Choose the playbooks to run.

Playbook: Base operating system configuration


This playbook performs the generic operating system configuration setup on all the
machines, which includes configuration of software repositories, packages, and services.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to perform the operating system configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml

Playbook: SAP-specific operating system configuration
This playbook performs the SAP operating system configuration setup on all the
machines. The steps include creation of volume groups and file systems and
configuration of software repositories, packages, and services.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to perform the SAP-specific operating system configuration
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_02_os_sap_specific_config.yaml

Playbook: BOM processing


This playbook downloads the SAP software to the SCS VM.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to download the software from the SAP Library
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml

Playbook: SCS installation


This playbook installs SAP Central Services. For highly available configurations, the
playbook also installs the SAP ERS instance and configures Pacemaker.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install SAP Central Services
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml

Playbook: Database instance installation


This playbook installs the database instances.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the database instance
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml

Playbook: Database load


This playbook invokes the database load task from the primary application server.
You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to load the database
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml

Playbook: Database high-availability setup


This playbook configures the database high availability. For HANA, it entails HANA
system replication and Pacemaker for the HANA database.

You can run the playbook by using the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to configure database high availability
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml

Playbook: Primary application server installation


This playbook installs the primary application server. You can run the playbook by using
the configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the primary application server
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml

Playbook: Application server installations


This playbook installs the application servers. You can run the playbook by using the
configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the application servers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_03_sap_app_install.yaml

Playbook: Web Dispatcher installations


This playbook installs the Web Dispatchers. You can run the playbook by using the
configuration menu or the command line.

Bash

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the Web Dispatchers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml

You've now deployed and configured a standalone HANA system. If you need to configure
a highly available (HA) SAP HANA database, run the HANA HA playbook.

Clean up the installation


It's important to clean up your SAP installation from this tutorial after you're finished.
Otherwise, you continue to incur costs related to the resources.

To remove the entire SAP infrastructure you deployed, you need to:

" Remove the SAP system infrastructure resources.


" Remove all workload zones (the landscape).
" Remove the control plane.

Run the removal of your SAP infrastructure resources and workload zones from the
deployer VM. Run the removal of the control plane from Cloud Shell.

Before you begin, sign in to your Azure account. Then, check that you're in the correct
subscription.
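A minimal sketch of that sign-in check; the subscription ID placeholder is the one you
used for the deployment.

Azure CLI

az login

# Show the active subscription and switch if needed
az account show --query "{name:name, id:id}" --output table
az account set --subscription "<subscriptionId>"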

Remove the SAP infrastructure


Go to the LAB-SECE-SAP04-L00 subfolder inside the SYSTEM folder. Then, run this
command:

Bash

export sap_env_code="LAB"
export region_code="SECE"
export sap_vnet_code="SAP04"

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-L00

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remover.sh \
    --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-L00.tfvars" \
    --type sap_system

Remove the SAP workload zone


Go to the LAB-SECE-SAP04-INFRASTRUCTURE subfolder inside the LANDSCAPE folder. Then,
run the following command:

Bash

export sap_env_code="LAB"
export region_code="SECE"
export sap_vnet_code="SAP04"

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remover.sh \
    --parameterfile ${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE.tfvars \
    --type sap_landscape

Remove the control plane


Sign in to Cloud Shell .

Go to the WORKSPACES folder.

Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/

Export the following two environment variables:

Bash

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="<subscriptionId>"

Run the following command:

Bash

export region_code="SECE"
export env_code="LAB"
export vnet_code="DEP05"

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \
    --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars

Verify that all resources are cleaned up.
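One way to verify from the command line is to list any resource groups that still use
this tutorial's naming prefix; an empty result means the cleanup finished. This sketch
assumes the LAB-SECE prefix used above.

Azure CLI

# Lists any remaining resource groups from this tutorial
az group list --query "[?starts_with(name,'LAB-SECE')].name" --output tsv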

Next step
Configure the control plane
Tutorial: Use SAP Deployment Automation Framework with DevOps
Article • 08/31/2023

This tutorial shows you how to perform the deployment activities of SAP Deployment
Automation Framework by using Azure DevOps Services.

In this tutorial, you learn how to:

" Deploy the control plane (deployer infrastructure and library).


" Deploy the workload zone (landscape and system).
" Deploy the SAP infrastructure.
" Install the HANA database.
" Install the SCS server.
" Load the HANA database.
" Install the primary application server.
" Download the SAP software.
" Install SAP.

Prerequisites
An Azure subscription. If you don't have an Azure subscription, you can create a
free account .

7 Note

The free Azure account might not be sufficient to run the deployment.

A service principal with Contributor permissions in the target subscriptions. For
more information, see Prepare the deployment credentials.

A configured Azure DevOps instance. For more information, see Configure Azure
DevOps Services for SAP Deployment Automation.

For the SAP software acquisition and the Configuration and SAP installation
pipelines, a configured self-hosted agent.

The self-hosted agent virtual machine is deployed as part of the control plane
deployment.
Overview
These steps reference and use the default naming convention for the automation
framework. Example values are also used for naming throughout the configurations. This
tutorial uses the following names:

The Azure DevOps Services project name is SAP-Deployment .


The Azure DevOps Services repository name is sap-automation .
The control plane environment is named MGMT . It's in the region West Europe
( WEEU ) and is installed in the virtual network DEP00 . The deployer configuration
name is MGMT-WEEU-DEP00-INFRASTRUCTURE .
The SAP workload zone has the environment name DEV . It's in the same region as
the control plane and uses the virtual network SAP01 . The SAP workload zone
configuration name is DEV-WEEU-SAP01-INFRASTRUCTURE .
The SAP system with SID X00 is installed in this SAP workload zone. The
configuration name for the SAP system is DEV-WEEU-SAP01-X00 .

| Artifact type | Configuration name | Location |
| --- | --- | --- |
| Control plane | MGMT-WEEU-DEP00-INFRASTRUCTURE | westeurope |
| Workload zone | DEV-WEEU-SAP01-INFRASTRUCTURE | westeurope |
| SAP system | DEV-WEEU-SAP01-X00 | westeurope |

The following diagram shows the deployed infrastructure.


7 Note

In this tutorial, the X00 SAP system is deployed with the following configuration:

Standalone deployment
HANA DB VM SKU: Standard_M32ts
ASCS VM SKU: Standard_D4s_v3
APP VM SKU: Standard_D4s_v3

Deploy the control plane


The deployment uses the configuration defined in the Terraform variable files located in
the samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE and
samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY folders.

Ensure that the Deployment_Configuration_Path variable in the SDAF-General variable
group is set to samples/WORKSPACES .
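If you prefer to verify this from the command line rather than the portal, the
following sketch uses the Azure DevOps CLI extension; the organization URL is a
placeholder for your own.

Azure CLI

# One-time setup: the Azure DevOps extension for the Azure CLI
az extension add --name azure-devops

# Print the Deployment_Configuration_Path value from the SDAF-General variable group
az pipelines variable-group list \
    --organization https://dev.azure.com/<organization> \
    --project SAP-Deployment \
    --query "[?name=='SDAF-General'].variables.Deployment_Configuration_Path.value" \
    --output tsv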

Run the pipeline by selecting the Deploy control plane pipeline from the Pipelines
section. Enter MGMT-WEEU-DEP00-INFRASTRUCTURE as the deployer configuration name and
MGMT-WEEU-SAP_LIBRARY as the SAP library configuration name.
You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the control plane details on the Extensions tab.

Deploy the workload zone


The deployment uses the configuration defined in the Terraform variable file located in
the samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE folder.

Run the pipeline by selecting the Deploy workload zone pipeline from the Pipelines
section. Enter DEV-WEEU-SAP01-INFRASTRUCTURE as the workload zone configuration name
and MGMT as the deployer environment name.

You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the workload zone details on the Extensions tab.
Deploy the SAP system
The deployment uses the configuration defined in the Terraform variable file located in
the samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00 folder.

Run the pipeline by selecting the SAP system deployment pipeline from the Pipelines
section. Enter DEV-WEEU-SAP01-X00 as the SAP system configuration name.

You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the SAP system details on the Extensions tab.

Download the SAP software


Run the pipeline by selecting the SAP software acquisition pipeline from the Pipelines
section. Enter S41909SPS03_v0011ms as the Bill of Materials name, MGMT as the control
plane environment name, and WEEU as the control plane (SAP library) location code.

You can track the progress in the Azure DevOps portal.

Run the configuration and SAP installation pipeline

Run the pipeline by selecting the Configuration and SAP installation pipeline from the
Pipelines section. Enter DEV-WEEU-SAP01-X00 as the SAP system configuration name and
S41909SPS03_v0011ms as the Bill of Materials name.

Choose the playbooks to run.


You can track the progress in the Azure DevOps Services portal.

Run the repository update pipeline


Run the pipeline by selecting the Repository updater pipeline from the Pipelines
section. Enter https://github.com/Azure/sap-automation.git as the source repository
and main as the source branch to update from.

Only select Force the update if the update fails.

Run the removal pipeline


Run the pipeline by selecting the Deployment removal pipeline from the Pipelines
section.

SAP system removal


Enter DEV-WEEU-SAP01-X00 as the SAP system configuration name.

SAP workload zone removal


Enter DEV-WEEU-SAP01-INFRASTRUCTURE as the SAP workload zone configuration name.

Control plane removal


Enter MGMT-WEEU-DEP00-INFRASTRUCTURE as the deployer configuration name and enter
MGMT-WEEU-SAP_LIBRARY as the SAP library configuration name.

Next step
Configure control plane
Configure new and existing deployments
Article • 09/03/2023

You can use SAP Deployment Automation Framework in both new and existing
deployment scenarios.

In new deployment scenarios, the automation framework doesn't use existing Azure
infrastructure. The deployment process creates the virtual networks, subnets, key vaults,
and more.

In existing deployment scenarios, the automation framework uses existing Azure


infrastructure. For example, the deployment uses existing virtual networks.

New deployment scenarios


The following examples show new deployment scenarios that create new resources.

) Important

Modify all example configurations as necessary for your scenario.

New deployment
In this scenario, the automation framework creates all Azure components and uses the
deployer. This example deployment contains:

Two environments in the West Europe Azure region:


Management ( MGMT ) hosts the control plane.
Development ( DEV ) hosts the development environment.
A deployer
SAP library
SAP system ( SID X00 ) with:
Two application servers.
A highly available central services instance.
A web dispatcher.
A single-node HANA back end that uses SUSE 12 SP5.
| Component | Parameter file location |
| --- | --- |
| Deployer | DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars |
| Library | LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars |
| Workload zone | LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE/DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars |
| System | SYSTEM/DEV-WEEU-SAP01-X00/DEV-WEEU-SAP01-X00.tfvars |

To test this scenario:

Clone the SAP Deployment Automation Framework repository and copy the sample
files to your root folder for parameter files:

Bash

cd ~/Azure_SAP_Automated_Deployment
mkdir -p WORKSPACES/DEPLOYER
cp sap-automation/samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE WORKSPACES/DEPLOYER/. -r

mkdir -p WORKSPACES/LIBRARY
cp sap-automation/samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY WORKSPACES/LIBRARY/. -r

mkdir -p WORKSPACES/LANDSCAPE
cp sap-automation/samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE WORKSPACES/LANDSCAPE/. -r

mkdir -p WORKSPACES/SYSTEM
cp sap-automation/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00 WORKSPACES/SYSTEM/. -r
cd WORKSPACES

Prepare the control plane by installing the deployer and library. Be sure to replace the
sample values with your service principal's information.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="${subscriptionID}"

$DEPLOYMENT_REPO_PATH/deploy/scripts/prepare_region.sh \
    --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve

You can also use PowerShell to do the deployment.

PowerShell

Import-Module "SAPDeploymentUtilities.psd1"

$Subscription="<subscriptionID>"
$SPN_id="<appID>"
$SPN_password="<password>"
$Tenant_id="<tenant>"

New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP00-INFRASTRUCTURE\MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars `
    -LibraryParameterfile .\LIBRARY\MGMT-WEEU-SAP_LIBRARY\MGMT-WEEU-SAP_LIBRARY.tfvars `
    -Subscription $Subscription `
    -SPN_id $SPN_id `
    -SPN_password $SPN_password `
    -Tenant_id $Tenant_id

Deploy the workload zone by running either the Bash or PowerShell script.

Be sure to replace the sample credentials with your service principal's information. You
can use the same service principal credentials that you used in the control plane
deployment. For production deployments, we recommend using different service
principals per workload zone.

Bash

subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE

${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameterfile DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars \
    --deployer_environment 'MGMT' \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve

PowerShell

cd \Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-WEEU-SAP01-INFRASTRUCTURE

$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"

New-SAPWorkloadZone -Parameterfile .\DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars `
    -DeployerEnvironment MGMT `
    -Subscription $subscription `
    -SPN_id $appId `
    -SPN_password $spn_secret `
    -Tenant_id $tenant_id

Deploy the SAP system. Run either the Bash or PowerShell command.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00

${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh --parameterfile DEV-WEEU-SAP01-X00.tfvars --type sap_system --auto-approve

PowerShell

Import-Module "SAPDeploymentUtilities.psd1"
cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X00

New-SAPSystem -Parameterfile .\DEV-WEEU-SAP01-X00.tfvars `
    -Type sap_system

Existing example scenarios
The following examples show existing scenarios that use existing Azure resources.

) Important

Modify all example configurations as necessary for your scenario. Update all the
<arm_resource_id> placeholders.

Existing environment scenario


In this scenario, the automation framework uses existing Azure components and uses
the deployer. These existing components include resource groups, storage accounts,
virtual networks, subnets, and network security groups. This example deployment
contains:

Two environments in the East US 2 region


Management ( MGMT ) hosts the control plane.
Quality assurance ( QA ) hosts the SAP QA environment.
A deployer
The SAP library
An SAP system ( SID X01 ) with:
Two application servers.
An HA central services instance.
A database that uses a Microsoft SQL server back-end running Windows Server
2016.
A web dispatcher.

| Component | Parameter file location |
| --- | --- |
| Deployer | DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE/MGMT-EUS2-DEP01-INFRASTRUCTURE.tfvars |
| Library | LIBRARY/MGMT-EUS2-SAP_LIBRARY/MGMT-EUS2-SAP_LIBRARY.tfvars |
| Workload zone | LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE/QA-EUS2-SAP03-INFRASTRUCTURE.tfvars |
| System | SYSTEM/QA-EUS2-SAP03-X01/QA-EUS2-SAP03-X01.tfvars |

Copy the sample files to your root folder for parameter files:

Bash
cd ~/Azure_SAP_Automated_Deployment
mkdir -p WORKSPACES/DEPLOYER
cp sap-automation/samples/WORKSPACES/DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE WORKSPACES/DEPLOYER/. -r

mkdir -p WORKSPACES/LIBRARY
cp sap-automation/samples/WORKSPACES/LIBRARY/MGMT-EUS2-SAP_LIBRARY WORKSPACES/LIBRARY/. -r

mkdir -p WORKSPACES/LANDSCAPE
cp sap-automation/samples/WORKSPACES/LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE WORKSPACES/LANDSCAPE/. -r

mkdir -p WORKSPACES/SYSTEM
cp sap-automation/samples/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01 WORKSPACES/SYSTEM/. -r
cd WORKSPACES

The sample tfvars file has <azure_resource_id> placeholders. You need to replace
them with the actual Azure resource IDs for resource groups, virtual networks, and
subnets.
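One way to look up these resource IDs is with the Azure CLI. A minimal sketch; the
resource group, virtual network, and subnet names are placeholders for your existing
resources.

Azure CLI

# Resource ID of an existing resource group
az group show --name <resourceGroupName> --query id --output tsv

# Resource ID of an existing virtual network
az network vnet show --resource-group <resourceGroupName> \
    --name <vnetName> --query id --output tsv

# Resource ID of an existing subnet
az network vnet subnet show --resource-group <resourceGroupName> \
    --vnet-name <vnetName> --name <subnetName> --query id --output tsv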

Deploy the control plane by installing the deployer and SAP library. Run either the Bash
or PowerShell command. Be sure to replace the sample credentials with your service
principal's information.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="${subscriptionID}"

$DEPLOYMENT_REPO_PATH/deploy/scripts/prepare_region.sh \
    --deployer_parameter_file DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE/MGMT-EUS2-DEP01-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/MGMT-EUS2-SAP_LIBRARY/MGMT-EUS2-SAP_LIBRARY.tfvars \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve

PowerShell

cd \Azure_SAP_Automated_Deployment\WORKSPACES

$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"

New-SAPAutomationRegion `
    -DeployerParameterfile .\DEPLOYER\MGMT-EUS2-DEP01-INFRASTRUCTURE\MGMT-EUS2-DEP01-INFRASTRUCTURE.tfvars `
    -LibraryParameterfile .\LIBRARY\MGMT-EUS2-SAP_LIBRARY\MGMT-EUS2-SAP_LIBRARY.tfvars `
    -Subscription $subscription `
    -SPN_id $appId `
    -SPN_password $spn_secret `
    -Tenant_id $tenant_id `
    -Silent

Deploy the workload zone by running either the Bash or PowerShell script.

Be sure to replace the sample credentials with your service principal's information. You
can use the same service principal credentials that you used in the control plane
deployment. For production deployments, we recommend using different service
principals per workload zone.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE

subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>

${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameterfile QA-EUS2-SAP03-INFRASTRUCTURE.tfvars \
    --deployer_environment MGMT \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve

PowerShell

cd \Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\QA-EUS2-SAP03-INFRASTRUCTURE

$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"

New-SAPWorkloadZone -Parameterfile .\QA-EUS2-SAP03-INFRASTRUCTURE.tfvars `
    -DeployerEnvironment MGMT `
    -Subscription $subscription `
    -SPN_id $appId `
    -SPN_password $spn_secret `
    -Tenant_id $tenant_id

Deploy the SAP system in the QA environment. Run either the Bash or PowerShell
command.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01

${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh --parameterfile QA-EUS2-SAP03-X01.tfvars --type sap_system --auto-approve

PowerShell

cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\QA-EUS2-SAP03-X01

New-SAPSystem -Parameterfile .\QA-EUS2-SAP03-X01.tfvars -Type sap_system

Next step
Tutorial: Enterprise scale for SAP Deployment Automation Framework
Configure Azure Monitor for SAP with SAP Deployment Automation Framework
Article • 02/27/2024

Monitoring the performance and availability of SAP systems on Azure is simplified
through Azure Monitor for SAP. It collects and analyzes metrics and logs from your
applications, databases, operating systems, and Azure resources. Customers use Azure
Monitor for SAP to visualize and troubleshoot issues, set alerts and notifications, and
optimize SAP workloads on Azure.

By integrating Azure Monitor for SAP and SAP Deployment Automation Framework, you
can achieve a faster, easier, and more reliable deployment and operation of your SAP
systems on Azure. You can use the automation framework to provision and configure
the SAP systems, and Azure Monitor for SAP to monitor and optimize the performance
and availability of those SAP systems.

This integration with SAP on Azure Deployment Automation Framework enables you to
reduce the complexity and deployment cost of running your SAP environments on
Azure, by helping to automate the monitoring of different components of an SAP
landscape.

Overview
As described in the overview document, the automation framework has two main
components:

Deployment infrastructure (control plane, typically deployed in the hub)
SAP infrastructure (SAP workload zone, typically deployed in a spoke)

Deployment of Azure Monitor for SAP (AMS) and the providers can be automated from
the SAP Deployment Automation Framework (SDAF) to simplify the monitoring process.
In this architecture, one Azure Monitor for SAP resource is deployed in each workload
zone, which represents the environment. This resource is responsible for monitoring the
performance and availability of different components of the SAP systems in that
environment.

To monitor different components of each SAP system, there are corresponding providers
and all these providers are deployed in the Azure Monitor for SAP resource of that
environment. This setup allows for efficient monitoring and management of the SAP
systems, as all the providers for a particular system are located in the same Azure
Monitor for SAP resource. The automation framework automates the following steps:

Creates an Azure Monitor for SAP resource in the workload zone.
Performs the prerequisite steps required to enable monitoring.
Creates providers for each component of the SAP landscape in the Azure Monitor for
SAP resource that was created.

7 Note

The automation framework currently supports deployment automation of the Azure
Monitor for SAP resource, the OS (Linux) provider to monitor the Azure VMs, and the
HA Pacemaker cluster provider to monitor the high-availability clusters in the SAP
system.

The key components of the Azure Monitor for SAP resource created in the workload
zone resource group include:

Azure Monitor for SAP resource
A managed resource group within the Azure Monitor for SAP resource that includes:
Azure Functions resource
Azure key vault
Log Analytics workspace (optional)
Storage account

Workload zone configuration for Azure Monitor for SAP resource

The following example shows the parameters that are required for the deployment of
the Azure Monitor for SAP resource in the workload zone. Optionally, you can choose
to use an existing Log Analytics workspace that exists in the same subscription as
your workload zone.

Terraform

#########################################################################################
# AMS Subnet variables                                                                  #
#########################################################################################

# If defined, these parameters control the subnet name and the subnet prefix.
# ams_subnet_name is an optional parameter and should only be used if the default naming is not acceptable.
# ams_subnet_name = ""

# ams_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used.
ams_subnet_address_prefix = "10.242.25.0/24"

# ams_subnet_arm_id is an optional parameter that, if provided, specifies the Azure resource identifier for the existing subnet to use.
# ams_subnet_arm_id = ""

# ams_subnet_nsg_name is an optional parameter and should only be used if the default naming is not acceptable for the network security group name.
# ams_subnet_nsg_name = ""

# ams_subnet_nsg_arm_id is an optional parameter that, if provided, specifies the Azure resource identifier for the existing network security group to use.
# ams_subnet_nsg_arm_id = ""

#########################################################################################
# AMS instance variables                                                                #
#########################################################################################

# If defined, these parameters control the AMS instance (Azure Monitor for SAP).
# create_ams_instance is an optional parameter and should be set to true if the AMS instance is to be created.
create_ams_instance = true

# ams_instance_name is an optional parameter and should only be used if the default naming is not acceptable.
ams_instance_name = "AMS-RESOURCE"

# ams_laws_arm_id is an optional parameter to use an existing Log Analytics workspace for the AMS instance.
ams_laws_arm_id = "/subscriptions/0000000-000000-0000000-0000000000/resourcegroups/rg-name/providers/microsoft.operationalinsights/workspaces/workspacename"

System configuration for AMS providers

The following example shows the parameters that are required for the automation of
provider prerequisites and provider creation in Azure Monitor for SAP.

Terraform

# enable_os_monitoring is an optional parameter and should be set to true if you want to monitor the Azure VMs of your SAP system.
enable_os_monitoring = true

# enable_ha_monitoring is an optional parameter and should be set to true if you want to monitor the HA clusters of your SAP system.
enable_ha_monitoring = true
Download SAP software
Article • 09/03/2023

You need a copy of the SAP software before you can use SAP Deployment Automation
Framework. Prepare your Azure environment so that you can put the SAP media in your
storage account. Then, download the SAP software by using Ansible playbooks.

Prerequisites
An Azure subscription. If you don't have an Azure subscription, you can create a
free account .
An SAP user account (SAP-User or S-User account) with software download
privileges.

Configure a key vault


First, configure your deployer key vault secrets. For this example configuration, the
resource group is DEMO-EUS2-DEP00-INFRASTRUCTURE or DEMO-SCUS-DEP00-INFRASTRUCTURE .

1. Sign in to the Azure CLI with the account you want to use.

Azure CLI

az login

2. Add a secret with the username for your SAP user account. Replace <vaultName> with
the name of your deployer key vault. Also replace <sap-username> with your SAP
username.

Azure CLI

export key_vault=<vaultName>
sap_username=<sap-username>

az keyvault secret set --name "S-Username" --vault-name $key_vault --value "${sap_username}";

3. Add a secret with the password for your SAP user account. Replace <vaultName> with
the name of your deployer key vault. Also replace <sap-password> with your SAP
password. Use single quotation marks so that special characters in the password
aren't interpreted by the shell.

Azure CLI

sap_user_password='<sap-password>'
az keyvault secret set --name "S-Password" --vault-name "${key_vault}" --value "${sap_user_password}";

4. Two other secrets are needed in this step for the storage account. The automation
framework automatically sets up sapbits . It's good practice to verify that these
secrets exist in your deployer key vault:

text

sapbits-access-key
sapbits-location-base-path
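A quick way to check is to list the secret names in the vault; this sketch assumes
the key_vault variable exported in step 2.

Azure CLI

# Lists any sapbits-related secrets; both names should appear
az keyvault secret list --vault-name "${key_vault}" \
    --query "[].name" --output tsv | grep sapbits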

Download SAP software


Next, configure your SAP parameters file for the download process. Then, download the
SAP software by using Ansible playbooks.

Configure the parameters file


To configure the SAP parameters file:

1. Create a new directory called BOMS .

Bash

mkdir -p ~/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS; cd $_

2. Create the SAP parameters YAML file.

Bash

cat <<EOF > sap-parameters.yaml
---
bom_base_name: S41909SPS03_v0010ms
kv_name: Name of your Management/Control Plane keyvault
..
EOF

3. Open sap-parameters.yaml in an editor.


Bash

vi sap-parameters.yaml

4. Update the following parameters:

a. Change the value of bom_base_name to S41909SPS03_v0010ms .

b. Change the value of kv_name to the name of the deployer key vault.

c. (If needed) Change the value of secret_prefix to match the prefix in your
environment (for example, DEV-WEEU-SAP ).

Run the Ansible playbooks


You're ready to run the Ansible playbooks. One way you can run the playbooks is to use
the download menu.

1. Run the download menu script:

Bash

~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh

2. Select the playbook to run. For example:

text

1) BoM Downloader
2) Quit
Please select playbook:

Another option is to run the Ansible playbooks by using the ansible-playbook command.

Bash

ansible-playbook \
    --user azureadm \
    --extra-vars="@sap-parameters.yaml" \
    ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml

Next step
Deploy the SAP infrastructure
Acquire media for BOM creation
Article • 02/10/2023

The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM). To
create your BOM, you have to locate and download relevant SAP installation media.
Then, you need to upload these media files to your Azure storage account.

7 Note

This guide covers advanced deployment topics. For a basic explanation of how to
deploy the automation framework, see the get started guide instead.

This guide is for configurations that use either the SAP Application (DB) or HANA
databases.

Prerequisites
An SAP account with permissions to download the SAP software and access the
Maintenance Planner.
An installation of the SAP download manager on your computer.
Information about your SAP system:
SAP account username and password. The SAP account can't be linked to an
SAP Universal ID.
The SAP system product to deploy (such as S/4HANA)
The SAP System Identifier (SAP SID)
Any language pack requirements
The operating system (OS) to use in the application infrastructure
An Azure subscription. If you don't already have an Azure subscription, create a
free account .

Acquire media
To prepare for downloading the SAP installation media:

1. On your computer, create a unique directory for your stack SAP downloads. For
example, ~/Downloads/S4HANA_1909_SP2/ .

2. Sign in to SAP ONE Support Launchpad .

3. Clear your download basket.


a. Go to Software Downloads.

b. Select Download Basket.

c. Select all the items in the basket.

d. Select the X to remove all items from the basket.

4. Add the utility SAPCAR to your download basket.

a. On the search bar, make sure the search type is set to Downloads.

b. Enter SAPCAR in the search bar and select Search.

c. In the table Items Available to Download, select the row for SAPCAR with
Maintenance Software Component. This step filters available downloads for the
latest version of the utility.

d. Make sure the drop-down menu for the table shows the correct OS type. For
example, LINUX ON X86_64 64BIT .

e. Select the checkbox next to the filename of the SAPCAR executable. For
example, SAPCAR_1320-80000935.EXE .

f. Select the shopping cart icon to add your selection to the download basket.

5. Sign in to the Maintenance Planner .

6. Design your SAP system. For example, if you're using S/4HANA:

a. Select the plan for SAP S/4HANA.

b. Optionally, change the Maintenance Plan name.

c. Select Install New S4HANA System.

d. Select Next.

e. For Install a New System, enter the SAP SID you're using.

f. For Target Version, select your target SAP version. For example, SAP S/4HANA
2020.

g. For Target Stack, select your target stack. For example, Initial Shipment Stack.

h. If necessary, select your Target Product Instances.

i. Select Next.
7. Design your codeployment.

a. Select Co-Deployed with Backend.

b. For Target Version, select your target version for codeployment. For example,
SAP FIORI FOR SAP S/4HANA 2020.

c. For Target Stack, select your target stack for codeployment. For example, Initial
Shipment Stack.

d. Select Next.

8. Select Continue Planning. If you're using a new system, select Next. If you're using
an existing system, make the following changes:

a. For OS/DB dependent files, select Linux on x86_64 64bit.

b. Select Confirm Selection.

c. Select Next.

9. Optionally, under Select Stack Independent Files, configure settings for non-ABAP
databases. You can choose to expand the database and deselect non-required
language files.

10. Select Next.

11. Download stack XML files to the stack download directory you created earlier.

a. Select Push to Download Basket.

b. Select Additional Downloads.

c. Select Download Stack Text File.

d. Select Download PDF.

e. Select Export to Excel.

f. Go to your download basket again in the SAP Launchpad. You might need to
refresh the page to see your new selections.

g. Select the T icon to download a file with the URLs for your download basket.

Get download basket manifest


) Important

Only follow these steps if you want to run the scripted BOM generation. You must
perform these actions before you run the SAP Download Manager. If you don't
want to run the scripted BOM generation, skip to the next section.

To get your SAP Download Basket manifest JSON file ( DownloadBasket.json ):

1. Open the Postman utility.

2. Add a new request by selecting the plus sign (+) button in the workspace tab. A
new page opens with your request.

3. On the Params tab, set the request type to GET .

4. For the request URL, enter
https://tech.support.sap.com:443/odata/svt/swdcuisrv/DownloadContentSet?_MODE=BASKET_CONTENT&_VERSION=3.1.2&$format=json .

5. Select the Authorization tab.

6. For Type, select Basic Auth.

7. For Username, enter your SAP username.

8. For Password, enter your SAP password.

9. Select the Headers tab.

10. Uncheck the Accept-Encoding and User-Agent check boxes.

11. Select the Send button.

12. On the Body tab, make sure to select the Raw view.

13. Copy the raw JSON response body. Save the response in your stack download
directory.
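If you prefer the command line to Postman, the same OData request can be made with
curl. A minimal sketch; the username and password placeholders are your SAP
credentials, and the URL is single-quoted so the shell doesn't expand $format.

Bash

curl --user '<sap-username>:<sap-password>' \
    'https://tech.support.sap.com:443/odata/svt/swdcuisrv/DownloadContentSet?_MODE=BASKET_CONTENT&_VERSION=3.1.2&$format=json' \
    --output DownloadBasket.json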

Download media
To download the SAP installation media:

1. On your computer, run the SAP Download Manager.

2. Sign in to the SAP Download Manager.


3. Access your SAP Download Basket.

4. Set your download directory to the stack download directory that you created. For
example, ~/Downloads/S4HANA_1909_SP2/ .

5. Download all files from your download basket into this directory.

7 Note

The text file that contains your SAP download URLs is always
myDownloadBasketFiles.txt . However, this file is specific to the application or
database. You should keep this file with your other downloads for this particular
section for use in later sections.

Upload media
To upload the SAP media and stack files to your Azure storage account:

1. Sign in to the Azure portal .

2. Under Azure services, select Resource groups. Or, enter resource groups in the
search bar.

3. Select the resource group for your SAP Library.

4. On the resource group page, select the saplib storage account in the Resources
table.

5. On the storage account page's menu, select Containers under Data storage.

6. Select the sapbits container.

7. On the container page, upload your archives and tools.

a. Select the Upload button.

b. Select Select a file.

c. Navigate to the directory where you downloaded the SAP media previously.

d. Select all the archive files. These file names are similar to *.SAR , *.RAR , *.ZIP ,
and SAPCAR*.EXE .

e. Select Advanced to show advanced options.


f. For Upload Directory, enter archives .

8. Upload your stack files.

a. Select the Upload button.

b. Select Select a file.

c. Navigate to the download directory that you created in the previous section.

d. Select all your stack files. These file names are similar to MP_*.(xml|xls|pdf|txt) .

e. Select Advanced to show advanced options.

f. For Upload Directory, enter boms/<Stack_Version>/stackfiles where <Stack_Version>
is a combination of your product information. For example, S4HANA_2020_ISS_v001
indicates the product type is S4HANA , the product release is 2020 , the service pack
is ISS for the initial software shipment, and the stack is v001 .
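As an alternative to uploading through the portal, you can batch-upload from the
command line. A sketch with placeholder storage account, stack version, and local
paths; adjust the file patterns to match your media.

Azure CLI

# Upload the archives (for example, the SAR files) to the archives folder
az storage blob upload-batch --account-name <storageAccountName> --auth-mode login \
    --destination sapbits --destination-path archives \
    --source ~/Downloads/S4HANA_1909_SP2 --pattern "*.SAR"

# Upload the stack files to the stackfiles folder for this stack version
az storage blob upload-batch --account-name <storageAccountName> --auth-mode login \
    --destination sapbits --destination-path boms/<Stack_Version>/stackfiles \
    --source ~/Downloads/S4HANA_1909_SP2 --pattern "MP_*"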

Next steps
Prepare BOM
Prepare SAP BOM
Article • 05/07/2023

The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM).
The BOM helps configure your SAP systems.

The automation framework's GitHub repository contains a set of Sample BOMs that
you can use to get started. It is also possible to create BOMs for other SAP Applications
and databases.

If you want to generate a BOM that includes permalinks, follow the steps for creating
this type of BOM.

7 Note

This guide covers advanced deployment topics. For a basic explanation of how to
deploy the automation framework, see the get started guide instead.

Prerequisites
Get, download, and prepare your SAP installation media and related files if you
haven't already done so.
SAP Application (DB) or HANA media in your Azure storage account.
A YAML editor for working with the BOM file.
Application installation templates for:
SAP Central Services (SCS)
The SAP Primary Application Server (PAS)
The SAP Additional Application Server (AAS)
Downloads of necessary stack files to the folder you created for acquiring SAP
media. For more information, see the basic BOM preparation how-to guide.
A copy of your SAP Download Basket manifest ( DownloadBasket.json ), downloaded
to the folder you created for acquiring SAP media.
An installation of the Postman utility .
An Azure subscription. If you don't already have an Azure subscription, create a
free account .
An SAP account with permissions to work with the database you want to use.
A system that runs Linux-type commands for validating the BOM. Install the
yamllint and ansible-lint commands on the system, as shown in the sketch after
this list.
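
For example, on a Debian or Ubuntu based system, you might install the two linters as
follows. This is a minimal sketch; package names and installation methods vary by
distribution, so adjust for your environment.

Bash

# Install the linters used by the BOM validation (assumes Python 3 and pip).
sudo apt-get update
sudo apt-get install -y python3-pip
pip3 install yamllint ansible-lint

# Verify that both commands are on the PATH.
yamllint --version
ansible-lint --version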
Scripted creation process
This process automates the same steps as the manual BOM creation process. Review the
script limitations before using this process.

1. Navigate to your stack files folder.

Bash

cd stackfiles

2. Run the BOM generation script. Replace the example path with the correct path to
your utilities folder. For example:

Bash

~/Azure_SAP_Automated_Deployment/deploy/scripts/generate_bom.sh > ../bom.yml

3. For the product parameter ( product ), enter the SAP product name. For example,
SAP_S4HANA_1809_SP4 . If you don't enter a value, the script attempts to determine
the name from the stack XML file.

4. Open the generated bom.yml file for review.

5. Review the templates section ( templates ). Make sure the file and
override_target_location values are correct. If necessary, edit those lines or
comment them out. For example:

yml

templates:
#  - name: "S4HANA_2020_ISS_v001 ini file"
#    file: S4HANA_2020_ISS_v001.inifile.params
#    override_target_location: "{{ target_media_location }}/config"

6. Review the stack files section ( stackfiles ). Make sure the item names and files are
correct. If necessary, edit those lines.

Script limitations
The scripted BOM creation process has the following limitations.
The scripting has a hard-coded dependency on HANA2. Edit your BOM file manually to
match the required dependency name. For example:

yml

dependencies:
  - name: "HANA2"

There are no defaults for the media parameters override_target_filename ,
override_target_location , and version . Edit your BOM file manually to set these
parameters. For example:

yml

- name: SAPCAR
  archive: SAPCAR_1320-80000935.EXE
  override_target_filename: SAPCAR.EXE

- name: "SWPM20SP07"
  archive: "SWPM20SP07_2-80003424.SAR"
  override_target_filename: SWPM.SAR
  sapurl: "https://softwaredownloads.sap.com/file/0020000001812632020"

The script only generates entries for media files that the SAP Maintenance Planner
identifies, because it processes only the stack XML file. If you add any files to your
download basket separately, such as through SAP Launchpad, you must add those files
to the BOM manually.

Manual creation process


You can create your BOM through the following manual process. Another option is to
use the scripted creation process to do the same steps.

1. Open the downloads folder you created for acquiring SAP media.

2. Create an empty YAML file named bom.yml .

3. Open bom.yml in an editor.

4. Add a BOM header with names for the build and target. The name value must be
the same as the BOM folder name in your storage account. For example:

yml
name: 'S4HANA_2020_ISS_v001'
target: 'ABAP PLATFORM 2020'

5. Add a defaults section with the target location. Use the path to the folder on the
target server where you want to copy installation files. Typically, use {{
target_media_location }} as follows:

yml

defaults:
  target_location: "{{ target_media_location }}/download_basket"

6. Add a product identifiers section. You populate these values later as part of the
template preparation. For example:

yml

product_ids:
  scs:
  db:
  pas:
  aas:
  web:

7. Add a materials section to specify the list of required materials. Add any
dependencies on other BOMs in this section. For example:

yml

materials:
  dependencies:
    - name: HANA2

8. Get a list of media to include in your BOM.

a. Open your download basket spreadsheet. This file renders as XML.

b. Format the XML content to be human readable, if necessary.

c. For each item in the download basket, note the String and Number data. The
String data provides the file name (for example, igshelper_17-10010245.sar )

and a friendly description (for example, SAP IGS Fonts and Textures ). You'll
record the Number data after each entry in your BOM.
9. Add the list of media to bom.yml . The order of these items doesn't matter;
however, you might want to group related items together for readability. Add
SAPCAR separately, even though your SAP download basket contains this utility. For
example:

yml

media:
  - name: SAPCAR
    archive: SAPCAR_1320-80000935.EXE

  - name: "SAP IGS Fonts and Textures"
    archive: "igshelper_17-10010245.sar"
    # 61489

<...>

10. Optionally, if you need to override the target media location, add the parameter
override_target_location to a media item. For example,
override_target_location: "{{ target_media_location }}/config" .

11. Add a blank templates section.

yml

templates:

12. Create a stack files section. For example:

yml

stackfiles:
  - name: Download Basket JSON Manifest
    file: downloadbasket.json

  - name: Download Basket Spreadsheet
    file: MP_Excel_2001017452_20201030_SWC.xls

13. Save your changes to bom.yml .

Permalinks
You can generate a basic, functional BOM automatically. However, by default the BOM
doesn't include permanent URLs (permalinks) to the SAP media. If you want to create
permalinks, you need to do more steps before you acquire the SAP media.

Note

Manual generation of a full SAP BOM with permalinks takes about twice as long as
preparing a basic BOM manually.

To generate a BOM with permalinks:

1. Open DownloadBasket.json in your editor.

2. For each result, note the contents of the Value line. For example:

JSON

"Value": "0020000000703122018|SP_B|SAP IGS Fonts and


Textures|61489|1|20201023150931|0"

3. Copy down the first and fourth values separated by vertical bars.

a. The first value is the file number. For example, 0020000000703122018 .

b. The fourth value is the number you'll use to match with your media list. For
example, 61489 .

c. Optionally, copy down the second value, which denotes the file type. For
example, SP_B for kernel binary files, SPAT for non-kernel binary files, and CD
for database exports.

4. Use the fourth value as a key to match your download basket to your media list.
Match the values (for example, 61489 ) with the values you added as comments for
the media items (for example, # 61489 ).

5. For each matching entry in bom.yml , add a new value for the SAP URL. For the URL,
use https://softwaredownloads.sap.com/file/ plus the first value for that item
(for example, 0020000000703122018 ). For example:

yml

- name: "SAP IGS Fonts and Textures"
  archive: "igshelper_17-10010245.sar"
  sapurl: "https://softwaredownloads.sap.com/file/0020000000703122018"

Example BOM file


The following sample is a small part of an example BOM file for S/4HANA 2020
( S4HANA_2020_ISS_v001 ). You can find multiple complete, usable BOM files in the
GitHub repository.

yml

---

name: 'S4HANA_2020_ISS_v001'
target: 'ABAP PLATFORM 2020'

defaults:
  target_location: "{{ target_media_location }}/download_basket"

product_ids:
  scs:
  db:
  pas:
  aas:
  web:

materials:
  dependencies:
    - name: HANA2

  media:
    - name: SAPCAR
      archive: SAPCAR_1320-80000935.EXE

    - name: SWPM
      archive: SWPM20SP06_6-80003424.SAR

    - name: SAP IGS HELPER
      archive: igshelper_17-10010245.sar

    - name: SAP HR 6.08
      archive: SAP_HR608.SAR

    - name: S4COREOP 104
      archive: S4COREOP104.SAR

  templates:
    - name: "S4HANA_2020_ISS_v001 ini file"
      file: S4HANA_2020_ISS_v001.inifile.params
      override_target_location: "{{ target_media_location }}/config"

  stackfiles:
    - name: Download Basket JSON Manifest
      file: downloadbasket.json
      override_target_location: "{{ target_media_location }}/config"

    - name: Download Basket Spreadsheet
      file: MP_Excel_2001017452_20201030_SWC.xls
      override_target_location: "{{ target_media_location }}/config"

    - name: Download Basket Plan doc
      file: MP_Plan_2001017452_20201030_.pdf
      override_target_location: "{{ target_media_location }}/config"

    - name: Download Basket Stack text
      file: MP_Stack_2001017452_20201030_.txt
      override_target_location: "{{ target_media_location }}/config"

    - name: Download Basket Stack XML
      file: MP_Stack_2001017452_20201030_.xml
      override_target_location: "{{ target_media_location }}/config"

    - name: Download Basket permalinks
      file: myDownloadBasketFiles.txt
      override_target_location: "{{ target_media_location }}/config"

Validate BOM
You can validate your BOM structure from any OS that runs Linux-type commands. For
Windows, use Windows Subsystem for Linux (WSL). Another option is to run the
validation from your deployer if there's a copy of the BOM file there.

1. Run the validation script check_bom.sh from the directory containing your BOM.
For example:

Bash

~/Azure_SAP_Automated_Deployment/deploy/scripts/check_bom.sh bom.yml

2. Review the output.

Successful validation
A successful validation shows the following output. (You installed the yamllint and
ansible-lint commands as part of the prerequisites.)

Output

... yamllint [ok]


... ansible-lint [ok]
... bom structure [ok]
Unsuccessful validation
An unsuccessful validation contains error information. For example:

Output

../documentation/ansible/system-design-
deployment/examples/S4HANA_2020_ISS_v001/bom_with_errors.yml
178:16 error too many spaces after colon (colons)
179:16 error too many spaces after colon (colons)
180:16 error too many spaces after colon (colons)

... yamllint [errors]


... ansible-lint [ok]
- Expected to find key 'defaults' in 'bom' (Check name:
S4HANA_2020_ISS_v001)
- Unexpected key 'default in 'bom' (Check name: S4HANA_2020_ISS_v001)
- Unexpected key 'overide_target_location in 'bom.materials.stackfiles'
(Check name: Download Basket Stack text)
... bom structure [errors]

Upload your BOM


To use the BOM with permalinks:

1. Validate the BOM.

2. Sign in to the Azure portal .

3. Under Azure services, select Resource groups. Or, enter resource groups in the
search bar.

4. Select the resource group for your SAP Library.

5. On the resource group page, select the storage account saplib in the Resources
table.

6. On the storage account page's menu, select Containers under Data storage.

7. Select the sapbits container.

8. On the container page, upload your BOM file.

a. Select the Upload button.

b. Select Select a file.


c. Navigate to the download directory that you created previously.
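
Alternatively, a hedged sketch that uploads the BOM with the Azure CLI instead of the
portal. It assumes that the az CLI is installed and that you're signed in, and that the
blob path follows the boms/<BOM name>/ convention used earlier in this guide.

Bash

az storage blob upload \
    --account-name <storage-account> \
    --auth-mode login \
    --container-name sapbits \
    --name boms/S4HANA_2020_ISS_v001/bom.yml \
    --file bom.yml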

Next steps
How to generate SAP Application BOM
Generate SAP Application templates for
automation
Article • 02/10/2023

The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM) to
define the SAP Application. Before you can deploy a system using a custom BOM, you
need to also create the templates for the ini-files used in the unattended SAP
installation. This guide covers how to create the application templates for an SAP
S/4HANA deployment. The process is the same for the other SAP applications.

Prerequisites
Get, download, and prepare your SAP installation media and related files if you
haven't already done so. Make sure to have the name of the SAPCAR utility file
that you downloaded available.
Prepare your BOM if you haven't already done so. Make sure to have the BOM file
that you created available.
An Azure subscription. If you don't already have an Azure subscription, create a
free account .
An SAP account with permissions to work with the database you want to use.
Optionally, create a virtual machine (VM) within Azure to use for transferring SAP
media from your storage account. This method improves the transfer speed. Make
sure you have connectivity between your VM and the target SAP VM. For example,
check that your SSH keys are in place.

Check media and tools


Before you generate an SAP Application template, make sure you have all required
installation media and tools.

1. Sign in to your target VM as the root user.

2. Change the root user password to a known value. You'll use this password later to
connect to the SAP Software Provisioning Manager (SWPM).

3. Make and change to a temporary directory.

Bash

mkdir /tmp/workdir; cd $_
4. Make sure there's a temporary directory for the SAP Application template.

Bash

mkdir /tmp/app_template/

5. Change the permissions for the SAPCAR utility to make this file executable. Replace
<SAPCAR>.EXE with the name of the file you downloaded. For example,
SAPCAR_1311-80000935.EXE .

Bash

chmod +x /usr/sap/install/download_basket/<SAPCAR>.EXE

6. Make sure the installation folder for SWPM exists.

Bash

mkdir -p /usr/sap/install/SWPM

7. Extract the SWPM installation file using the SAPCAR utility.

Bash

/usr/sap/install/download_basket/SAPCAR_1311-80000935.EXE -xf
/usr/sap/install/SWPM20SP07_0-80003424.SAR -R /usr/sap/install/SWPM/

You can do unattended SAP installations with parameter files. These files pass all
required parameters to the SWPM installer.

Note

To generate the parameter file, you need to partially perform a manual installation.
For more information about why, see SAP NOTE 2230669 .

Generate ASCS parameter file


To generate your unattended installation parameter file for ASCS:

1. Sign in to your VM as the root user through your command-line interface (CLI).
2. Run the command hostname to get the host name of the VM from which you're
running the installation. Note both the unique host name (shown as
<example-vm-hostname> in the example output) and the full URL for the GUI.

3. Check that you have all necessary media and tools installed on your VM.

4. Launch SWPM as follows.

a. Replace <target-VM-hostname> with the hostname you previously obtained.

b. Replace <XML-stack-file-path> with the XML stack file path that you created.
For example, /usr/sap/install/config/MP_STACK_S4_2020_v001.xml .

Bash

/usr/sap/install/SWPM/sapinst \
SAPINST_XML_FILE=<XML-stack-file-path> \
SAPINST_USE_HOSTNAME=<target-VM-hostname> \
SAPINST_START_GUISERVER=true \
SAPINST_STOP_AFTER_DIALOG_PHASE=true

Output

Connecting to the ASCS VM to launch


***********************************************************************
*********
Open your browser and paste the following URL address to access the GUI
https://<example-VM-
hostname>.internal.cloudapp.net:4237/sapinst/docs/index.html
Logon users: [root]
***********************************************************************
*********

5. Open your browser and visit the URL for the GUI that you previously obtained.

a. Accept the security risk warning.

b. Authenticate with your system's root user credentials.

6. In the drop-down menu, select SAP S/4HANA Server 2020 > SAP HANA Database
> Installation > Application Server ABAP > Distributed System > ASCS Instance.

7. For Parameter Mode, select Custom. Then, select Next.

8. Configure the SAP system settings:


a. Make sure the SAP system identifier is {SID} .

b. Make sure the SAP mount directory value is /sapmnt .

c. Select Next.

9. Configure the fully qualified domain name (FQDN) settings:

a. Make sure the FQDN value populates automatically.

b. Make sure to enable Set FQDN for SAP system.

c. Select Next.

10. Set up a main password, which you only use during the creation of this ASCS
instance. You can only use alphanumeric characters and the special characters # ,
$ , @ , and _ for your password. You also can't use a digit or underscore as the
first character.

a. Enter a main password.

b. Confirm the main password.

c. Select Next.

11. Configure more administrator settings. Other password fields are pre-populated
based on the main password you set.

a. Set the identifier of the administrator OS user ( <sid>adm , where <sid> is your
SID) to 2000 .

b. Set the identifier of the SAP system ( sapsys ) to 2000 .

c. Select Next.

12. When prompted for the SAPEXE kernel file path, enter
/usr/sap/install/download_basket , then select Next.

13. Make sure the package status is Available, then select Next.

14. Make sure the SAP Host Agent installation file status is Available, then select Next.

15. Provide information for the SAP administrator OS user.

a. Leave the password as inherited from the main password.

b. Set the OS user identifier to 2100 .


c. Select Next.

16. Check the installation settings.

a. Make sure the instance number for the installation is correct.

b. Make sure to set the virtual host name for the instance.

c. Select Next.

17. Keep the ABAP message server port settings. These default settings are 3600 and
3900 . Then, select Next.

18. Don't select any other components to install, then select Next.

19. Enable Skip setting of security parameters, then select Next.

20. Enable Yes, clean up operating system users, then select Next.

21. On Parameter Summary, don't do anything yet.

22. In the CLI, find your installation configuration file in the temporary SAP installation
directory. At this point, the file is called inifile.params .

a. Run ls /tmp/sapinst_instdir/ to list the files in the SAP installation directory.

b. If the file .lastInstallationLocation exists, view the file contents and note the
directory listed.

c. If a directory for the product that you're installing exists, such as S4HANA2020 , go
to the product folder. For example, run cd
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/HA/ABAP/ASCS/ .

23. In your browser, in the SWPM GUI, select Cancel. Now, you have the ini files
required to build the template that can do an unattended installation of ASCS.

24. Copy and rename inifile.params to scs.inifile.params in /tmp/app_template .
Replace <path-to-INI-file> with the path to your INI file as follows:

Bash

cp <path-to-INI-file>/inifile.params
/tmp/app_template/scs.inifile.params
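
If you prefer to script steps 22 and 24, the following is a minimal sketch. It assumes
that exactly one inifile.params was generated under /tmp/sapinst_instdir/ .

Bash

# Locate the generated inifile.params and copy it to the template folder
# as scs.inifile.params.
ini_file=$(find /tmp/sapinst_instdir/ -name inifile.params | head -n 1)
cp "${ini_file}" /tmp/app_template/scs.inifile.params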

Load database content


Make sure the following settings are in place on the VM before you begin:

Install and configure your HANA and SCS instances. These instances must be
online before you complete the database content load.

The <sid>adm user you created when you generated the unattended installation
file for ASCS must be a member of the sapinst group.

The user identifier for <sid>adm must match the value used by hdblcm . This example
uses 2000 .

The SWPM needs access to /sapmnt/<SID>/global/ . To configure permissions, run
chown <sid>adm:sapsys /sapmnt/<SID>/global . The sketch after this list shows one
way to verify these settings.
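
A minimal sketch, assuming the SID S4H ; replace the user, group, and path with your
own values.

Bash

# Verify the <sid>adm prerequisites before the database content load.
id s4hadm                        # the UID should match the hdblcm value (2000 in this example)
id -nG s4hadm | grep -w sapinst  # the user must be a member of the sapinst group
chown s4hadm:sapsys /sapmnt/S4H/global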

Generate database load template


To generate an unattended installation parameter file for the database content load:

1. Make and change to a temporary directory. Replace <sid> with your SID.

Bash

sudo install -d -m 0777 -o <sid>adm -g sapinst "/tmp/db_workdir"; cd $_

2. Launch the SWPM and note the listed URL.

Bash

/usr/sap/install/SWPM/sapinst \
SAPINST_XML_FILE=/usr/sap/install/config/MP_STACK_S4_2020_v001.xml

3. In your browser, visit the URL you noted.

4. Accept the security risk warning.

5. Authenticate with your system's root user credentials.

6. Create a distributed system with custom parameters.

a. In the drop-down menu, go to SAP S4/HANA Server 2020 > SAP HANA
Database > Installation > Application Server ABAP > Distributed System >
Database Instance > Distributed System.

b. Select the Custom parameter mode.


c. Select Next.

7. Note the path of the profile directory that the ASCS installation creates. For
example, /usr/sap/<SID>/SYS/profile , where <SID> is your SID. Then, select Next.

8. Enter the ABAP message server port for your ASCS instance. The port number is
36<InstanceNumber> , where <InstanceNumber> is the ASCS instance number. For
example, if the instance number is 00 , the port is 3600 . Then, select Next.

9. Enter your main password to use during the installation of database content. Then,
select Next.

10. Make sure the details for the administrator user ( <SID>adm , where <SID> is your
SID) are correct. Then, select Next.

11. Enter your information for the SAP HANA Database Tenant.

a. For Database Host, enter the host name of the HANA database VM. To find this
host name, go to the resource page in the Azure portal.

b. For Instance Number, enter the HANA instance number. For example, 00 .

c. Enter an identifier for the new database tenant. For example, S4H .

d. Keep the automatically generated password for the database system
administrator.

e. Select Next.

12. Make sure your connection details are correct. Then, select OK.

13. Enter your administrator password for the system database. Then, select Next.

14. Enter the path to your SAPEXE kernel, /usr/sap/install/download_basket . Then,
select Next.

15. Review which files are available.

a. Select Next.

b. Make sure the SAPHOSTAGENT file is available.

c. Select Next again.

16. On the password confirmation page, select Next.

17. Review that all core HANA database export files are available. Then, select Next.
18. On Database Schema for SAPHANADB , select Next.

19. On Secure Storage for Database Connection, select Next.

20. On SAP HANA Import Parameters, select Next.

21. Enter the password for the HANA database administrator ( <SID>adm ) for the
database VM. Then, select Next.

22. On SAP HANA Client Software Installation Path, select Next.

23. Make sure the SAP HANA client file is available. Then, select Next.

24. Make sure to enable Yes, clean up operating system users. Then, select Next.

25. On Parameter Summary, don't select anything yet.

26. Open your CLI and find your installation configuration file.

a. List the files in your temporary directory, /tmp/sapinst_instdir/ .

b. Make sure the installation configuration file inifile.params is there.

c. If the file lastInstallationLocation is there, open the file. Note the directory
listed in the file contents.

d. If there's already a directory for the product that you're installing, such as
S4HANA2020 , go to the matching folder. For example,

/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/HA/ABAP/DB/ .

27. Open SWPM again.

28. Select Cancel. You can now use the unattended method for database content
loading.

29. Copy and rename your installation configuration file as follows. Replace
<path_to_config_file> with the path to your configuration file.

Bash

cp <path_to_config_file>/inifile.params
/tmp/app_template/db.inifile.params

30. Check the version of the sapinst tool in SWPM.

Bash
/usr/sap/install/SWPM/sapinst -version

31. If the version of sapinst is greater than 749.0.6 , also copy the files keydb.xml and
instkey.pkey to follow SAP Note 2393060 . Replace <path_to_config_file> with
the path to your configuration file.

Bash

cp <path_to_config_file>/{keydb.xml,instkey.pkey} /tmp/app_template/
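
If you want to script steps 30 and 31, the following is a hedged sketch. It assumes
GNU coreutils (for sort -V ) and that sapinst -version prints a dotted version string;
replace <path_to_config_file> with the path to your configuration file.

Bash

# Copy the sapinst security files only when the reported version is newer
# than 749.0.6.
version=$(/usr/sap/install/SWPM/sapinst -version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n 1)
if [ "$(printf '%s\n' "$version" "749.0.6" | sort -V | tail -n 1)" = "$version" ] \
   && [ "$version" != "749.0.6" ]; then
    cp <path_to_config_file>/{keydb.xml,instkey.pkey} /tmp/app_template/
fi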

Generate PAS parameter file


Generate an unattended installation parameter file for use with PAS. These files all begin
with inifile .

Important

You might not see some of these settings in 2020 versions of SAP products. In that
case, skip the step.

1. Connect to your VM through your CLI.

2. Check that you have all necessary media and tools installed on your VM.

3. Create and change to a temporary directory. Replace <SID> with your SID.

Bash

sudo install -d -m 0777 -o <SID>adm -g sapinst "/tmp/pas_workdir"; cd $_

4. Connect to the node as the root user.

5. Sign in to the SWPM.

a. Go to the URL for the SWPM GUI. You got this URL when you generated the
unattended installation file for ASCS.

b. Accept the security warning.

c. Authenticate with your system's root user credentials.


6. In the drop-down menu, go to SAP S/4HANA Server 2020 > SAP HANA Database
> Installation > Application Server ABAP > Distributed System > Primary
Application Server Instance.

7. On Parameter Settings, select Custom. Then, select Next.

8. Make sure the Profile Directory is set to /sapmnt/<SID>/profile/ or
/usr/sap/<SID>/SYS/profile , where <SID> is your SID. Then, select Next.

9. Set the Message Server Port to 36<instance-number> , where <instance-number> is
the ASCS instance number. For example, 3600 . Then, select Next.

10. Set the main password for all users. Then, select Next.

11. Wait for the package list below to populate. Then, select Next.

12. Make sure to disable the setting Upgrade SAP Host Agent to the version of the
provided SAPHOSTAGENT.SAR archive. Then, select Next.

13. Enter the instance number for the SAP HANA database, and the database system
administrator password. Then, select Next.

14. On Configuration of SAP liveCache with SAP HANA, select Next.

15. On Database Schema for DBACOCKPIT , select Next.

16. On Database Schema for SAPHANADB , select Next.

17. On Secure Storage for Database Connection, select Next.

18. Make sure the PAS instance number and instance host are correct. Then, select
Next.

19. On ABAP Message Server Ports, select Next.

20. On Configuration of Work Processes, select Next.

21. On ICM User Management for the SAP Web Dispatcher, select Next.

22. On SLD Destination for the SAP System OS Level, configure these settings:

a. Enable No SLD destination. Then, select Next.

b. Enable Do not create Message Server Access Control List. Then, select Next.

c. Enable Run TMS.

d. Set the user password for TMSADM in Client 000 to the main password. Then,
select Next.

e. Enable No for Import ABAP Transports. Then, select Next.

23. On Additional SAP System Languages, select Next.

24. On SAP System DDIC Users, select Next.

25. On Secure Storage Key Generation, make sure to select Individual Key. Then,
select Next.

26. On the warning screen:

a. Copy the key identifier and key value.

b. Store the key identifier and key value securely.

c. Select Next.

27. For Clean up operating system users, select Yes. Then, select Next.

28. In your CLI, open your temporary directory for the installation.

29. Make sure there's a copy of the parameters file inifile.params . For example,
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/DISTRIBUTED/ABAP/APP1/inifile.params .

30. In SWPM, select Cancel. You can now install PAS through the unattended method.

31. Copy and rename your PAS parameter file to pas.inifile.params in
/tmp/app_template as follows. Replace <path_to_config_file> with the path to
your parameter file.

Bash

cp <path_to_config_file>/inifile.params
/tmp/app_template/pas.inifile.params

32. Create a copy of pas.inifile.params and download it to your computer or VM.

Generate additional application servers parameter file
Generate an unattended installation parameter file for use with AAS. These files all begin
with inifile .

Important

You might not see some of these settings in 2020 versions of SAP products. In that
case, skip the step.

1. Connect to your AAS VM through the CLI.

2. Check that you have all necessary media and tools installed on your VM.

3. Make sure the group sapinst exists.

Bash

groupadd -g 2000 sapinst
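
If the group might already exist, for example on a VM that you reused from earlier
steps, a hedged variant creates it only when it's missing:

Bash

# Create the sapinst group only if it doesn't already exist.
getent group sapinst >/dev/null || sudo groupadd -g 2000 sapinst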

4. Create a temporary directory for your installation as follows. Replace <sid> with
your SID.

Bash

sudo install -d -m 0777 -o <sid>adm -g sapinst "/tmp/aas_workdir"; cd $_

5. Sign in to the SWPM.

a. Go to the URL for the SWPM GUI. You got this URL when you generated the
unattended installation file for ASCS.

b. Accept the security warning.

c. Authenticate with your system's root user credentials.

6. In the drop-down menu, go to SAP S/4HANA Server 2020 > SAP HANA Database >
Installation > Application Server ABAP > High-Availability System > Additional
Application Server Instance.

7. On Parameter Settings, select Custom. Then, select Next.

8. Make sure the Profile Directory is set to /sapmnt/<SID>/profile/ or
/usr/sap/<SID>/SYS/profile , where <SID> is your SID. Then, select Next.
9. Set Message Server Port to 36<instance-number> where <instance-number> is the
ASCS instance number. Then, select Next.

10. Set the main password for all users. Then, select Next.

11. On Software Package Browser, set Search Directory to
/usr/sap/install/download_basket . Then, select Next.

12. Wait for the package list below to populate. Then, select Next.

13. Make sure to enable Upgrade SAP Host Agent to the version of the provided
SAPHOSTAGENT.SAR archive. Then, select Next.

14. Enter the instance number of your SAP HANA database and the database system
administrator password. Then, select Next.

15. On Configuration of SAP liveCache with SAP HANA, select Next.

16. On Database Schema for DBACOCKPIT , select Next.

17. On Database Schema for SAPHANADB , select Next.

18. On Secure Storage for Database Connection, select Next.

19. Make sure the AAS instance number and instance host are correct. Then, select
Next.

20. On ABAP Message Server Ports, select Next.

21. On Configuration of Work Processes, select Next.

22. On ICM User Management for the SAP Web Dispatcher, select Next.

23. On SLD Destination for the SAP System OS Level, make sure to enable No SLD
destination. Then, select Next.

24. Enable Do not create Message Server Access Control List. Then, select Next.

25. Enable Run TMS.

26. Set the password for the user TMSADM in Client 000 to the main password. Then,
select Next.

27. Set SPAM/SAINT Update Archive to /usr/sap/install/config/KD75371.SAR .

28. Set Import ABAP Transports to No. Then, select Next.


29. On Preparing for the Software Update Manager Screen, enable Extract the
SUM.SAR Archive. Then, select Next.

30. On Software Package Browser, select the table Detected Packages. If the
individual package location for SUM 2.0 is empty, set the package path to
/usr/sap/install/config . Then, select Next.

31. Wait for the package location to populate. Then, select Next.

32. On Additional SAP System Languages, select Next.

33. Make sure to enable Yes, clean up operating system users. Then, select Next.

34. Through the CLI, check that your temporary directory now has a copy of the
parameter file. For example,
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/AS/APPS/inifile.params .

35. Copy and rename the file to aas.inifile.params in /tmp/app_template as follows.
Replace <path_to_inifile> with the path to your parameter file.

Bash

cp <path_to_inifile>/inifile.params
/tmp/app_template/aas.inifile.params

36. Create a copy of aas.inifile.params and download it to your computer or VM.

37. In SWPM, select Cancel. You can now do the AAS installation through the
unattended method.

Combine parameter files


You can combine your parameter files, which all end with inifile.params , into one file
for the installation process.

Create combination file


To create a file that combines all your parameters:

1. If you haven't already, download each parameter file you created (ASCS, PAS, and
AAS). You need these files on the computer or VM from which you're working.

2. Make a backup of each parameter file.


3. Create a new combination file. Name this file for the SAP product that you're using.
For example, S4HANA_2020_ISS_v001.inifile.params .

4. Open the ASCS parameter file ( scs.inifile.params ) in an editor.

5. Copy the header of the ASCS parameter file into the combination file. For example:

yml

#######################################################################
##################################################
#
#
# Installation service 'SAP S/4HANA Server 2020 > SAP HANA Database >
Installation #
# > Application Server ABAP > Distributed System > ASCS Instance',
product id 'NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP' #
#
#
#######################################################################
##################################################

6. For each inifile.params file you have, copy the product identifier line from the
header. Then, copy the product identifiers into the header of your combination file.
For example:

yml

#######################################################################
######################################################################
#
#
# Installation service 'SAP S/4HANA Server 2020 > SAP HANA Database >
Installation #
# > Application Server ABAP > Distributed System > ASCS Instance',
product id 'NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP'
#
# > Application Server ABAP > Distributed System > Database
Instance', product id 'NW_ABAP_DB:S4HANA2020.CORE.HDB.ABAP'
#
# > Application Server ABAP > Distributed System > Primary
Application Server Instance', product id
'NW_ABAP_CI:S4HANA2020.CORE.HDB.ABAP' #
# > Additional SAP System Instances > Additional Application Server
Instance', product id 'NW_DI:S4HANA2020.CORE.HDB.PD' #
#
#
#######################################################################
######################################################################
7. Open your bom.yml file in an editor.

8. Copy the sections for product_ids into your combination file.

9. For each inifile.params file you have, copy the product identifier from the header
into the appropriate part of product_ids . For example, copy your ASCS to scs :

yml

product_ids:
  scs: "NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP"
  db: ""
  pas: ""
  aas: ""
  web: ""

10. Remove any lines that you commented out or left blank.

11. Save your combination file.

Improve readability
Next, improve the readability of your combination file:

1. Open your combination file in an editor.

2. Sort all lines not in the header.

3. Remove any duplicated lines.

4. Align all the equals signs. For example:

yml

archives.downloadBasket                 = /usr/sap/install/download_basket
HDB_Schema_Check_Dialogs.schemaName     = SAPHANADB
HDB_Schema_Check_Dialogs.schemaPassword = MyDefaultPassw0rd
HDB_Userstore.doNotResolveHostnames     = x00dx0000l09d4

5. Separate the lines by prefixes. For example, NW_CI_Instance.* and NW_HDB_DB.* .

6. Update the following lines to use Ansible variables:

a. archives.downloadBasket = {{ download_basket_dir }}

b. HDB_Schema_Check_Dialogs.schemaPassword = {{ main_password }}
c. HDB_Userstore.doNotResolveHostnames = {{ hdb_hostname }}

d. hostAgent.sapAdmPassword = {{ main_password }}

e. NW_AS.instanceNumber = {{ aas_instance_number }}

f. NW_checkMsgServer.abapMSPort = 36{{ scs_instance_number }}

g. NW_CI_Instance.ascsVirtualHostname = {{ scs_hostname }}

h. NW_CI_Instance.ciInstanceNumber = {{ pas_instance_number }}

i. NW_CI_Instance.ciMSPort = 36{{ scs_instance_number }}

j. NW_CI_Instance.ciVirtualHostname = {{ pas_hostname }}

k. NW_CI_Instance.scsVirtualHostname = {{ scs_hostname }}

l. NW_DI_Instance.virtualHostname = {{ aas_hostname }}

m. NW_getFQDN.FQDN = {{ sap_fqdn }}

n. NW_GetMasterPassword.masterPwd = {{ main_password }}

o. NW_GetSidNoProfiles.sid = {{ app_sid | upper }}

p. NW_HDB_DB.abapSchemaPassword = {{ main_password }}

q. NW_HDB_getDBInfo.dbhost = {{ hdb_hostname }}

r. NW_HDB_getDBInfo.dbsid = {{ hdb_sid | upper }}

s. NW_HDB_getDBInfo.instanceNumber = {{ hdb_instance_number }}

t. NW_HDB_getDBInfo.systemDbPassword = {{ main_password }}

u. NW_HDB_getDBInfo.systemid = {{ hdb_sid | upper }}

v. NW_HDB_getDBInfo.systemPassword = {{ main_password }}

w. NW_readProfileDir.profileDir = /usr/sap/{{ app_sid | upper }}/SYS/profile

x. NW_Recovery_Install_HDB.extractLocation = /usr/sap/{{ hdb_sid | upper }}/HDB{{ hdb_instance_number }}/backup/data/DB_{{ hdb_sid | upper }}

y. NW_Recovery_Install_HDB.sidAdmName = {{ hdb_sid | lower }}adm

z. NW_Recovery_Install_HDB.sidAdmPassword = {{ main_password }}

aa. NW_SAPCrypto.SAPCryptoFile = {{ download_basket_dir }}/SAPEXE_300-80004393.SAR

ab. NW_SCS_Instance.instanceNumber = {{ scs_instance_number }}

ac. NW_Unpack.igsExeSar = {{ download_basket_dir }}/igsexe_12-80003187.sar

ad. NW_Unpack.igsHelperSar = {{ download_basket_dir }}/igshelper_17-10010245.sar

ae. NW_Unpack.sapExeDbSar = {{ download_basket_dir }}/SAPEXEDB_300-80004392.SAR

af. NW_Unpack.sapExeSar = {{ download_basket_dir }}/SAPEXE_300-80004393.SAR

ag. NW_SCS_Instance.scsVirtualHostname = {{ scs_hostname }}

ah. nwUsers.sapadmUID = {{ sapadm_uid }}

ai. nwUsers.sapsysGID = {{ sapsys_gid }}

aj. nwUsers.sidadmPassword = {{ main_password }}

ak. nwUsers.sidAdmUID = {{ sidadm_uid }}

al. storageBasedCopy.hdb.instanceNumber = {{ hdb_instance_number }}

am. storageBasedCopy.hdb.systemPassword = {{ main_password }}
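
If you want to script steps 2 and 3, the following is a minimal sketch. It assumes
that the header lines all start with # and that your combination file is named
S4HANA_2020_ISS_v001.inifile.params .

Bash

# Keep the comment header in place, then sort the remaining parameter lines
# and drop any duplicates.
params_file=S4HANA_2020_ISS_v001.inifile.params
grep '^#' "${params_file}" > "${params_file}.sorted"
grep -v '^#' "${params_file}" | sort -u >> "${params_file}.sorted"
mv "${params_file}.sorted" "${params_file}"

After the substitutions in step 6, a few lines of the combined file might look like the
following illustrative excerpt (not a complete file):

yml

archives.downloadBasket                 = {{ download_basket_dir }}
HDB_Schema_Check_Dialogs.schemaName     = SAPHANADB
HDB_Schema_Check_Dialogs.schemaPassword = {{ main_password }}
HDB_Userstore.doNotResolveHostnames     = {{ hdb_hostname }}
NW_getFQDN.FQDN                         = {{ sap_fqdn }}
NW_GetSidNoProfiles.sid                 = {{ app_sid | upper }}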

Upload combination file


Finally, upload your combined template file to your SAP Library.

1. Sign in to the Azure portal .

2. Select or search for Storage accounts.

3. Select the storage account for your SAP Library.

4. In the storage account menu, under Data storage, select Containers.

5. Select the sapbits container.


6. Go to the product folder for your BOM in sapbits . For example,
boms/S4HANA_2020_ISS_v001 .

7. If you don't already have a directory called templates, create this directory.

8. Open the templates directory.

9. Select Upload.

10. In the pane, select Select a file.

11. Select the combined template file. For example,
S4HANA_2020_ISS_v001.inifile.params .

12. Select Upload.

Update BOM with templates


After combining your parameter files, update your BOM with the new template files.

1. Open bom.yml .

2. In the section templates , add your new template file names. For example:

yml

templates:
  - name: "S4HANA_2020_ISS_v001 ini file"
    file: S4HANA_2020_ISS_v001.inifile.params
    override_target_location: "{{ target_media_location }}/config"

3. If you're using the scripted application BOM preparation, remove the # characters
that comment out the template lines.

4. Save your changes.

Then, upload the new BOM file to your SAP Library.

1. Sign in to the Azure portal .

2. Select or search for Storage accounts.

3. Select the storage account for your SAP Library.

4. In the storage account menu, under Data storage, select Containers.


5. Select the sapbits container.

6. Go to the product folder for your BOM in sapbits . For example,
boms/S4HANA_2020_ISS_v001 .

7. Open the boms directory.

8. Select Upload.

9. In the pane, select Select a file.

10. Select your BOM file, bom.yml , from your computer or VM.

11. Make sure to enable Overwrite if files already exist.

12. Select Upload.


Use SAP Deployment Automation
Framework shell scripts
Article • 09/19/2023

You can deploy all SAP Deployment Automation Framework components by using shell
scripts.

Control plane operations


You can deploy or update the control plane by using the deploy_controlplane shell
script.

Remove the control plane by using the remove_controlplane shell script.

You can bootstrap the deployer in the control plane by using the install_deployer shell
script.

You can bootstrap the SAP library in the control plane by using the install_library shell
script.

Workload zone operations


Deploy or update the workload zone by using the install_workloadzone shell script.

Remove the workload zone by using the remover shell script.

SAP system operations


Deploy or update the SAP system by using the installer shell script.

Remove the SAP system by using the remover shell script.

Other operations
Set the deployment credentials by using the Set SPN secrets shell script.

Update the Terraform state file by using the Update Terraform state shell script.

Next step
Deploy the control plane by using Bash
deploy_controlplane.sh
Article • 09/20/2023

Synopsis
The deploy_controlplane.sh script deploys the control plane, including the deployer
VMs, Azure Key Vault, and the SAP library.

The deployer VM has installations of Ansible and Terraform. This VM is used to deploy
the SAP systems.

Syntax
Bash

deploy_controlplane.sh [ --deployer_parameter_file ] <String> [ --library_parameter_file ] <String>
                       [[ --subscription ] <String>] [[ --spn_id ] <String>] [[ --spn_secret ] <String>] [[ --tenant_id ] <String>]
                       [[ --storageaccountname ] <String>] [ --force ] [ --auto-approve ]

Description
Deploys the control plane, which includes the deployer VM and the SAP library. For
more information, see Configuring the control plane and Deploying the control plane.

Examples

Example 1
This example deploys the control plane, as defined by the parameter files. The process
prompts you for the SPN details.

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-
automation"
export
CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}"
--tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh
\
--deployer_parameter_file
"${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE.tfvars" \
--library_parameter_file
"${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-
SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

Example 2
This example deploys the control plane, as defined by the parameter files. The process
adds the deployment credentials to the deployment's key vault.

Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"

export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-
automation"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}"
--tenant "${ARM_TENANT_ID}"

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh
\
--deployer_parameter_file
"${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE.tfvars" \
--library_parameter_file
"${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-
SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
\
--subscription "${ARM_SUBSCRIPTION_ID}"
\
--spn_id "${ARM_CLIENT_ID}"
\
--spn_secret "${ARM_CLIENT_SECRET}"
\
--tenant_id "${ARM_TENANT_ID}"

Parameters

--deployer_parameter_file

Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.

YAML

Type: String
Aliases: `-d`

Required: True

--library_parameter_file

Sets the parameter file for the SAP library. For more information, see Configuring the
control plane.

YAML

Type: String
Aliases: `-l`

Required: True

--subscription
Sets the target Azure subscription.

YAML

Type: String
Aliases: `-s`

Required: False

--spn_id

Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-c`

Required: False

--spn_secret

Sets the Service Principal password. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-p`

Required: False

--tenant_id

Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.

YAML

Type: String
Aliases: `-t`
Required: False

--storageaccountname

Sets the name of the storage account that contains the Terraform state files.

YAML

Type: String
Aliases: `-a`

Required: False

--force

Cleans up your local configuration.

YAML

Type: SwitchParameter
Aliases: `-f`

Required: False

--auto-approve

Enables silent deployment.

YAML

Type: SwitchParameter
Aliases: `-i`

Required: False

--recover

Recreates the local configuration files.

YAML
Type: SwitchParameter
Aliases: `-h`

Required: False

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related Links
+GitHub repository: SAP on Azure Deployment Automation Framework
install_workloadzone.sh
Article • 09/19/2023

Synopsis
You can use the install_workloadzone.sh script to deploy a new SAP workload zone.

Syntax
Bash

install_workloadzone.sh [ -p | --parameterfile ] <String>
                        [[ --deployer_tfstate_key ] <String>] [[ --deployer_environment ] <String>]
                        [[ --state_subscription ] <String>] [[ --storageaccountname ] <String>] [[ --keyvault ] <String>]
                        [[ --subscription ] <String>] [[ --spn_id ] <String>] [[ --spn_secret ] <String>] [[ --tenant_id ] <String>]
                        [ --force ] [ -i | --auto-approve ]

Description
The install_workloadzone.sh script deploys a new SAP workload zone. The workload
zone contains the shared resources for all SAP VMs.

Examples

Example 1
This example deploys the workload zone, as defined by the parameter files. The process
prompts you for the SPN details.

Bash

install_workloadzone.sh --parameterfile PROD-WEEU-SAP00-infrastructure.tfvars

Example 2
This example deploys the workload zone, as defined by the parameter files. The process
adds the deployment credentials to the deployment's key vault.

Bash

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-
INFRASTRUCTURE

export subscriptionId=<subscriptionID>
export appId=<appID>
export spnSecret="<password>"
export tenantId=<tenantID>
export keyvault=<keyvaultName>
export storageAccount=<storageaccountName>
export statefileSubscription=<statefile_subscription>

export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation

${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
--parameterfile DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars \
--keyvault $keyvault \
--state_subscription $statefileSubscription \
--storageaccountname $storageAccount \
--subscription $subscriptionId \
--spn_id $appId \
--spn_secret $spnSecret \
--tenant_id $tenantId

Parameters

--parameterfile

Sets the parameter file for the workload zone. For more information, see Configuring
the workload zone.

YAML

Type: String
Aliases: `-p`

Required: True

--deployer_tfstate_key

Sets the deployer VM's Terraform state file name.


YAML

Type: String
Aliases: `-d`

Required: False

--deployer_environment

Sets the deployer environment name.

YAML

Type: String
Aliases: `-e`

Required: False

--state_subscription

Sets the subscription ID for the Terraform storage account.

YAML

Type: String
Aliases: `-k`

Required: False

--storageaccountname

Sets the name of the storage account that contains the Terraform state files.

YAML

Type: String
Aliases: `-a`

Required: False

--keyvault

Sets the deployment credentials' key vault.


YAML

Type: String
Aliases: `-v`

Required: False

--subscription

Sets the target Azure subscription.

YAML

Type: String
Aliases: `-s`

Required: False

--spn_id

Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-c`

Required: False

--spn_secret

Sets the service principal password. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-p`

Required: False

--tenant_id
Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.

YAML

Type: String
Aliases: `-t`

Required: False

--force

Cleans up your local configuration.

YAML

Type: SwitchParameter
Aliases: `-f`

Required: False

--auto-approve

Enables silent deployment.

YAML

Type: SwitchParameter
Aliases: `-i`

Required: False

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False
Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
Installer.sh
Article • 02/10/2023

Synopsis
You can use the installer.sh command to deploy a new SAP system. The script
supports all the different deployment types.

Syntax
Bash

installer.sh [--parameterfile] <String> [--type] <String>
             [[--deployer_tfstate_key] <String>]
             [[--landscape_tfstate_key] <String>]
             [[--storageaccountname] <String>]
             [[--state_subscription] <String>]
             [[--keyvault] <String>]
             [--force] [--auto-approve]

Description
The installer.sh script deploys or updates a new SAP system of the specified type.

Examples

Example 1
Deploys or updates an SAP System.

Bash

installer.sh --parameterfile DEV-WEEU-SAP00-X00.tfvars --type sap_system

Example 2
Deploys or updates an SAP System.
Bash

installer.sh --parameterfile DEV-WEEU-SAP00-X00.tfvars --type sap_system \


--deployer_tfstate_key MGMT-WEEU-DEP00-INFRASTRUCTURE.terraform.tfstate \
--landscape_tfstate_key DEV-WEEU-SAP01-INFRASTRUCTURE.terraform.tfstate

Example 3
Deploys or updates an SAP Library.

Bash

installer.sh --parameterfile MGMT-WEEU-SAP_LIBRARY.tfvars --type sap_library

Parameters

--parameterfile

Sets the parameter file for the system. For more information, see Configuring the SAP
system.

YAML

Type: String
Aliases: `-p`

Required: True

--type

Sets the type of deployment. Valid values include: sap_deployer , sap_library ,


sap_landscape , and sap_system .

YAML

Type: String
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system
Aliases: `-t`

Required: True
--deployer_tfstate_key
Sets the name of the state file for the deployer deployment.

YAML

Type: String
Aliases: `-d`

Required: False

--landscape_tfstate_key
Sets the name of the state file for the workload zone deployment.

YAML

Type: String
Aliases: `-l`

Required: False

--state_subscription

Sets the subscription ID for the Terraform storage account.

YAML

Type: String
Aliases: `-k`

Required: False

--storageaccountname

Sets the name of the storage account that contains the Terraform state files.

YAML

Type: String
Aliases: `-a`

Required: False
--keyvault

Sets the deployment credentials' key vault.

YAML

Type: String
Aliases: `-v`

Required: False

--force

Cleans up your local configuration.

YAML

Type: SwitchParameter
Aliases: `-f`

Required: False

--auto-approve

Enables silent deployment.

YAML

Type: SwitchParameter
Aliases: `-i`

Required: False

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False
Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
install_deployer.sh
Article • 02/10/2023

Synopsis
You can use the script install_deployer.sh to set up a new deployer VM in the control
plane.

Syntax
Bash

install_deployer.sh [ --parameterfile ] <String> [ -i | --auto-approve ]

Description
The script install_deployer.sh sets up a new deployer in the control plane.

The deployer VM has installations of Ansible and Terraform. You use the deployer VM
to deploy the SAP artifacts.

Examples

Example 1
Bash

install_deployer.sh --parameterfile MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars

Parameters

--parameterfile

Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.
YAML

Type: String
Aliases: `-p`

Required: True

--auto-approve

Enables silent deployment.

YAML

Type: SwitchParameter
Aliases: `-i`

Required: False

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
Install_library.sh
Article • 02/10/2023

Synopsis
The install_library.sh script sets up a new SAP Library.

Syntax
Bash

install_library.sh [ --parameterfile ] <String> [ --deployer_statefile_foldername ] <String> [ -i | --auto-approve ]

Description
The install_library.sh command sets up a new SAP Library in the control plane. The
SAP Library provides the storage for Terraform state files and SAP installation media.

Examples

Example 1
Bash

install_library.sh --parameterfile MGMT-WEEU-SAP_LIBRARY.tfvars \
    --deployer_statefile_foldername ../../DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE

Parameters

--parameterfile

Sets the parameter file for the SAP library. For more information, see Configuring the
control plane.
YAML

Type: String
Aliases: `-p`

Required: True

--deployer_statefile_foldername

Sets the relative path to the folder that contains the deployer VM's Terraform state
file ( terraform.tfstate ).

YAML

Type: String
Aliases: `-d`

Required: True

--auto-approve

Enables silent deployment.

YAML

Type: SwitchParameter
Aliases: `-i`

Required: False

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
remove_controlplane.sh
Article • 09/20/2023

Synopsis
Removes the control plane, including the deployer VM and the SAP library. It's
important to remove the Terraform-deployed artifacts by using Terraform to ensure
that the removals are done correctly.

Syntax
Bash

remove_controlplane.sh [ -d | --deployer_parameter_file ] <String> [ -l | --library_parameter_file ] <String>

Description
Removes the SAP control plane, including the deployer VM and the SAP library.

Examples

Example 1
Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-
automation"
export
CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}"
--tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh
\
--deployer_parameter_file
"${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE.tfvars" \
--library_parameter_file
"${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-
SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

Example 2
Bash

export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-
automation"
export
CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}"
--tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh
\
--deployer_parameter_file
"${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-
INFRASTRUCTURE.tfvars" \
--library_parameter_file
"${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-
SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
--subscription xxxxxxxxxxx \
--storage_account mgmtweeutfstate###

Parameters

--deployer_parameter_file
Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.

YAML

Type: String
Aliases: `-d`

Required: True

--library_parameter_file

Sets the parameter file for the SAP library. For more information, see Configuring the
control plane.

YAML

Type: String
Aliases: `-l`

Required: True

--subscription

Sets the subscription that contains the SAP library. For more information, see
Configuring the control plane.

YAML

Type: String
Aliases: `-s`
Required: True

--storage_account

Sets the storage account name of the tfstate storage account in SAP library. For more
information, see Configuring the control plane.

YAML

Type: String
Aliases: `-a`
Required: True
--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
Remover.sh
Article • 02/10/2023

Synopsis
You can use the remover.sh command to remove an SAP system. You can also use the
script to remove workload zones.

Syntax
Bash

remover.sh [--parameterfile] <String> [--type] <String> [--help]

Description
The remover.sh script removes a deployment of the specified type.

Examples

Example 1
Removes an SAP System deployment.

Bash

remover.sh --parameterfile DEV-WEEU-SAP00-X00.tfvars --type sap_system

Example 2
Removes a workload deployment.

Bash
remover.sh --parameterfile DEV-WEEU-SAP00-INFRASTRUCTURE.tfvars --type
sap_landscape

Parameters

--parameterfile

Sets the parameter file for the system.

YAML

Type: String
Aliases: `-p`

Required: True

--type

Sets the type of deployment. Valid values include: sap_deployer , sap_library ,


sap_landscape , and sap_system .

YAML

Type: String
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system
Aliases: `-t`

Required: True

--help

Shows help for the script.

YAML

Type: SwitchParameter
Aliases: `-h`

Required: False
Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
set_secrets.sh
Article • 02/10/2023

Synopsis
Sets the service principal secrets in Azure Key Vault.

Syntax
Bash

set_secrets.sh [--region] <String> [--environment] <String> [--vault] <String>
               [--subscription] <String> [--spn_id] <String>
               [--spn_secret] <String> [--tenant_id] <String>

Description
Sets the secrets in Key Vault that the deployment automation requires.

Examples

Example 1
Bash

set_secrets.sh --environment DEV \
    --region weeu \
    --vault MGMTWEEUDEP00userABC \
    --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
    --spn_id yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy \
    --spn_secret ************************ \
    --tenant_id zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz

Parameters

--region
Sets the name of the Azure region for deployment.

YAML

Type: String
Aliases: `-r`

Required: True

--environment

Sets the name of the deployment environment.

YAML

Type: String
Aliases: `-e`

Required: True

--vault

Sets the name of the key vault.

YAML

Type: String
Aliases: `-v`

Required: True

--spn_id

Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-c`

Required: False

--spn_secret

Sets the service principal password. For more information, see Prepare the deployment
credentials.

YAML

Type: String
Aliases: `-p`

Required: False

--tenant_id

Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.

YAML

Type: String
Aliases: `-t`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
advanced_state_management.sh
Article • 12/22/2023

Synopsis
Allows for Terraform state file management.

Syntax
Bash

advanced_state_management.sh [--parameterfile] <String>
                             [--type] <String>
                             [--operation] <String>
                             [--terraform_keyfile] <String>
                             [--subscription] <String>
                             [--storage_account_name] <String>
                             [--tf_resource_name] <String>
                             [--azure_resource_id] <String>
                             [--help]

Description
You can use this script to:

list the resources in the Terraform state file.
add missing or modified resources to the Terraform state file.
remove resources from the Terraform state file.

This script is useful if resources are modified or created without using Terraform.

Examples

Example 1
List the contents of the Terraform state file.

Bash

parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"

filepart=$(echo "${parameter_file_name}" | cut -d. -f1)
key_file=${filepart}.terraform.tfstate

# This is the name of the storage account containing the Terraform state files
storage_accountname="<storageaccountname>"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation list \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}"

Example 2
Import a virtual machine into the Terraform state file.

Bash

parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"

filepart=$(echo "${parameter_file_name}" | cut -d. -f1)
key_file=${filepart}.terraform.tfstate

# This is the name of the storage account containing the Terraform state files
storage_accountname="<storageaccountname>"

# Terraform resource name of the first database node VM
tf_resource_name="module.hdb_node.azurerm_linux_virtual_machine.vm_dbnode[0]"

# Azure resource ID of the virtual machine
azure_resource_id="/subscriptions/<subscriptionId>/resourceGroups/DEV-WEEU-SAP01-X00/providers/Microsoft.Compute/virtualMachines/xxxxx"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation import \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}" \
    --tf_resource_name "${tf_resource_name}" \
    --azure_resource_id "${azure_resource_id}"

Example 3
Removing a storage account from the state file

Bash

parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"

filepart=$(echo "${parameter_file_name}" | cut -d. -f1)
key_file=${filepart}.terraform.tfstate

# This is the name of the storage account containing the Terraform state files
storage_accountname="<storageaccountname>"

# Terraform resource name of the sapmnt storage account
tf_resource_name="module.common_infrastructure.azurerm_storage_account.sapmnt[0]"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation remove \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}" \
    --tf_resource_name "${tf_resource_name}"

Parameters

--parameterfile

Sets the parameter file for the system.

YAML

Type: String
Aliases: `-p`

Required: True

--type

Sets the type of system. Valid values include: sap_deployer , sap_library , sap_landscape ,
and sap_system .

YAML

Type: String
Aliases: `-t`
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system

Required: True

--operation

Sets the operation to perform. Valid values include: import, list, and remove.

YAML

Type: String
Aliases: `-o`
Accepted values: import, list, remove

Required: True

--terraform_keyfile

Sets the Terraform state file's name.

YAML

Type: String
Aliases: `-k`

Required: True

--subscription

Sets the target Azure subscription.

YAML

Type: String
Aliases: `-s`

Required: False

--storage_account_name

Sets the name of the storage account that contains the Terraform state files.

YAML

Type: String
Aliases: `-a`

Required: False

--tf_resource_name

Sets the resource name in the Terraform state file.

YAML

Type: String
Aliases: `-n`

Required: False

--azure_resource_id

Sets the resource ID of the Azure resource to import.

YAML

Type: String
Aliases: `-i`

Required: False

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
update_sas_token.sh
Article • 02/10/2023

Synopsis
Updates the SAP Library SAS token in Azure Key Vault.

Syntax
Bash

update_sas_token.sh

Description
Updates the SAP Library SAS token in Azure Key Vault. Prompts for the SAP library
storage account name and the deployer key vault name.

Examples

Example 1
Prompts for the SAP library storage account name and the deployer key vault name.

Bash

update_sas_token.sh

Example 2
Bash

export SAP_LIBRARY_TF=mgmtweeusaplibXXX
export SAP_KV_TF=MGMTWEEUDEP00userYYY

update_sas_token.sh

Parameters
None

Notes
v0.9 - Initial version

Copyright (c) Microsoft Corporation. Licensed under the MIT license.

Related links
GitHub repository: SAP on Azure Deployment Automation Framework
What is Azure Monitor for SAP
solutions?
Article • 05/23/2023

When you have critical SAP applications and business processes that rely on Azure
resources, you might want to monitor those resources for availability, performance, and
operation. Azure Monitor for SAP solutions is an Azure-native monitoring product for
SAP landscapes that run on Azure. It uses specific parts of the Azure Monitor
infrastructure.

You can use Azure Monitor for SAP solutions with both SAP on Azure virtual machines
(VMs) and SAP on Azure Large Instances.

What can you monitor?


You can use Azure Monitor for SAP solutions to collect data from Azure infrastructure
and databases in one central location. Then, you can visually correlate the data for faster
troubleshooting.

To monitor components of an SAP landscape, add the corresponding provider. These
components include Azure VMs, high-availability (HA) clusters, SAP HANA databases,
and SAP NetWeaver. For more information, see Quickstart: Deploy Azure Monitor for
SAP solutions in Azure portal.

Azure Monitor for SAP solutions uses the Azure Monitor capabilities of Log Analytics
and workbooks. With it, you can:

Create custom visualizations by editing the default workbooks that Azure Monitor
for SAP solutions provides.
Write custom queries.
Create custom alerts by using Log Analytics workspaces.
Take advantage of the flexible retention period in Azure Monitor Logs and Log
Analytics.
Connect monitoring data with your ticketing system.

What data is collected?


Azure Monitor for SAP solutions doesn't collect Azure Monitor metrics or resource log
data, like some other Azure resources do. Instead, it sends custom logs directly to the
Azure Monitor Logs system. There, you can use the built-in features of Log Analytics.
Data collection in Azure Monitor for SAP solutions depends on the providers that you
configure. The following data is collected for each provider.

HA Pacemaker cluster data


Node, resource, and SBD status
Pacemaker location constraints
Quorum votes and ring status

Also see the metrics specification for ha_cluster_exporter.

SAP HANA data


CPU, memory, disk, and network use
HANA system replication
HANA backup
HANA host status
Index server and name server roles
Database growth
Top tables
File system use

Microsoft SQL Server data


CPU, memory, and disk use
Host name, SQL instance name, and SAP system ID
Batch requests, compilations, and page life expectancy over time
Top 10 most expensive SQL statements over time
Top 12 largest tables in the SAP system
Problems recorded in the SQL Server error log
Blocking processes and SQL wait statistics over time

OS (Linux) data

CPU use, fork count, running processes, and blocked processes
Memory use and distribution among used, cached, and buffered
Swap use, paging, and swap rate
File system usage, along with number of bytes read and written per block device
Read/write latency per block device
Ongoing I/O count and persistent memory read/write bytes
Network packets in/out and network bytes in/out

SAP NetWeaver data


SAP system and application server availability, including instance process
availability of:
Dispatcher
ICM
Gateway
Message server
Enqueue server
IGS Watchdog
Work process usage statistics and trends
Enqueue lock statistics and trends
Queue usage statistics and trends
SMON metrics (/SDF/SMON)
SWNC workload, memory, transaction, user, and RFC usage (St03n)
Short dumps (ST22)
Object lock (SM12)
Failed updates (SM13)
System log analysis (SM21)
Batch job statistics (SM37)
Outbound queues (SMQ1)
Inbound queues (SMQ2)
Transactional RFC (SM59)
STMS change transport system metrics (STMS)

IBM Db2 data


Database availability
Number of connections, logical reads, and physical reads
Waits and current locks
Top 20 runtimes and executions

What is the architecture?


The following diagram shows, at a high level, how Azure Monitor for SAP solutions
collects data from the SAP HANA database. The architecture is the same if SAP HANA is
deployed on Azure VMs or Azure Large Instances.

Important points about the architecture include:

You can monitor multiple instances of a component type across multiple SAP
systems (SIDs) within a virtual network by using a single resource of Azure Monitor
for SAP solutions. For example, you can monitor multiple HANA databases, HA
clusters, Microsoft SQL Server instances, and SAP NetWeaver systems of multiple
SIDs.
The architecture diagram shows the SAP HANA provider as an example. You can
configure multiple providers for corresponding components to collect data from
those components. Examples include HANA database, HA cluster, Microsoft SQL
Server instance, and SAP NetWeaver.

The key components of the architecture are:

The Azure portal, where you access Azure Monitor for SAP solutions.
The Azure Monitor for SAP solutions resource, where you view monitoring data.
The managed resource group, which is deployed automatically as part of the Azure
Monitor for SAP solutions resource's deployment. Inside the managed resource
group, resources like these help collect data:
An Azure Functions resource hosts the monitoring code. This logic collects data
from the source systems and transfers the data to the monitoring framework.
An Azure Key Vault resource holds the SAP HANA database credentials and
stores information about providers.
A Log Analytics workspace is the destination for storing data. Optionally, you
can choose to use an existing workspace in the same subscription as your Azure
Monitor for SAP solutions resource at deployment.
A storage account is associated with the Azure Functions resource. It's used to
manage triggers and executions of logging functions.

Azure Monitor workbooks provide customizable visualization of the data in Log
Analytics. To automatically refresh your workbooks or visualizations, pin the items to the
Azure dashboard. The maximum refresh frequency is every 30 minutes.

You can also use Kusto Query Language (KQL) to run log queries against the raw tables
inside the Log Analytics workspace.

How do you analyze logs?


Azure Monitor for SAP solutions doesn't support resource logs or activity logs. For a list
of the tables that Azure Monitor Logs uses for querying in Log Analytics, see the data
reference for monitoring SAP on Azure.

How do you make Kusto queries?


When you select Logs from the Azure Monitor for SAP solutions menu, Log Analytics
opens with the query scope set to the current instance of Azure Monitor for SAP
solutions. Log queries include only data from that resource. To run a query that includes
data from other accounts or data from other Azure services, select Logs from the Azure
Monitor menu. For more information, see Log query scope and time range in Azure
Monitor Log Analytics.

You can use Kusto queries to help you monitor your Azure Monitor for SAP solutions
resources. The following sample query gives you data from a custom log for a specified
time range. You can view the list of custom tables by expanding the Custom Logs
section. You can specify the time range and the number of rows. In this example, you
get five rows of data for your selected time range:

Kusto

Custom_log_table_name
| take 5
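
If you prefer to run the same KQL from a script, the Azure CLI can query the workspace directly. The following is a minimal sketch, assuming the az CLI with the log-analytics extension; the workspace GUID and table name are placeholders:

Bash

# Run a KQL query against the Log Analytics workspace from the command line.
# Replace the GUID with your workspace's customer ID and the table name with
# one of your custom log tables.
az monitor log-analytics query \
    --workspace "00000000-0000-0000-0000-000000000000" \
    --analytics-query "Custom_log_table_name | take 5" \
    --timespan "PT1H"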

How do you get alerts?


Azure Monitor alerts proactively notify you when important conditions are found in your
monitoring data. You can then identify and address problems in your system before
your customers notice them.
You can configure alerts in Azure Monitor for SAP solutions from the Azure portal. For
more information, see Configure alerts in Azure Monitor for SAP solutions with the
Azure portal.

How can you create Azure Monitor for SAP solutions resources?
You can deploy Azure Monitor for SAP solutions and configure providers by using the
Azure portal or Azure PowerShell.

What is the pricing?


Azure Monitor for SAP solutions is a free product. There's no license fee.

You're responsible for paying the cost of the underlying components in the managed
resource group. You're also responsible for consumption costs associated with data use
and retention. For more information, see:

Azure Functions pricing
Azure Key Vault pricing
Azure storage account pricing
Azure Log Analytics and alerts pricing

Next steps
For a list of custom logs relevant to Azure Monitor for SAP solutions and
information on related data types, see Data reference for Azure Monitor for SAP
solutions.
For information on providers available for Azure Monitor for SAP solutions, see
Azure Monitor for SAP solutions providers.
Azure Monitor for SAP solutions FAQ

This article provides answers to frequently asked questions (FAQ) about Azure Monitor
for SAP solutions.

Do I have to pay for Azure Monitor for SAP solutions?
There's no licensing fee for Azure Monitor for SAP solutions. However, you're
responsible for the cost of managed resource group components.

Do I need specific permissions on my Azure subscription to deploy Azure Monitor for SAP solutions?
You need to have contributor or owner access to deploy Azure Monitor for SAP
solutions.

In which regions is this service available?

The Azure Monitor for SAP solutions service is available in the following regions: East US
2, West US 2, East US, Central US, South Central US, West US 3, North Central US,
West Central US, West US, North Europe, West Europe, Australia East, Australia
Central, Australia Southeast, Central India, South India, East Asia, Southeast Asia,
Sweden Central, UK South, Germany West Central, Canada Central, Korea
Central, South Africa North, Switzerland North, Norway East, UAE North, France
Central, Japan East, and Brazil South.

Do I need to give permissions to allow the deployment of a managed resource group in my subscription?
No, explicit permissions aren't required.

Where does the collector virtual machine (VM) reside?

When you deploy Azure Monitor for SAP solutions, we recommend that you choose the
same virtual network for your monitoring resource as your SAP HANA server, so the
collector VM resides in the same virtual network as the SAP HANA server. If you're using
a non-HANA database, the collector VM resides in the same virtual network as that
database.

Which versions of HANA are supported?


HANA 1.0 SPS 12 (Rev. 120 or higher) and HANA 2.0 SPS03 or higher are supported.

Which HANA deployment configurations are supported?
The following configurations are supported:

Single node (scale-up) and multi-node (scale-out).
Single database container (HANA 1.0 SPS 12) and multiple database containers (HANA 1.0 SPS 12 or HANA 2.0).
Auto host failover (n+1) and HSR.

Which SQL Server versions are supported?
SQL Server 2012 SP4 or higher.

Which SQL Server configurations are supported?
The following configurations are supported:

Default or named standalone instances in a virtual machine.
Clustered instances or instances in an Always On configuration, using either the virtual name of the clustered resource or the Always On listener name. Currently, no cluster-specific or Always On-specific metrics are collected.
Azure SQL Database (PaaS) isn't currently supported.

What happens if I accidentally delete the managed resource group?
The managed resource group is locked by default. So the chances of accidentally
deleting the managed resource group are minuscule. If you do delete the managed
resource group, Azure Monitor for SAP solutions will stop working. You'll have to deploy
a new Azure Monitor for SAP solutions resource and start over.

Which roles do I need in my Azure subscription to deploy an Azure Monitor for SAP solutions resource?
Contributor role.

What is the SLA on this product?


Online services provided free of charge are excluded from service level agreements. For
more information, read the licensing documents for online services on the Microsoft website.

Can I monitor my entire landscape through this solution?
You can currently monitor:

HANA database
The underlying infrastructure
The High-availability cluster
Microsoft SQL server
SAP NetWeaver availability
SAP application instance availability metrics

Does this service replace SAP Solution Manager?
No. Customers can still use SAP Solution Manager for business process monitoring.

What is the value of this service over traditional solutions like SAP HANA Cockpit/Studio?

Azure Monitor for SAP solutions isn't specific to the HANA database; it also supports AnyDB.

Which SAP NetWeaver versions are supported?
SAP NetWeaver 7.0 or higher.

Which SAP NetWeaver configurations are supported?
Supports ABAP, Java, and dual-stack SAP NetWeaver Application Server configurations.

Why do I need to unprotect methods for SAP NetWeaver application monitoring?
In SAP releases >= 7.3, most web service methods are protected by default. To fetch
availability and performance metrics by calling these methods, you need to unprotect
the following methods: GetQueueStatistic, ABAPGetWPTable, GetProcessList,
EnqGetStatistic, and GetSystemInstanceList.
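
As an illustration only: unprotecting is typically done with the service/protectedwebmethods profile parameter. The following sketch appends the parameter to an instance profile; the profile path, instance name, and exact method list are assumptions you should verify against the NetWeaver provider documentation.

Bash

# Append the parameter to the SAP instance profile, then restart the instance.
# Profile path and instance name are example values.
echo 'service/protectedwebmethods = SDEFAULT -GetQueueStatistic -ABAPGetWPTable -GetProcessList -EnqGetStatistic -GetSystemInstanceList' \
    >> /sapmnt/X01/profile/X01_DVEBMGS00_x01appvm1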

Is there any risk in unprotecting SAPCONTROL web methods?
In general, unprotecting SAPCONTROL web methods doesn't pose a security risk by
itself. You can restrict access to the unprotected web methods on server ports
5XX13/5XX14 of sapstartsrv by adding a filter to the SAP access control list. The
corresponding OSS note describes the required configuration.

Do I need to restart my SAP instances after performing system configuration for setting up the SAP NetWeaver provider?
Yes. After you unprotect methods through SAP configuration changes, you need to
restart the respective SAP systems so that the configuration changes take effect.

Next steps
Learn more about Azure Monitor for SAP solutions.

Monitor SAP on Azure


Quickstart: Deploy Azure Monitor for
SAP solutions by using the Azure portal
Article • 05/23/2023

In this quickstart, you get started with Azure Monitor for SAP solutions by using the
Azure portal to deploy resources and configure providers.

Prerequisites
If you don't have an Azure subscription, create a free account before you begin.

Set up a network before you create an Azure Monitor instance.

Create or choose a virtual network for Azure Monitor for SAP solutions that has
access to the source SAP system's virtual network.

Create a subnet with an address range of IPv4/25 or larger in the virtual network
that's associated with Azure Monitor for SAP solutions, with subnet delegation
assigned to Microsoft.Web/serverFarms.

Create a monitoring resource for Azure
Monitor for SAP solutions
1. Sign in to the Azure portal .

2. In the search box, search for and select Azure Monitor for SAP solutions.

3. On the Basics tab, provide the required values:


a. For Subscription, add the Azure subscription details.
b. For Resource group, create a new resource group or select an existing one
under the subscription.
c. For Resource name, enter the name for the Azure Monitor for SAP solutions
instance.
d. For Workload region, select the region where the monitoring resources are
created. Make sure that it matches the region for your virtual network.
e. Service region is where your proxy resource is created. The proxy resource
manages monitoring resources deployed in the workload region. The service
region is automatically selected based on your Workload region selection.
f. For Virtual network, select a virtual network that has connectivity to your SAP
systems for monitoring.
g. For Subnet, select a subnet that has connectivity to your SAP systems. You can
use an existing subnet or create a new one. It must be an IPv4/25 block or
larger.
h. For Log analytics, you can use an existing Log Analytics workspace or create a
new one. If you create a new workspace, it's created inside the managed
resource group along with other monitoring resources.
i. For Managed resource group name, enter a unique name. This name is used to
create a resource group that will contain all the monitoring resources. You can't
change this name after the resource is created.

4. On the Providers tab, you can start creating providers along with the monitoring
resource. You can also create providers later by going to the Providers tab in the
Azure Monitor for SAP solutions resource.

5. On the Tags tab, you can add tags to the monitoring resource. Make sure to add
all the mandatory tags if you have a tag policy in place.

6. On the Review + create tab, review the details and select Create.

Create a provider in Azure Monitor for SAP solutions
To create a provider, see the following articles:

SAP NetWeaver provider creation
SAP HANA provider creation
Microsoft SQL Server provider creation
IBM Db2 provider creation
Operating system provider creation
High-availability provider creation

Next steps
Learn more about Azure Monitor for SAP solutions.

Configure Azure Monitor for SAP solutions providers


Quickstart: Deploy Azure Monitor for
SAP solutions by using PowerShell
Article • 05/23/2023

In this quickstart, get started with Azure Monitor for SAP solutions by using the
Az.Workloads PowerShell module to create Azure Monitor for SAP solutions resources.
You create a resource group, set up monitoring, and create a provider instance.

Prerequisites
If you don't have an Azure subscription, create a free account before you begin.

If you choose to use PowerShell locally, this article requires that you install the Az
PowerShell module. Connect to your Azure account by using the Connect-
AzAccount cmdlet. For more information about installing the Az PowerShell
module, see Install Azure PowerShell. Alternately, you can use Azure Cloud Shell.

Install the Az.Workloads PowerShell module by running this command:

Azure PowerShell

Install-Module -Name Az.Workloads

If you have multiple Azure subscriptions, select the subscription in which the
resources should be billed by using the Set-AzContext cmdlet:

Azure PowerShell

Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000

Create or choose a virtual network for Azure Monitor for SAP solutions that has
access to the source SAP system's virtual network.

Create a subnet with an address range of IPv4/25 or larger in the virtual network
that's associated with Azure Monitor for SAP solutions, with subnet delegation
assigned to Microsoft.Web/serverFarms.

Create a resource group
Create an Azure resource group by using the New-AzResourceGroup cmdlet. A resource
group is a logical container in which Azure resources are deployed and managed as a
group.

The following example creates a resource group with the specified name and in the
specified location:

Azure PowerShell

New-AzResourceGroup -Name Contoso-AMS-RG -Location <myResourceLocation>

Create an SAP monitor


To create an SAP monitor, use the New-AzWorkloadsMonitor cmdlet. The following
example creates an SAP monitor for the specified subscription, resource group, and
resource name:

Azure PowerShell

$monitor_name = 'Contoso-AMS-Monitor'
$rg_name = 'Contoso-AMS-RG'
$subscription_id = '00000000-0000-0000-0000-000000000000'
$location = 'eastus'
$managed_rg_name = 'MRG_Contoso-AMS-Monitor'
$subnet_id = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ams-vnet-rg/providers/Microsoft.Network/virtualNetworks/ams-vnet-eus/subnets/Contoso-AMS-Monitor'
$route_all = 'RouteAll'

New-AzWorkloadsMonitor -Name $monitor_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -Location $location -AppLocation $location -ManagedResourceGroupName $managed_rg_name -MonitorSubnet $subnet_id -RoutingPreference $route_all

To get the properties of an SAP monitor, use the Get-AzWorkloadsMonitor cmdlet. The
following example gets the properties of an SAP monitor for the specified subscription,
resource group, and resource name:

Azure PowerShell

Get-AzWorkloadsMonitor -ResourceGroupName Contoso-AMS-RG -Name Contoso-AMS-Monitor

Create a provider

Create an SAP NetWeaver provider


To create an SAP NetWeaver provider, use the New-AzWorkloadsProviderInstance
cmdlet. The following example creates a NetWeaver provider for the specified
subscription, resource group, and resource name:

Azure PowerShell

Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000

In the following code, hostname is the host name or IP address for SAP Web Dispatcher
or the application server. SapHostFileEntry is the IP address, fully qualified domain
name, or host name of every instance that's listed in GetSystemInstanceList.

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-NW'

$SapClientId = '000'
$SapHostFileEntry = '["10.0.0.0 x01scscl1.ams.azure.com x01scscl1,10.0.0.0 x01erscl1.ams.azure.com x01erscl1,10.0.0.1 x01appvm1.ams.azure.com x01appvm1,10.0.0.2 x01appvm2.ams.azure.com x01appvm2"]'
$hostname = 'x01appvm0'
$instance_number = '00'
$password = 'Password@123'
$sapportNumber = '8000'
$sap_sid = 'X01'
$sap_username = 'AMS_NW'

$providerSetting = New-AzWorkloadsProviderSapNetWeaverInstanceObject -SapClientId $SapClientId -SapHostFileEntry $SapHostFileEntry -SapHostname $hostname -SapInstanceNr $instance_number -SapPassword $password -SapPortNumber $sapportNumber -SapSid $sap_sid -SapUsername $sap_username -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Create an SAP HANA provider


To create an SAP HANA provider, use the New-AzWorkloadsProviderInstance cmdlet.
The following example creates a HANA provider for the specified subscription, resource
group, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-HANA'

$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'SYSTEM'
$password = 'password@123'
$dbName = 'SYSTEMDB'
$instance_number = '00'

$providerSetting = New-AzWorkloadsProviderHanaDbInstanceObject -Name $dbName -Password $password -Username $username -Hostname $hostname -InstanceNumber $instance_number -SapSid $sap_sid -SqlPort 1433 -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Create an operating system provider


To create an operating system provider, use the New-AzWorkloadsProviderInstance
cmdlet. The following example creates an operating system provider for the specified
subscription, resource group, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-OS'

$hostname = 'http://10.0.0.0:9100/metrics'
$sap_sid = 'X01'

$providerSetting = New-AzWorkloadsProviderPrometheusOSInstanceObject -PrometheusUrl $hostname -SapSid $sap_sid -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Create a high-availability cluster provider


To create a high-availability cluster provider, use the New-AzWorkloadsProviderInstance
cmdlet. The following example creates a high-availability cluster provider for the
specified subscription, resource group, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-HA'

$PrometheusHa_Url = 'http://10.0.0.0:44322/metrics'
$sap_sid = 'X01'
$cluster_name = 'haCluster'
$hostname = '10.0.0.0'

$providerSetting = New-AzWorkloadsProviderPrometheusHaClusterInstanceObject -ClusterName $cluster_name -Hostname $hostname -PrometheusUrl $PrometheusHa_Url -Sid $sap_sid -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Create a Microsoft SQL Server provider


To create a Microsoft SQL Server provider, use the New-AzWorkloadsProviderInstance
cmdlet. The following example creates a SQL Server provider for the specified
subscription, resource group, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-SQL'

$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'AMS_SQL'
$password = 'Password@123'
$port = '1433'

$providerSetting = New-AzWorkloadsProviderSqlServerInstanceObject -Password $password -Port $port -Username $username -Hostname $hostname -SapSid $sap_sid -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Create an IBM Db2 provider
To create an IBM Db2 provider, use the New-AzWorkloadsProviderInstance cmdlet. The
following example creates an IBM Db2 provider for the specified subscription, resource
group, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-DB2'

$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'AMS_DB2'
$password = 'password@123'
$dbName = 'X01'
$port = '5912'

$providerSetting = New-AzWorkloadsProviderDB2InstanceObject -Name $dbName -Password $password -Port $port -Username $username -Hostname $hostname -SapSid $sap_sid -SslPreference Disabled

New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting

Get properties of a provider instance


To get the properties of a provider instance, use the Get-AzWorkloadsProviderInstance
cmdlet. The following example gets the properties of:

A provider instance for the specified subscription.
The resource group.
The SAP monitor name.
The resource name.

Azure PowerShell

Get-AzWorkloadsProviderInstance -ResourceGroupName Contoso-AMS-RG -SapMonitorName Contoso-AMS-Monitor

Clean up resources
If you don't need the resources that you created in this article, you can delete them by
using the following examples.

Delete the provider instance


To remove a provider instance, use the Remove-AzWorkloadsProviderInstance cmdlet.
The following example deletes an IBM Db2 provider instance for the specified
subscription, resource group, SAP monitor name, and resource name:

Azure PowerShell

$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-DB2'

Remove-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id

Delete the SAP monitor


To remove an SAP monitor, use the Remove-AzWorkloadsMonitor cmdlet. The following
example deletes an SAP monitor for the specified subscription, resource group, and
monitor name:

Azure PowerShell

$monitor_name = 'Contoso-AMS-Monitor'
$rg_name = 'Contoso-AMS-RG'
$subscription_id = '00000000-0000-0000-0000-000000000000'

Remove-AzWorkloadsMonitor -Name $monitor_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id

Delete the resource group


The following example deletes the specified resource group and all the resources in it.

Caution

If resources outside the scope of this article exist in the specified resource group,
they'll also be deleted.
Azure PowerShell

Remove-AzResourceGroup -Name Contoso-AMS-RG

Next steps
Learn more about Azure Monitor for SAP solutions.

Monitor SAP on Azure


What are providers in Azure Monitor for
SAP solutions?
Article • 06/16/2023

In the context of Azure Monitor for SAP solutions, a provider contains the connection
information for a corresponding component and helps to collect data from there. There
are multiple provider types. For example, an SAP HANA provider is configured for a
specific component within the SAP landscape, like an SAP HANA database. You can
configure an Azure Monitor for SAP solutions resource (also known as an SAP monitor
resource) with multiple providers of the same type or multiple providers of multiple
types.

You can choose to configure different provider types for data collection from the
corresponding component in your SAP landscape. For example, you can configure one
provider for the SAP HANA provider type, another provider for the high-availability
cluster provider type, and so on.

You can also configure multiple providers of a specific provider type to reuse the same
SAP monitor resource and associated managed group. For more information, see
Manage Azure Resource Manager resource groups by using the Azure portal.

We recommend that you configure at least one provider when you deploy an Azure
Monitor for SAP solutions resource. By configuring a provider, you start data collection
from the corresponding component for which the provider is configured.
If you don't configure any providers at the time of deployment, the Azure Monitor for
SAP solutions resource is still deployed, but no data is collected. You can add providers
after deployment through the SAP monitor resource in the Azure portal. You can add or
delete providers from the SAP monitor resource at any time.

Provider type: SAP NetWeaver


You can configure one or more providers of the provider type SAP NetWeaver to enable
data collection from the SAP NetWeaver layer. The Azure Monitor for SAP solutions
NetWeaver provider uses the existing:

SAPControl web service interface to retrieve the appropriate information.
SAP RFC ability to collect more information from the SAP system by using standard SAP RFC.

With the SAP NetWeaver provider, you can get the:

SAP system and application server availability (for example, instance process
availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS
Watchdog) (SAPOsControl).
Work process usage statistics and trends (SAPOsControl).
Enqueue lock statistics and trends (SAPOsControl).
Queue usage statistics and trends (SAPOsControl).
SMON metrics (Tcode - /SDF/SMON) (RFC).
SWNC workload, memory, transaction, user, RFC usage (Tcode - St03n) (RFC).
Short dumps (Tcode - ST22) (RFC).
Object lock (Tcode - SM12) (RFC).
Failed updates (Tcode - SM13) (RFC).
System logs analysis (Tcode - SM21) (RFC).
Batch jobs statistics (Tcode - SM37) (RFC).
Outbound queues (Tcode - SMQ1) (RFC).
Inbound queues (Tcode - SMQ2) (RFC).
Transactional RFC (Tcode - SM59) (RFC).
STMS Change Transport System metrics (Tcode - STMS) (RFC).

Configuring the SAP NetWeaver provider requires:

For SOAP web methods:

Fully qualified domain name (FQDN) of the SAP Web Dispatcher or the SAP application server.
SAP system ID and instance number.
Host file entries of all SAP application servers that get listed via the SAPControl GetSystemInstanceList web method.

For SOAP+RFC:

FQDN of the SAP Web Dispatcher or the SAP application server.
SAP system ID and instance number.
SAP client ID, HTTP port, and SAP username and password for login.
Host file entries of all SAP application servers that get listed via the SAPControl GetSystemInstanceList web method.

For more information, see Configure SAP NetWeaver for Azure Monitor for SAP
solutions.
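
To confirm that the SAPControl web service responds, and to enumerate the instances whose host file entries you need, you can call sapcontrol on an application server. A minimal sketch; the instance number is a placeholder:

Bash

# Lists every instance of the SAP system. Each hostname returned here needs
# a host file entry when you configure the NetWeaver provider.
sapcontrol -nr 00 -function GetSystemInstanceList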

Provider type: SAP HANA


You can configure one or more providers of the provider type SAP HANA to enable data
collection from the SAP HANA database. The SAP HANA provider connects to the SAP
HANA database over the SQL port. The provider pulls data from the database and
pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider
collects data every minute from the SAP HANA database.

With the SAP HANA provider, you can see the:


Underlying infrastructure usage.
SAP HANA host status.
SAP HANA system replication.
SAP HANA backup data.
Fetching services.
Network throughput between the nodes in a scaleout system.
SAP HANA long-idling cursors.
SAP HANA long-running transactions.
Checks for configuration parameter values.
SAP HANA uncommitted write transactions.
SAP HANA disk fragmentation.
SAP HANA statistics server health.
SAP HANA high memory usage service.
SAP HANA blocking transactions.

Configuring the SAP HANA provider requires the:

Host IP address.
HANA SQL port number.
SYSTEMDB username and password.

We recommend that you configure the SAP HANA provider against SYSTEMDB.
However, you can configure more providers against other database tenants.

For more information, see Configure SAP HANA provider for Azure Monitor for SAP
solutions.
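
Before you configure the provider, you can verify the SYSTEMDB connection details from the monitoring virtual network. A minimal sketch using hdbsql from the SAP HANA client; host, instance number, and password are placeholders:

Bash

# Port 3<instance number>13 is the SYSTEMDB SQL port; instance 00 gives 30013.
# Lists the tenant databases and their status to confirm the credentials work.
hdbsql -n 10.0.0.0:30013 -d SYSTEMDB -u SYSTEM -p 'password@123' \
    "SELECT DATABASE_NAME, ACTIVE_STATUS FROM SYS.M_DATABASES"
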
Provider type: SQL Server

You can configure one or more SQL Server providers to enable data collection from SQL
Server on virtual machines. The SQL Server provider connects to SQL Server over the
SQL port. It then pulls data from the database and pushes it to the Log Analytics
workspace in your subscription. Configure SQL Server for SQL authentication and for
signing in with the SQL Server username and password. Set the SAP database as the
default database for the provider. The SQL Server provider collects data every 60
seconds up to every hour from SQL Server.

With the SQL Server provider, you can get the:

Underlying infrastructure usage.
Top SQL statements.
Top largest table.
Problems recorded in the SQL Server error log.
Blocking processes and others.

Configuring the SQL Server provider requires the:

SAP system ID.
Host IP address.
SQL Server port number.
SQL Server username and password.

For more information, see Configure SQL Server for Azure Monitor for SAP solutions.
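
For example, you can confirm that SQL authentication works and check the login's default database before registering the provider. A sketch using sqlcmd; the server address, login, and password are placeholders:

Bash

# Confirm the SQL login and inspect its default database.
sqlcmd -S 10.0.0.0,1433 -U AMS_SQL -P 'Password@123' \
    -Q "SELECT name, default_database_name FROM sys.sql_logins WHERE name = 'AMS_SQL';"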

Provider type: High-availability cluster


You can configure one or more providers of the provider type high-availability cluster to
enable data collection from the Pacemaker cluster within the SAP landscape. The high-
availability cluster provider connects to Pacemaker by using the ha_cluster_exporter
for SUSE-based clusters and by using Performance Co-Pilot for RHEL-based clusters.
Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the
Log Analytics workspace in your subscription. The high-availability cluster provider
collects data every 60 seconds from Pacemaker.

With the high-availability cluster provider, you can get the:

Cluster status represented as a roll-up of node and resource status.
Location constraints.
Trends.
Others.
To configure a high-availability cluster provider, two primary steps are involved:

1. Install ha_cluster_exporter in each node within the Pacemaker cluster.

   You have two options for installing ha_cluster_exporter:

   Use Azure Automation scripts to deploy a high-availability cluster. The scripts install ha_cluster_exporter on each cluster node.
   Do a manual installation.

2. Configure a high-availability cluster provider for each node within the Pacemaker cluster.

   To configure the high-availability cluster provider, the following information is required:

   Name: A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance.
   Prometheus endpoint: http://<servername or ip address>:9664/metrics.
   SID: For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored.
   Cluster name: The cluster name used when you're creating the cluster. You can find the cluster name in the cluster property cluster-name.
   Hostname: The Linux hostname of the virtual machine (VM).
For more information, see Create a high-availability cluster provider for Azure Monitor
for SAP solutions.
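
Before you register the provider, it can help to confirm that the exporter endpoint answers from the monitoring subnet. A minimal check, with a placeholder hostname:

Bash

# Quick reachability check of the ha_cluster_exporter endpoint.
curl -s http://hanadb-node1:9664/metrics | head -n 20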

Provider type: OS (Linux)


You can configure one or more providers of the provider type OS (Linux) to enable data
collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal
or VM nodes by using the Node_Exporter endpoint. It then pulls data from the nodes
and pushes it to the Log Analytics workspace in your subscription. The OS (Linux)
provider collects data every 60 seconds for most of the metrics from the nodes.

With the OS (Linux) provider, you can get the:

CPU usage and CPU usage by process.
Disk usage and I/O read and write.
Memory distribution, memory usage, and swap memory usage.
Network usage and the network inbound and outbound traffic details.

To configure an OS (Linux) provider, two primary steps are involved:

1. Install Node_Exporter on each BareMetal or VM node. You have two options for installing Node_Exporter:

   For automated installation with Ansible, use Node_Exporter on each BareMetal or VM node to install the OS (Linux) provider.
   Do a manual installation.

2. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. To configure the OS (Linux) provider, the following information is required:

   Name: A name for this provider that's unique to the Azure Monitor for SAP solutions instance.
   Node Exporter endpoint: Usually http://<servername or ip address>:9100/metrics.

   Port 9100 is exposed for the Node_Exporter endpoint.

For more information, see Configure Linux provider for Azure Monitor for SAP solutions.

Warning

Make sure Node_Exporter keeps running after the node reboots.
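
One way to keep Node_Exporter running across reboots on a systemd-based distribution is a service unit. The following is a minimal sketch, assuming the binary is installed at /usr/local/bin/node_exporter:

Bash

# Register Node_Exporter as a systemd service so it restarts on failure and
# starts automatically after a reboot.
sudo tee /etc/systemd/system/node_exporter.service >/dev/null <<'EOF'
[Unit]
Description=Prometheus Node Exporter
After=network-online.target

[Service]
ExecStart=/usr/local/bin/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter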

Provider type: IBM Db2


You can configure one or more IBM Db2 providers to enable data collection from IBM
Db2 servers. The Db2 Server provider connects to the database over a specific port. It
then pulls data from the database and pushes it to the Log Analytics workspace in your
subscription. The Db2 Server provider collects data every 60 seconds up to every hour
from the Db2 Server.

With the IBM Db2 provider, you can get the:

Database availability.
Number of connections.
Logical and physical reads.
Waits and current locks.
Top 20 runtime and executions.

Configuring the IBM Db2 provider requires the:

SAP system ID.
Host IP address.
Database name.
Port number of the Db2 Server to connect to.
Db2 Server username and password.

For more information, see Create IBM Db2 provider for Azure Monitor for SAP solutions.

Next steps
Learn how to deploy Azure Monitor for SAP solutions from the Azure portal.

Deploy Azure Monitor for SAP solutions by using the Azure portal
Set up a network for Azure Monitor for
SAP solutions
Article • 04/29/2024

In this how-to guide, you learn how to configure an Azure virtual network so that you
can deploy Azure Monitor for SAP solutions. You learn how to:

Create a new subnet for use with Azure Functions.
Set up outbound internet access to the SAP environment that you want to monitor.

Create a new subnet


Azure Functions is the data collection engine for Azure Monitor for SAP solutions. You
must create a new subnet to host Azure Functions.

Create a new subnet with an IPv4/25 block or larger because you need at least 100 IP
addresses for monitoring resources. After you successfully create a subnet, verify the
following steps to ensure connectivity between the Azure Monitor for SAP solutions
subnet and your SAP environment subnet:

If both the subnets are in different virtual networks, do a virtual network peering
between the virtual networks.
If the subnets are associated with user-defined routes, make sure the routes are
configured to allow traffic between the subnets.
If the SAP environment subnets have network security group (NSG) rules, make
sure the rules are configured to allow inbound traffic from the Azure Monitor for
SAP solutions subnet.
If you have a firewall in your SAP environment, make sure the firewall is configured
to allow inbound traffic from the Azure Monitor for SAP solutions subnet.

For more information, see how to integrate your app with an Azure virtual network.
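
As a sketch, you can create such a delegated subnet with the Azure CLI; the resource group, virtual network name, and address range here are placeholders:

Bash

# Create a /25 subnet delegated to Microsoft.Web/serverFarms for Azure Functions.
az network vnet subnet create \
    --resource-group Contoso-AMS-RG \
    --vnet-name ams-vnet-eus \
    --name ams-monitor-subnet \
    --address-prefixes 10.1.0.0/25 \
    --delegations Microsoft.Web/serverFarms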

Use Custom DNS for your virtual network


This section only applies if you're using Custom DNS for your virtual network. Add the IP
address 168.63.129.16, which points to Azure DNS Server. This arrangement resolves the
storage account and other resource URLs that are required for proper functioning of
Azure Monitor for SAP solutions.
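
A minimal sketch of adding that address with the Azure CLI; the resource names and the existing custom DNS server are placeholders, and you should keep your current DNS servers in the list:

Bash

# Append Azure's virtual public DNS IP (168.63.129.16) to the virtual
# network's custom DNS servers.
az network vnet update \
    --resource-group Contoso-AMS-RG \
    --name ams-vnet-eus \
    --dns-servers 10.0.0.4 168.63.129.16
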
Configure outbound internet access

In many use cases, you might choose to restrict or block outbound internet access to
your SAP network environment. However, Azure Monitor for SAP solutions requires
network connectivity between the subnet that you configured and the systems that you
want to monitor. Before you deploy an Azure Monitor for SAP solutions resource, you
must configure outbound internet access or the deployment fails.

There are multiple methods to address restricted or blocked outbound internet access.
Choose the method that works best for your use case:

Use the Route All feature in Azure Functions
Use service tags with an NSG in your virtual network

Use Route All


Route All is a standard feature of virtual network integration in Azure Functions, which is
deployed as part of Azure Monitor for SAP solutions. Enabling or disabling this setting
only affects traffic from Azure Functions. This setting doesn't affect any other incoming
or outgoing traffic within your virtual network.

You can configure the Route All setting when you create an Azure Monitor for SAP
solutions resource through the Azure portal. If your SAP environment doesn't allow
outbound internet access, disable Route All. If your SAP environment allows outbound
internet access, keep the default setting to enable Route All.

You can only use this option before you deploy an Azure Monitor for SAP solutions
resource. It's not possible to change the Route All setting after you create the Azure
Monitor for SAP solutions resource.

Allow inbound traffic


If you have NSG or User-Defined Route rules that block inbound traffic to your SAP
environment, you must modify the rules to allow the inbound traffic. Also, depending on
the types of providers you're trying to add, you must unblock a few ports, as shown in
the following table.

Provider type | Port number
Prometheus OS | 9100
Prometheus HA Cluster on RHEL | 44322
Prometheus HA Cluster on SUSE | 9664
SQL Server | 1433 (can be different if you aren't using the default port)
DB2 Server | 25000 (can be different if you aren't using the default port)
SAP HANA DB | 3<instance number>13, 3<instance number>15
SAP NetWeaver | 5<instance number>13, 5<instance number>15

Use service tags


If you use NSGs, you can create Azure Monitor for SAP solutions-related virtual network
service tags to allow appropriate traffic flow for your deployment. A service tag
represents a group of IP address prefixes from a specific Azure service.

You can use this option after you deploy an Azure Monitor for SAP solutions resource.

1. Find the subnet associated with your Azure Monitor for SAP solutions managed
resource group:
a. Sign in to the Azure portal .
b. Search for or select the Azure Monitor for SAP solutions service.
c. On the Overview page for Azure Monitor for SAP solutions, select your Azure
Monitor for SAP solutions resource.
d. On the managed resource group's page, select the Azure Functions app.
e. On the app's page, select the Networking tab. Then select VNET Integration.
f. Review and note the subnet details. You need the subnet's IP address to create
rules in the next step.
2. Select the subnet's name to find the associated NSG. Note the NSG's information.

3. Set new NSG rules for outbound network traffic:


a. Go to the NSG resource in the Azure portal.
b. On the NSG's menu, under Settings, select Outbound security rules.
c. Select Add to add the following new rules:

Priority | Name | Port | Protocol | Source | Destination | Action
450 | allow_monitor | 443 | TCP | Azure Functions subnet | Azure Monitor | Allow
501 | allow_keyVault | 443 | TCP | Azure Functions subnet | Azure Key Vault | Allow
550 | allow_storage | 443 | TCP | Azure Functions subnet | Storage | Allow
600 | allow_azure_controlplane | 443 | Any | Azure Functions subnet | Azure Resource Manager | Allow
650 | allow_ams_to_source_system | Any | Any | Azure Functions subnet | Virtual network or comma-separated IP addresses of the source system | Allow
660 | deny_internet | Any | Any | Any | Internet | Deny

The Azure Monitor for SAP solutions subnet IP address refers to the IP of the subnet
associated with your Azure Monitor for SAP solutions resource. To find the subnet, go to
the Azure Monitor for SAP solutions resource in the Azure portal. On the Overview
page, review the vNet/subnet value.

For the rules that you create, allow_ams_to_source_system must have a lower priority
value than deny_internet so that it's evaluated first. All other rules also need a lower
priority value than allow_ams_to_source_system. The remaining order of these other
rules is interchangeable.
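
If you script your NSGs, one of these outbound rules can be created as in the following sketch. The resource group and NSG name are placeholders; for the source, your Azure Functions subnet's address prefix is more precise than the broader VirtualNetwork tag used here:

Bash

# Create the allow_monitor outbound rule from the table above.
az network nsg rule create \
    --resource-group Contoso-AMS-RG \
    --nsg-name ams-subnet-nsg \
    --name allow_monitor \
    --priority 450 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes AzureMonitor \
    --destination-port-ranges 443
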
Next steps
Quickstart: Set up Azure Monitor for SAP solutions through the Azure portal
Quickstart: Set up Azure Monitor for SAP solutions with PowerShell
Configure alerts in Azure Monitor for
SAP solutions in Azure portal
Article • 01/29/2024

In this how-to guide, you learn how to configure alerts in Azure Monitor for SAP
solutions. You can configure alerts and notifications from the Azure portal using its
browser-based interface.

Prerequisites
An Azure subscription.
A deployment of an Azure Monitor for SAP solutions resource with at least one
provider. You can configure providers for:
The SAP application (NetWeaver)
SAP HANA
Microsoft SQL Server
High availability (HA) Pacemaker clusters
IBM Db2

Create an alert rule


1. Sign in to the Azure portal .

2. In the Azure portal, browse and select your Azure Monitor for SAP solutions
resource. Make sure you have at least one provider configured for this resource.

3. Navigate to the workbook you want to use. For example, SAP HANA.

4. Select a HANA instance.


5. Select the Alerts button to view available Alert Templates.

6. Select Create rule to configure an alert of your choice.

7. For Alert threshold, enter your alert threshold.

8. For Provider instance, select a provider instance.

9. For Action group, select or create an action group to configure the notification
setting. You can edit frequency and severity information according to your
requirements.

10. Select Enable alert rule to create the alert rule.


11. Select Deploy alert rule to finish your alert rule configuration. You can choose to
see the alert template by selecting View template.

12. Navigate to Alert rules to view the newly created alert rule. When and if alerts are
fired, you can view them under Fired alerts.

View and manage alerts in a centralized experience (Preview)
This enhanced view introduces powerful capabilities that streamline alert management,
providing a unified view of all alerts and alert rules across various providers. This
consolidated approach enables you to efficiently manage and monitor alerts, improving
your overall experience with Azure Monitor for SAP Solutions.

Centralized Alert Management: Gain a holistic view of all alerts fired across
different providers within a single, intuitive interface. With the new Alerts
experience, you can easily track and manage alerts from various sources in one
place, providing a comprehensive overview of your SAP landscape's health.

Unified Alert Rules: Simplify your alert configuration by centralizing all alert rules
across different providers. This streamlined approach ensures consistency in rule
management, making it easier to define, update, and maintain alert rules for your
SAP solutions.

Grid View for Rule Status and Bulk Operations: Efficiently manage your alert rules
using the grid view, allowing you to see the status of all rules and make bulk
changes with ease. Enable or disable multiple rules simultaneously, providing a
seamless experience for maintaining the health of your SAP environment.

Alert Action Group Management: Take control of your alert action groups directly
from the new Alerts experience. Manage and configure alert action groups
effortlessly, ensuring that the right stakeholders are notified promptly when critical
alerts are triggered.
Alert Processing Rules for Maintenance Periods: Enable alert processing rules, a
powerful feature that allows you to take specific actions or suppress alerts during
maintenance periods. Customize the behavior of alerts to align with your
maintenance schedule, minimizing unnecessary notifications and disruptions.

Export to CSV: Facilitate data analysis and reporting by exporting fired alerts and
alert rules to CSV format. This feature empowers you to share, analyze, and archive
alert data seamlessly, supporting your organization's reporting and compliance
requirements.

To access the new Alerts experience in Azure Monitor for SAP Solutions:

1. Navigate to the Azure portal.


2. Select your Azure Monitor for SAP Solutions instance.

3. Select the Alerts tab to explore the enhanced alert management capabilities.

Next steps
Learn more about Azure Monitor for SAP solutions.

Monitor SAP on Azure


Enable TLS 1.2 or later in Azure Monitor
for SAP solutions
Article • 05/23/2023

In this article, learn about secure communication with TLS 1.2 or later in Azure Monitor
for SAP solutions.

Azure Monitor for SAP solutions resources and their associated managed resource
group components are deployed within a virtual network in a subscription. Azure
Functions is one component in a managed resource group. Azure Functions connects to
an appropriate SAP system by using connection properties that you provide, pulls
required telemetry data, and pushes that data to Log Analytics.

Azure Monitor for SAP solutions provides encryption of monitoring telemetry data in
transit by using approved cryptographic protocols and algorithms. Traffic between Azure
Functions and SAP systems is encrypted with TLS 1.2 or later. By choosing this option,
you can enable secure communication.

Enabling TLS 1.2 or later for telemetry data in transit is an optional feature. You can
choose to enable or disable this feature according to your requirements.

Supported certificates
To enable secure communication in Azure Monitor for SAP solutions, you can choose to
use either a root certificate or a server certificate.

We highly recommend that you use root certificates. For root certificates, Azure Monitor
for SAP solutions supports only certificates from certificate authorities (CAs) that
participate in the Microsoft Trusted Root Program.

Certificates must be signed by a trusted root authority. Self-signed certificates are not
supported.
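
Before you upload a certificate, you can inspect it locally with openssl to confirm the issuer and validity period. This is a generic sketch with placeholder file names, not a required step:

Bash

# Inspect the issuer, subject, and validity period of a certificate
# (cert.pem is a placeholder file name).
openssl x509 -in cert.pem -noout -issuer -subject -dates

# Optionally, verify a server certificate against its root CA bundle.
openssl verify -CAfile root-ca.pem server-cert.pem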

How does it work?


When you deploy an Azure Monitor for SAP solutions resource, a managed resource
group and its components are automatically deployed. Managed resource group
components include Azure Functions, Log Analytics, Azure Key Vault, and a storage
account. This storage account holds certificates that are needed to enable secure
communication with TLS 1.2 or later.
During the creation of providers in Azure Monitor for SAP solutions, you choose to
enable or disable secure communication. If you enable it, you can then choose which
type of certificate you want to use.

If you select a root certificate, you need to verify that it comes from a Microsoft-
supported CA. You can then continue to create the provider instance. Subsequent data
in transit is encrypted through this root certificate.

If you select a server certificate, make sure that it's signed by a trusted CA. After you
upload the certificate, it's stored in a storage account within the managed resource
group in the Azure Monitor for SAP solutions resource. Subsequent data in transit is
encrypted through this certificate.
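
To confirm that an endpoint negotiates TLS 1.2, you can run a quick check with openssl from any host that can reach it. The host and port below are placeholders:

Bash

# Force a TLS 1.2 handshake and print the negotiated protocol and cipher.
openssl s_client -connect <host>:<port> -tls1_2 </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'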

7 Note

Each provider type might have prerequisites that you must fulfill to enable secure
communication.

Next steps
Configure Azure Monitor for SAP solutions providers
Enable Insights to troubleshoot SAP
workload issues (preview)
Article • 01/18/2024

) Important

Insights in Azure Monitor for SAP solutions is currently in PREVIEW. See the
Supplemental Terms of Use for Microsoft Azure Previews for legal terms that
apply to Azure features that are in beta, preview, or otherwise not yet released into
general availability.

The Insights capability in Azure Monitor for SAP solutions helps you troubleshoot
availability and performance issues on your SAP workloads. It helps you correlate key
SAP component issues with SAP logs, Azure platform metrics, and health events. In this
how-to guide, learn how to enable Insights in Azure Monitor for SAP solutions. You can
use SAP Insights only with the latest version of the service, Azure Monitor for SAP
solutions, not with Azure Monitor for SAP solutions (classic).

7 Note

This section applies to only Azure Monitor for SAP solutions.

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
An existing NetWeaver provider and, optionally, a HANA provider. To configure a
NetWeaver provider, see the how-to guides for NetWeaver provider configuration.
(Optional) Alerts set up for availability and/or performance issues on the
NetWeaver/HANA provider. To configure alerts, see the how-to guides for setting
up alerts in Azure Monitor for SAP solutions.

Steps to Enable Insights in Azure Monitor for SAP solutions
To enable Insights for Azure Monitor for SAP solutions, you need to:

1. Prerequisite - Unprotect methods


2. Provide required access

Unprotect the GetEnvironment method


Follow the steps to unprotect methods from the NetWeaver provider configuration page.
If you completed these steps during NetWeaver provider setup, you can skip this
section. Ensure that you have unprotected the GetEnvironment method in particular,
because this capability requires it to work.
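
As a quick sanity check, you can call the method with sapcontrol from a remote host where sapcontrol is available, such as a VM in the AMS subnet. This is a hedged sketch; an unauthenticated remote call succeeds only if the method is unprotected:

Bash

# Unauthenticated remote call; it succeeds only if GetEnvironment is
# unprotected (placeholders for hostname and instance number).
sapcontrol -host <sap-hostname> -nr <instance number> -function GetEnvironment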

Provide required access


To provide issue correlations with infrastructure, the Azure Monitor for SAP
solutions (AMS) service requires Reader access to the resource groups or subscriptions
that hold your SAP system infrastructure: virtual machines and virtual networks. You
can assign these roles by using either of the two methods described next.

Provide access using AMS portal experience

1. Open the AMS instance of your choice, go to the Insights tab under Monitoring
on the left navigation pane, and select Configure Insights.
2. Choose the 'Add role assignment' button to open the role assignment experience.

3. Choose the scope at which you want to assign the Reader role. You can
assign the Reader role to multiple resource groups at a time under a subscription
scope. Make sure that the chosen scopes encompass the SAP system's
infrastructure on Azure. Save the role assignments.

Provide access using a PowerShell script

This script gives your AMS instance Reader role permission over the subscriptions that
hold the SAP systems. Feel free to modify the script to scope it down to a resource
group or a set of virtual machines.

1. Download the onboarding script from GitHub


2. Go to the Azure portal and select the Cloud Shell tab from the menu bar at the
top. Refer to this guide to get started with Cloud Shell.
3. Switch from Bash to PowerShell.

4. Upload the script downloaded in the first step.


5. Navigate to the folder where the script is present using the command:

PowerShell

cd <script_path>

6. Set the AMS Resource/ARM ID with the command:

PowerShell

$armId = "<AMS ARM ID>"

7. If the VMs belong to a different subscription than AMS, set the list of subscriptions
in which VMs of the SAP system are present (use subscription IDs):

PowerShell

$subscriptions = "<Subscription ID 1>","<Subscription ID 2>"

) Important

To run this script successfully, ensure you have Contributor + User Access Admin or
Owner access on all subscriptions in the list. See steps to assign Azure roles.

8. Run the script that you uploaded in step 4 by using one of the following commands:

If $subscriptions was set:

PowerShell
.\AMS_AIOPS_SETUP.ps1 -ArmId $armId -subscriptions $subscriptions

If $subscriptions wasn't set:

PowerShell

.\AMS_AIOPS_SETUP.ps1 -ArmId $armId

) Important

You might have to wait up to 30 minutes for your AMS to start receiving
metadata of the infrastructure that it needs to monitor.
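
To confirm that the assignments took effect, you can list them with the Azure CLI. This is a hedged sketch; the object ID of the AMS managed identity is a placeholder you need to look up:

Bash

# List Reader assignments for the AMS managed identity at the subscription scope.
az role assignment list \
  --assignee "<ams-managed-identity-object-id>" \
  --role Reader \
  --scope "/subscriptions/<subscription-id>" \
  --output table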

Using Insights on Azure Monitor for SAP solutions (AMS)
Insights are provided for two categories of issues:

Availability issues
Performance degradations

) Important

As a user of the Insights capability, you need Reader access on all virtual
machines that host the SAP systems you're monitoring with AMS. This access
ensures that you can view Azure Monitor metrics and Resource Health events of
these virtual machines in the context of SAP issues. See the steps to assign Azure
roles.

Availability Insights
This capability gives you an overview of the availability of your SAP system in
one place. You can also correlate SAP availability with Azure platform VM availability
and its health events, easing the overall root-cause analysis process.

Steps to use availability insights


1. Open the AMS instance of your choice and go to the Insights tab under Monitoring
on the left navigation pane.

2. If you completed all the steps mentioned, you see a screen asking for context to
be set up. You can set the Time range, SID, and the provider (optional, All
selected by default).
3. At the top, you see all the fired alerts related to SAP system and instance
availability.
4. Next, you see the SAP system availability trend, categorized by VM and SAP
process. If you selected a fired alert in the previous step, you see these trends in
context with the fired alert. If not, these trends respect the time range you set on
the main Time range filter.

5. You can see the Azure virtual machine on which the process is hosted and the
corresponding availability trends for the combination. To view detailed insights,
select the 'Investigate' link.
6. It opens a context pane that shows you availability insights on the corresponding
virtual machine and the SAP application. It has two categories of insights:

Azure platform: VM health events filtered by the time range set, either by the
workbook filter or the selected alert. This pane also includes the VM availability
metric trend for the chosen VM.

SAP application: Process availability and contextual insights on the process,
such as error messages (SM21), lock entries (SM12), and canceled jobs (SM37),
which can help you find issues that might exist in parallel in the system at that
point in time.

Performance Insights
This capability gives you an overview of the performance of your SAP system in
one place. You can also correlate key SAP performance issues with related SAP
application logs, Azure platform utilization metrics, and SAP workload
configuration drifts, easing the overall root-cause analysis process.

Steps to use performance insights

1. Open the AMS instance of your choice and go to the Insights tab under Monitoring
on the left navigation pane.
2. At the top, you see all the fired alerts related to SAP application performance
degradations.

3. Next, you see key metrics related to performance issues and their trends during
the time range you chose.
4. To view detailed insights on issues, you can either choose to investigate a fired
alert or view insights for a key metric.
5. On investigating, you see a context pane that shows you four categories of
metrics in the context of the chosen issue or key metric.

Issue/Key metric details - Detailed visualizations of the key metric that
defines the problem.
SAP application - Visualizations of the key SAP logs that pertain to the issue
type.
Azure platform - Key Azure platform metrics that present an overview of the
virtual machine of the SAP system.
Configuration drift - Quality check violations on the SAP system.
6. This capability, with the set of metrics in the context of the issue, helps you
visually correlate trends of key metrics. This experience eases the root-cause
analysis of performance degradations observed in SAP workloads on Azure.

Scope of the preview

The preview provides insights for only a limited set of issues. We plan to extend this
capability to most of the issues supported by AMS alerts before this capability is
generally available (GA).

Availability insights let you detect and troubleshoot unavailability of the NetWeaver
system, NetWeaver instances, and the HANA database.
Performance insights are provided for NetWeaver metrics: high response
time (ST03) and long-running batch jobs.

Next steps
For information on providers available for Azure Monitor for SAP solutions, see
Azure Monitor for SAP solutions providers.
Configure SAP NetWeaver for Azure
Monitor for SAP solutions
Article • 07/21/2023

In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with
Azure Monitor for SAP solutions.

You can select between two connection types when configuring the SAP NetWeaver
provider to collect information from the SAP system. Metrics are collected by using:

SAP Control - The SAP start service provides multiple services, including
monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use
SAP Control, which is a SOAP web service interface that exposes these capabilities.
The SAP Control interface differentiates between protected and unprotected web
service methods. It's necessary to unprotect some methods to use Azure
Monitor for SAP solutions with NetWeaver.
SAP RFC - Azure Monitor for SAP solutions also provides the ability to collect
additional information from the SAP system by using standard SAP RFC. It's
available only as part of Azure Monitor for SAP solutions.

You can collect the following metrics by using the SAP NetWeaver provider:

SAP system and application server availability (for example, instance process
availability of dispatcher, ICM, Gateway, Message Server, Enqueue Server, and IGS
Watchdog) (SAP Control)
Work process usage statistics and trends (SAP Control)
Enqueue Lock statistics and trends (SAP Control)
Queue usage statistics and trends (SAP Control)
SMON Metrics (transaction code - /SDF/SMON) (RFC)
SWNC Workload, Memory, Transaction, User, RFC Usage (transaction code -
St03n) (RFC)
Short Dumps (transaction code - ST22) (RFC)
Object Lock (transaction code - SM12) (RFC)
Failed Updates (transaction code - SM13) (RFC)
System Logs Analysis (transaction code - SM21) (RFC)
Batch Jobs Statistics (transaction code - SM37) (RFC)
Outbound Queues (transaction code - SMQ1) (RFC)
Inbound Queues (transaction code - SMQ2) (RFC)
Transactional RFC (transaction code - SM59) (RFC)
STMS Change Transport System Metrics (transaction code - STMS) (RFC)
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.

Configure NetWeaver for Azure Monitor for SAP solutions
To configure the NetWeaver provider for the current Azure Monitor for SAP solutions
version, you'll need to:

1. Prerequisite - Unprotect methods for metrics


2. Prerequisite to enable RFC metrics
3. Add the NetWeaver provider

Refer to the troubleshooting section to resolve any issues you face while adding the
SAP NetWeaver provider.

Prerequisite: Unprotect methods for metrics


This step is mandatory when configuring the SAP NetWeaver provider. To fetch specific
metrics, you need to unprotect some methods in each SAP instance:

1. Open an SAP GUI connection to the SAP server.

2. Sign in with an administrative account.

3. Execute transaction RZ10.

4. Select the appropriate profile (Instance Profile is recommended).

5. Select Extended Maintenance > Change.

6. Select the profile parameter service/protectedwebmethods .

7. Change the value to:

Value

SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList -GetEnvironment -ABAPGetSystemWPTable
8. Select Copy.

9. Select Profile > Save to save the changes.

10. Restart the SAPStartSRV service on each instance in the SAP system. Restarting the
services doesn't restart the entire system. This process only restarts SAPStartSRV
(on Windows) or the daemon process (in Unix or Linux).

You must restart SAPStartSRV on each instance of the SAP system for the SAP
Control web methods to be unprotected. These read-only SOAP APIs are required
for the NetWeaver provider to fetch metric data from the SAP system. Failure to
unprotect these methods results in empty or missing visualizations on the
NetWeaver metric workbook.

a. On Windows systems, use the SAP Microsoft Management Console (MMC) or
SAP Management Console (MC) to restart the service. Right-click each instance.
Then, choose All Tasks > Restart Service.

b. On Linux systems, use the following command to restart the service. Replace
<instance number> with your SAP system's instance number.
Command

sapcontrol -nr <instance number> -function RestartService

c. Repeat the previous steps for each instance profile. Alternatively, in lower
environments, you can restart the entire SAP system instead.

PowerShell script to unprotect web methods

You can refer to the linked script to unprotect the web methods on an SAP Windows
virtual machine.

Prerequisite to enable RFC metrics


RFC metrics are supported only for AS ABAP applications and don't apply to SAP Java
systems. This step is mandatory when the selected connection type is SOAP+RFC.
Perform the following steps as a prerequisite to enable RFC:

1. Create or upload a role in the SAP NetWeaver ABAP system. Azure Monitor for SAP
solutions requires this role to connect to SAP. The role uses least-privileged
access. Download and unzip Z_AMS_NETWEAVER_MONITORING.zip
a. Sign in to your SAP system.
b. Use the transaction code PFCG > select Role Upload in the menu.
c. Upload the Z_AMS_NETWEAVER_MONITORING.SAP file from the ZIP file.
d. Select Execute to generate the role. (Ensure that the profile is also generated
as part of the role upload.)

Transport to import role in SAP System

You can also refer to the linked transport to import the role in PFCG and generate the
profile to successfully configure the NetWeaver provider for your SAP system.

2. Create and authorize a new RFC user.


a. Create an RFC user.
b. Assign the role Z_AMS_NETWEAVER_MONITORING to the user. It's the role
that you uploaded in the previous section.

3. Enable SICF Services to access the RFC via the SAP Internet Communication
Framework (ICF)
a. Go to transaction code SICF.
b. Go to the service path /default_host/sap/bc/soap/ .
c. Activate the services wsdl, wsdl11, and RFC.
We also recommend checking that you enabled the ICF ports.

4. SMON - Enable SMON to monitor the system performance. Make sure the version
of ST-PI is SAPK-74005INSTPI.
You'll see an empty visualization in the workbook when it isn't configured.
a. Enable the SDF/SMON snapshot service for your system. Turn on daily
monitoring. For instructions, see SAP Note 2651881 .
b. Configure SDF/SMON metrics to be aggregated every minute.
c. We recommend scheduling SDF/SMON as a background job in your target SAP
client each minute.
d. If you notice an empty visualization in the workbook tab "System
Performance - CPU and Memory (/SDF/SMON)", apply the following SAP
note:
i. Release 740 SAPKB74006-SAPKB74025 - Release 755 until SAPK-
75502INSAPBASIS. For specific support package versions, refer to
SAP Note 2246160 .
ii. If the metric collection doesn't work with the preceding note, try
SAP Note 3268727 .

5. To enable secure communication

To enable TLS 1.2 or higher with the SAP NetWeaver provider, execute the steps
mentioned in this SAP document.

Check whether the SAP system is configured for secure communication with TLS 1.2 or
higher:
a. Go to transaction RZ10.
b. Open the DEFAULT profile, select Extended Maintenance, and select Change.
c. For TLS 1.2, the bit mask value is 544: PFS. If the TLS version is higher, the bit
mask is greater than 544.

Check the HTTPS port to be provided during the provider creation process:
a. Go to transaction SMICM.
b. From the menu, choose GOTO -> Services.
c. Verify that the HTTPS protocol is in Active status.
Adding NetWeaver provider
Ensure all the prerequisites are successfully completed. To add the NetWeaver provider:

1. Sign in to the Azure portal .

2. Go to the Azure Monitor for SAP solutions service page.

3. Select Create to open the resource creation page.

4. Enter information for the Basics tab.

5. Select the Providers tab. Then, select Add provider.

6. Configure the new provider:

a. For Type, select SAP NetWeaver.

b. For Name, provide a unique name for the provider.

c. For System ID (SID), enter the three-character SAP system identifier.

d. For Application Server, enter the IP address or the fully qualified domain name
(FQDN) of the SAP NetWeaver system to monitor. For example,
sapservername.contoso.com where sapservername is the hostname and
contoso.com is the domain. If you're using a hostname, make sure there's

connectivity from the virtual network that you used to create the Azure Monitor
for SAP solutions resource.

e. For Instance number, specify the instance number of SAP NetWeaver (00-99).

f. For Connection type, select either SOAP + RFC or SOAP, based on the metrics
to be collected (refer to the earlier section for details).

g. For SAP client ID, provide the SAP client identifier.

h. For SAP ICM HTTP Port, enter the port that the ICM is using, for example,
80(NN) where (NN) is the instance number.

i. For SAP username, enter the name of the user that you created to connect to
the SAP system.

j. For SAP password, enter the password for the user.

k. For Host file entries, provide the DNS mappings for all SAP VMs associated with
the SID. Enter all SAP application server and ASCS host file entries in Host file
entries. Enter host file mappings in comma-separated format. The expected
format for each entry is IP address, FQDN, hostname. For example: 192.X.X.X
sapservername.contoso.com sapservername,192.X.X.X
sapservername2.contoso.com sapservername2. To determine all SAP
hostnames associated with the SID, sign in to the SAP system as the sidadm
user. Then, run the following command, or use the script referenced below to
generate the host file entries.

Command to find a list of instances associated with a given SID

Bash

/usr/sap/hostctrl/exe/sapcontrol -nr <instancenumber> -function GetSystemInstanceList

Scripts to generate hostfile entries

We highly recommend following the detailed instructions in the link for
generating host file entries. These entries are crucial for the successful creation of
the NetWeaver provider for your SAP system.
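
If you prefer not to use the linked script, the following minimal sketch illustrates the expected output format. It assumes that getent resolves each SAP hostname to its FQDN as the canonical name; the host list is a placeholder for the hosts returned by GetSystemInstanceList:

Bash

#!/usr/bin/env bash
# Minimal sketch, not the official script: emit host file entries in the
# comma-separated "IP FQDN hostname" format that the provider expects.
entries=""
for host in sapservername sapservername2; do   # replace with your SAP hostnames
  ip=$(getent hosts "$host" | awk '{print $1}')
  fqdn=$(getent hosts "$host" | awk '{print $2}')
  entries+="${ip} ${fqdn} ${host},"
done
echo "${entries%,}"   # drop the trailing comma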

Troubleshooting for the SAP NetWeaver provider

Common issues while adding the NetWeaver provider:


1. Unable to reach the SAP hostname. ErrorCode: SOAPApiConnectionError

a. Check the input hostname, instance number, and host file mappings for the
hostname provided.

b. Follow the instructions for determining the host file entries in the Host file
entries section.

c. Ensure that the NSG/firewall isn't blocking port 5XX13 or 5XX14 (XX is the SAP
instance number).

d. Check whether the AMS and SAP VMs are in the same virtual network or are
attached through virtual network peering.

If they aren't attached, connect the virtual networks by using virtual network peering.

2. Check for unprotected updated rules. ErrorCode:
SOAPWebMethodsValidationFailed
After you restart the SAP service, check that your updated rules are applied to each
instance.

a. When signed in to the SAP system as sidadm , run the following command.
Replace <instance number> with your system's instance number.

Command

sapcontrol -nr <instance number> -function ParameterValue service/protectedwebmethods

b. When signed in as a non-sidadm user, run the following command. Replace
<instance number> with your system's instance number, <admin user> with your
administrator username, and <admin password> with the password.

Command

sapcontrol -nr <instance number> -function ParameterValue service/protectedwebmethods -user "<admin user>" "<admin password>"

c. Review the output. Ensure that it includes the method names GetQueueStatistic,
ABAPGetWPTable, EnqGetStatistic, GetProcessList, GetEnvironment, and
ABAPGetSystemWPTable.

d. Repeat the previous steps for each instance profile.

To validate the rules, run a test query against the web methods. Replace the
<hostname> with your hostname, <instance number> with your SAP instance

number, and the method name with the appropriate method.

PowerShell

$SAPHostName = "<hostname>"
$InstanceNumber = "<instance number>"
$Function = "ABAPGetWPTable"
[System.Net.ServicePointManager]::ServerCertificateValidationCallback =
{$true}
$sapcntrluri = "https://" + $SAPHostName + ":5" + $InstanceNumber +
"14/?wsdl"
$sapcntrl = New-WebServiceProxy -uri $sapcntrluri -namespace
WebServiceProxy -class sapcntrl
$FunctionObject = New-Object ($sapcntrl.GetType().NameSpace +
".$Function")
$sapcntrl.$Function($FunctionObject)
3. Ensure that the Internet Communication Framework port is open. ErrorCode:
RFCSoapApiNotEnabled

a. Sign in to the SAP system

b. Go to transaction code SICF.

c. Navigate to the service path /default_host/sap/bc/soap/ .

d. Right-click the ping service and choose Test Service. SAP starts your default
browser.

e. If the port can't be reached, or the test fails, open the port in the SAP VM.

i. For Linux, run the following commands. Replace <your port> with your
configured port.

Bash

sudo firewall-cmd --permanent --zone=public --add-port=<your port>/tcp

Bash

sudo firewall-cmd --reload

ii. For Windows, open Windows Defender Firewall from the Start menu. Select
Advanced settings in the side menu, then select Inbound Rules. To open a
port, select New Rule. Add your port and set the protocol to TCP.

Common issues with metric collection and possible solutions
1. SMON metrics

Refer to the SMON section in the prerequisite

2. Batch job metrics

If you notice an empty visualization in the workbook tab "Application
Performance - Batch Jobs (SM37)", apply SAP Note 2469926 in your SAP system.
After you apply this OSS note, execute the RFC function module
BAPI_XMI_LOGON_WS with the following parameters:

INTERFACE = XBP, VERSION = 3.0, EXTCOMPANY = TESTC, EXTPRODUCT = TESTP

This function module has the same parameters as BAPI_XMI_LOGON but stores
them in the table BTCOPTIONS.

3. SWNC metrics

To ensure successful retrieval of the SWNC metrics, confirm that both the SAP
system and the operating system (OS) have synchronized times.

Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure SAP HANA provider for Azure
Monitor for SAP solutions
Article • 07/25/2023

In this how-to guide, you learn how to configure an SAP HANA provider for Azure
Monitor for SAP solutions through the Azure portal.

Prerequisite to enable secure communication


To enable TLS 1.2 or higher for the SAP HANA provider, follow the steps in this SAP
document .

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.

Configure SAP HANA provider


1. Sign in to the Azure portal .
2. Search for and select Azure Monitor for SAP solutions in the search bar.
3. On the Azure Monitor for SAP solutions service page, select Create.
4. On the Azure Monitor for SAP solutions creation page, enter your basic resource
information on the Basics tab.
5. On the Providers tab:

a. Select Add provider.

b. On the creation pane, for Type, select SAP HANA.


c. Optionally, select Enable secure communication and choose the certificate type
from the dropdown menu.

d. For IP address, enter the IP address or hostname of the server that runs the SAP
HANA instance that you want to monitor. If you're using a hostname, make sure
there's connectivity within the virtual network.

e. For Database tenant, enter the HANA database that you want to connect to. We
recommend that you use SYSTEMDB because tenant databases don't have all
monitoring views.

f. For Instance number, enter the instance number of the database (0-99). The
SQL port is automatically determined based on the instance number.

g. For Database username, enter the dedicated SAP HANA database user. This
user needs the MONITORING or BACKUP CATALOG READ role assignment. For
nonproduction SAP HANA instances, use SYSTEM instead.

h. For Database password, enter the password for the database username. You
can either enter the password directly or use a secret inside Azure Key Vault.
6. Save your changes to the Azure Monitor for SAP solutions resource.

7 Note

Azure Monitor for SAP solutions supports HANA 2.0 SP6 and later versions. Legacy
HANA 1.0 is not supported.
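
To optionally verify the database user and connectivity before creating the provider, you can run a quick check with the hdbsql client from a VM in the same virtual network, if the client is installed. In this hedged sketch, 3<NN>13 is the SYSTEMDB SQL port for instance number <NN> and all values are placeholders:

Bash

# Optional pre-check (placeholder values): confirm that the monitoring user
# can sign in and query a monitoring view on SYSTEMDB.
hdbsql -n <hana-host>:3<NN>13 -d SYSTEMDB -u <monitoring-user> -p '<password>' \
  "SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES"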
Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure SQL Server for Azure Monitor
for SAP solutions
Article • 06/20/2023

In this how-to guide, you learn how to configure a SQL Server provider for Azure
Monitor for SAP solutions through the Azure portal.

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.

Open a Windows port


Open the Windows port in the local firewall of SQL Server and the network security
group where SQL Server and Azure Monitor for SAP solutions exist. The default port is
1433.

Configure SQL Server


Configure SQL Server to accept sign-ins from Windows and SQL Server:

1. Open SQL Server Management Studio.


2. Open Server Properties > Security > Authentication.
3. Select SQL Server and Windows authentication mode.
4. Select OK to save your changes.
5. Restart SQL Server to complete the changes.

Create an Azure Monitor for SAP solutions user for SQL Server
Create a user for Azure Monitor for SAP solutions to sign in to SQL Server by using the
following script. Make sure to replace:

<Database to monitor> with your SAP database's name.

<password> with the password for your user.


You can replace the example information for the Azure Monitor for SAP solutions user
with any other SQL username.

SQL

USE [<Database to monitor>]


DROP USER [AMS]
GO
USE [master]
DROP USER [AMS]
DROP LOGIN [AMS]
GO
CREATE LOGIN [AMS] WITH
PASSWORD=N'<password>',
DEFAULT_DATABASE=[<Database to monitor>],
DEFAULT_LANGUAGE=[us_english],
CHECK_EXPIRATION=OFF,
CHECK_POLICY=OFF
CREATE USER AMS FOR LOGIN AMS
ALTER ROLE [db_datareader] ADD MEMBER [AMS]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS]
GRANT CONNECT TO AMS
GRANT VIEW SERVER STATE TO AMS
GRANT VIEW ANY DEFINITION TO AMS
GRANT EXEC ON xp_readerrorlog TO AMS
GO
USE [<Database to monitor>]
CREATE USER [AMS] FOR LOGIN [AMS]
ALTER ROLE [db_datareader] ADD MEMBER [AMS]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS]
GO
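
To optionally confirm the new login before you add the provider, you can test a connection from any machine that can reach the SQL Server port, for example with the sqlcmd client if it's available. Placeholder values are shown:

Bash

# Optional connectivity check: a successful sign-in confirms the AMS login
# works and port 1433 is reachable (placeholder values).
sqlcmd -S "<sql-server-ip>,1433" -U AMS -P "<password>" \
  -d "<Database to monitor>" -Q "SELECT @@VERSION;"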

Prerequisites to enable secure communication


To enable TLS 1.2 or higher, follow the steps in this article.

Install an Azure Monitor for SAP solutions provider
To install the provider from Azure Monitor for SAP solutions:

1. Open the Azure Monitor for SAP solutions resource in the Azure portal.
2. On the resource menu, under Settings, select Providers.
3. On the provider page, select Add to add a new provider.
4. On the Add provider page, enter all required information:
a. For Type, select Microsoft SQL Server.
b. For Name, enter a name for the provider.
c. (Optional) Select Enable secure communication and choose a certificate type
from the dropdown list.
d. For Host name, enter the IP address or hostname of the SQL Server instance.
e. For Port, enter the port on which SQL Server is listening. The default is 1433.
f. For SQL username, enter a username for the SQL Server account.
g. For Password, enter a password for the account.
h. For SID, enter the SAP system identifier.
i. Select Create to create the provider.
5. Repeat the previous step as needed to create more providers.
6. Select Review + create to complete the deployment.

Next steps
Learn about Azure Monitor for SAP solutions provider types
Create high-availability cluster provider
for Azure Monitor for SAP solutions
Article • 03/06/2024

In this how-to guide, you learn how to create a high-availability (HA) Pacemaker cluster
provider for Azure Monitor for SAP solutions. You install the HA agent and then create
the provider for Azure Monitor for SAP solutions.

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.

Install an HA agent
Before you add providers for HA (Pacemaker) clusters, install the appropriate agent for
RHEL/SUSE in your environment on each cluster node.

For SUSE-based clusters, install the HA cluster exporter on each node. For more
information, see the HA cluster exporter installation guide . Supported SUSE versions
include SLES for SAP 12 SP3 and later versions.

For SUSE-based Pacemaker clusters, follow these steps to install the exporter on each
cluster node:

Install an HA cluster exporter on SUSE


1. Install the required packages for Prometheus cluster exporter on the system.

Bash

sudo zypper install prometheus-ha_cluster_exporter

2. Enable and start the Prometheus cluster exporter as a service.

Bash
sudo systemctl start prometheus-ha_cluster_exporter

Bash

sudo systemctl enable prometheus-ha_cluster_exporter

3. Data is then collected in the system by ha_cluster_exporter. You can export the
data via the URL http://<ip address of the server>:9664/metrics . To check whether
the metrics are fetched via this URL on the server where ha_cluster_exporter is
installed, run the following command on the server.

Bash

curl http://localhost:9664/metrics

For RHEL-based clusters, install Performance Co-Pilot (PCP) and the pcp-pmda-
hacluster subpackage on each node. For more information, see the PCP HACLUSTER
agent installation guide . Supported RHEL versions include 8.2, 8.4, and later versions.

For RHEL-based Pacemaker clusters, follow these steps to install the agent on each
cluster node:

Install an HA cluster exporter on RHEL


1. Install the required packages for PCP on the system.

Bash

sudo yum install pcp pcp-pmda-hacluster

2. Enable and start the required PCP Collector Services.

Bash

sudo systemctl start pmcd

Bash

sudo systemctl enable pmcd


3. Install and enable the HA cluster PMDA. Replace $PCP_PMDAS_DIR with the path
where hacluster is installed. Use the find command in Linux to locate the
hacluster directory; it's usually /var/lib/pcp/pmdas. For example: cd
/var/lib/pcp/pmdas/hacluster

Bash

cd $PCP_PMDAS_DIR/hacluster

Bash

sudo ./Install

4. Enable and start the pmproxy service.

Bash

sudo systemctl start pmproxy

Bash

sudo systemctl enable pmproxy

5. Data is then collected in the system by PCP. You can export the data by using
pmproxy via the URL http://<ip address of the server>:44322/metrics?names=ha_cluster .
To check whether the metrics are fetched via this URL on the server where
hacluster is installed, run the following command on the server.

Bash

curl http://localhost:44322/metrics?names=ha_cluster

Prerequisites to enable secure communication


To enable TLS 1.2 or higher, follow the steps in this article .

Create a provider for Azure Monitor for SAP solutions
1. Sign in to the Azure portal .
2. Go to the Azure Monitor for SAP solutions service.

3. Open your Azure Monitor for SAP solutions resource.

4. On the resource's menu, under Settings, select Providers.

5. Select Add to add a new provider.

6. For Type, select High-availability cluster (Pacemaker).

7. (Optional) Select Enable secure communication and choose a certificate type.

8. Configure providers for each node of the cluster by entering the endpoint URL for
HA Cluster Exporter Endpoint.

a. For SUSE-based clusters, enter http://<IP-address>:9664/metrics .


b. For RHEL-based clusters, enter http://<IP-address>:44322/metrics?names=ha_cluster .

9. Enter the SID (SAP system ID) and the Hostname (the SAP hostname of the virtual
machine; the command hostname -s on SUSE and RHEL servers returns the
hostname). For Cluster, provide any custom name that makes the SAP system
cluster easy to identify. This name is visible in the workbook for metrics and
doesn't have to be the cluster name configured on the server.

10. Select Start test under "Prerequisite check (Preview) - highly recommended".
This test validates connectivity from the AMS subnet to the SAP source
system and lists any errors found. Address these errors before creating the
provider; otherwise, the provider creation fails with an error.

11. Select Create to finish creating the provider.

12. Create a provider for each of the servers in the cluster to see the metrics in the
workbook. For example, if the cluster has three servers configured, create three
providers, one for each server, by following all of the preceding steps.
Troubleshooting
Use the following troubleshooting steps for common errors.

Unable to reach the Prometheus endpoint


When the provider settings validation operation fails with the code
PrometheusURLConnectionFailure :

1. Restart the HA cluster exporter agent (pmproxy applies to RHEL-based clusters; on
SUSE-based clusters, restart the prometheus-ha_cluster_exporter service instead).

Bash

sudo systemctl start pmproxy

2. Reenable the HA cluster exporter agent.

Bash

sudo systemctl enable pmproxy

3. Verify that the Prometheus endpoint is reachable from the subnet that you
provided when you created the Azure Monitor for SAP solutions resource.

Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure Linux provider for Azure
Monitor for SAP solutions
Article • 12/20/2023

In this how-to guide, you learn how to create a Linux OS provider for Azure Monitor for
SAP solutions resources.

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
Install the latest version of node exporter on each SAP host that you want to
monitor, either BareMetal or an Azure virtual machine (VM). For more information,
see the node exporter GitHub repository .
Node exporter uses the default port 9100 to expose the metrics. If you want to use
a custom port, make sure to open the port in the firewall and use the same port
while creating the provider. The default port 9100, or the custom port configured
for node exporter, must be open and listening on the Linux host.

To install the node exporter on Linux:

Right-click the relevant node exporter version for Linux at
https://prometheus.io/download/#node_exporter and copy the link address, which is
used in the following command. For example:
https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporte
r-1.6.1.linux-amd64.tar.gz

1. Change to the directory where you want to install the node exporter.

2. Run wget
https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_exporter-<xxx>.linux-amd64.tar.gz .
Replace <xxx> with the version number.

3. Run tar xvfz node_exporter-<xxx>.linux-amd64.tar.gz

4. Run cd node_exporter-<xxx>linux-amd64

5. Run ./node_exporter to start it in the foreground, or:
6. Run ./node_exporter --web.listen-address=":9100" & to start it in the
background on the default port.

7. The node exporter now starts collecting data. You can export the data at
http://<ip>:9100/metrics .
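
To quickly confirm that node exporter is serving metrics locally, you can run:

Bash

# Print the first few exposed metrics; any output confirms the endpoint is up.
curl -s http://localhost:9100/metrics | head -n 5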

Script to set up the node exporter


shell

# To get the latest node exporter version, see:
# https://prometheus.io/download/#node_exporter
# Right-click the Linux node exporter version and copy the link address,
# which is used in the command below. For example:
# https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
# Change to the directory where you want to install the node exporter.

wget https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_exporter-<xxx>.linux-amd64.tar.gz
tar xvfz node_exporter-<xxx>.linux-amd64.tar.gz
cd node_exporter-<xxx>linux-amd64
nohup ./node_exporter --web.listen-address=":9100" &

Set up a systemctl service to start node exporter on a virtual machine restart
If the target VM is restarted or stopped, the node exporter service also stops. It must
then be started again manually to continue monitoring. To avoid this, run the following
commands to enable node exporter to run as a service.

7 Note

Replace this xxxx with the version of node exporter. For example, 1.6.1 .

shell

# Change to the directory where the node exporter bits are downloaded and
# copy the node_exporter folder to /usr/bin
sudo mv node_exporter-<xxxx>.linux-amd64 /usr/bin
# Create a node_exporter as a service file under etc/systemd/system
sudo tee /etc/systemd/system/node_exporter.service<<EOF
[Unit]
Description=Node Exporter
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/usr/bin/node_exporter-<xxxx>.linux-amd64/node_exporter $ARGS
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
EOF
# Reload the system daemon and start the node exporter service.

sudo systemctl daemon-reload


sudo systemctl start node_exporter
sudo systemctl enable node_exporter

# Check that the node exporter service is in the active (running) state.
sudo systemctl status node_exporter

# To test that node exporter runs as a service:
# NOTE - A restart causes downtime for the business application running on the VM.
# Restart the virtual machine, sign back in to the VM, and check that the
# node exporter status is active (running).
sudo systemctl status node_exporter

Prerequisites to enable secure communication


To enable TLS 1.2 or higher, follow the steps in this article .

Create Linux OS provider


1. Sign in to the Azure portal .
2. Go to the Azure Monitor for SAP solutions.
3. Select Create to make a new Azure Monitor for SAP solutions resource.
4. Select Add provider.
5. Configure the following settings for the new provider:
a. For Type, select OS (Linux).
b. For Name, enter a unique name of the provider.
c. (Optional) Select Enable secure communication, choose a certificate type.
d. For Node Exporter Endpoint, enter http://IP:9100/metrics if default port 9100
is used. If a custom port is used, enter http://IP:PORT/metrics . Replace IP with
the IP address of the Linux host and PORT with the custom port number.
e. For the IP address, use the private IP address of the Linux host. Make sure the
host and Azure Monitor for SAP solutions resource are in the same virtual
network.
6. Open firewall port 9100 on the Linux host.
a. If you're using firewall-cmd, run firewall-cmd --permanent --add-port=9100/tcp
and then run firewall-cmd --reload .

b. If you're using ufw, run ufw allow 9100/tcp and then run ufw reload .

7. If the Linux host is an Azure VM, make sure that all applicable network security
groups allow inbound traffic at port 9100 from VirtualNetwork as the source.
8. Select Add provider to save your changes.
9. Continue to add more providers as needed.
10. Select Review + create to review the settings.
11. Select Create to finish creating the resource.

Troubleshooting
Use these steps to resolve common errors.

Unable to reach the Prometheus endpoint


When the provider settings validation operation fails with the code
PrometheusURLConnectionFailure :

1. Check that the default port 9100, or the custom port configured for node exporter,
is open and listening on the Linux host.
2. Try to restart the node exporter agent:
a. Go to the folder where you installed the node exporter (the folder name
resembles node_exporter-<xxxx>-amd64 ).

b. Run nohup ./node_exporter & to start node_exporter. Adding nohup and &
to the command decouples node_exporter from the Linux machine command
line; without them, node_exporter stops when the command line is closed.
3. Verify that the Prometheus endpoint is reachable from the subnet that you
provided when you created the Azure Monitor for SAP solutions resource.


Next steps
Learn about Azure Monitor for SAP solutions provider types
Create IBM Db2 provider for Azure
Monitor for SAP solutions
Article • 06/20/2023

In this how-to guide, you learn how to create an IBM Db2 provider for Azure Monitor for
SAP solutions through the Azure portal.

Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.

Create a user for the Db2 server


First, create a new user for your Db2 server for use by Azure Monitor for SAP solutions.
Then run the following script to provide the new Db2 user with appropriate permissions.
Make sure to replace <username> with the Db2 username.

SQL

GRANT SECADM ON DATABASE TO USER <username>;


GRANT DATAACCESS ON DATABASE TO USER <username>;
GRANT ROLE SAPAPP TO USER <username>;

Next, if you don't have an SAPAPP role in your Db2 server, use the following query to
create the role.

SQL

CREATE ROLE SAPMON;


CREATE ROLE SAPAPP;
CREATE ROLE SAPTOOLS;
GRANT ROLE SAPMON TO ROLE SAPAPP;
GRANT ROLE SAPMON TO ROLE SAPTOOLS;
GRANT CONNECT ON DATABASE TO ROLE SAPMON;
GRANT SQLADM ON DATABASE TO ROLE SAPMON;
GRANT EXPLAIN ON DATABASE TO ROLE SAPMON;
GRANT BINDADD ON DATABASE TO ROLE SAPMON;
GRANT CREATETAB ON DATABASE TO ROLE SAPMON;
GRANT IMPLICIT_SCHEMA ON DATABASE TO ROLE SAPMON;
GRANT CREATE_EXTERNAL_ROUTINE ON DATABASE TO ROLE SAPMON;
GRANT LOAD ON DATABASE TO ROLE SAPAPP;
GRANT DBADM ON DATABASE TO ROLE SAPTOOLS;
GRANT WLMADM ON DATABASE TO ROLE SAPTOOLS;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.DB_GET_CFG TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_FORMAT_LOCK_NAME TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.MON_FORMAT_XML_COMPONENT_TIMES_BY_ROW TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_FORMAT_XML_METRICS_BY_ROW TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_FORMAT_XML_TIMES_BY_ROW TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_FORMAT_XML_WAIT_TIMES_BY_ROW
TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_ACTIVITY_DETAILS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_APPLICATION_HANDLE TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_APPLICATION_ID TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_APPL_LOCKWAIT TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_BUFFERPOOL TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_CONNECTION TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_CONNECTION_DETAILS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_CONTAINER TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_EXTENT_MOVEMENT_STATUS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_FCM TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_FCM_CONNECTION_LIST TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_INDEX TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_LOCKS TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_PKG_CACHE_STMT TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_PKG_CACHE_STMT_DETAILS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_SERVICE_SUBCLASS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_SERVICE_SUBCLASS_DETAILS
TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_TABLE TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_TABLESPACE TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_UNIT_OF_WORK TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_UNIT_OF_WORK_DETAILS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_WORKLOAD TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.MON_GET_WORKLOAD_DETAILS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_ACTIVITY_DETAILS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_CONN_ENV TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_QUEUE_STATS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_SERVICE_CLASS_AGENTS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_SERVICE_CLASS_AGENTS_V97
TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_SERVICE_SUBCLASS_STATS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.WLM_GET_SERVICE_SUBCLASS_STATS_V97 TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_SERVICE_SUPERCLASS_STATS
TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION
SYSPROC.WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_WORKLOAD_STATS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_WORKLOAD_STATS_V97 TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC FUNCTION SYSPROC.WLM_GET_WORK_ACTION_SET_STATS TO
ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_CANCEL_ACTIVITY TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_CAPTURE_ACTIVITY_IN_PROGRESS
TO ROLE SAPMON;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_COLLECT_STATS TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_COLLECT_STATS_WAIT TO ROLE
SAPMON;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_SET_CONN_ENV TO ROLE SAPMON;
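
To optionally confirm the grants before you add the provider, you can sign in as the new user from the Db2 command-line processor, if it's available on the host. Names here are placeholders:

Bash

# Optional check: connect as the monitoring user and query an admin view.
db2 connect to <DATABASE> user <username> using '<password>'
db2 "SELECT SERVICE_LEVEL FROM SYSIBMADM.ENV_INST_INFO"
db2 terminate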

Prerequisites to enable secure communication


To enable TLS 1.2 or higher, follow the steps in this document .

Create an IBM Db2 provider


To create the IBM Db2 provider for Azure Monitor for SAP solutions:

1. Sign in to the Azure portal .


2. Go to the Azure Monitor for SAP solutions service.
3. Open the Azure Monitor for SAP solutions resource you want to modify.
4. On the resource's menu, under Settings, select Providers.
5. Select Add to add a new provider.
a. For Type, select IBM Db2.
b. (Optional) Select Enable secure communication and choose a certificate type
from the dropdown list.
c. Enter the IP address or hostname of the Db2 server.
d. Enter the database name.
e. Enter the database port.
f. Save your changes.
6. Configure more providers for each instance of the database.

Next steps
Learn about Azure Monitor for SAP solutions provider types
Data reference for Azure Monitor for
SAP solutions
Article • 05/15/2023

This article provides a reference for the log data that Azure Monitor for SAP solutions
collects to analyze the performance and availability of your SAP systems. See Monitor
SAP on Azure for details on collecting and analyzing monitoring data for SAP on Azure.

Metrics
Azure Monitor for SAP solutions doesn't support metrics.

Azure Monitor logs tables


This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure
Monitor for SAP solutions and available for query by Log Analytics. Azure Monitor for
SAP solutions uses custom logs. The schemas for some tables are defined by third-party
providers, such as SAP. Here are the current custom logs for Azure Monitor for SAP
solutions with links to sources for more information.
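
You can query any of these tables in the Log Analytics workspace for your Azure Monitor for SAP solutions resource. As a hedged example, the following Azure CLI call (which requires the log-analytics extension; the workspace GUID is a placeholder) retrieves a few rows from one of the tables:

Bash

# Retrieve a sample of rows from a custom log table (placeholder workspace GUID).
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "SapHana_SystemAvailability_CL | take 10" \
  --output table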

SapHana_HostConfig_CL
For more information, see M_LANDSCAPE_HOST_CONFIGURATION System View in
the SAP HANA SQL and System Views Reference.

SapHana_HostInformation_CL
For more information, see M_HOST_INFORMATION System View in the SAP HANA
SQL and System Views Reference.

SapHana_SystemOverview_CL
For more information, see M_SYSTEM_OVERVIEW System View in the SAP HANA SQL
and System Views Reference.

SapHana_LoadHistory_CL
For more information, see M_LOAD_HISTORY_HOST System View in the SAP HANA
SQL and System Views Reference.

SapHana_Disks_CL
For more information, see M_DISKS System View in the SAP HANA SQL and System
Views Reference.

SapHana_SystemAvailability_CL
For more information, see M_SYSTEM_AVAILABILITY System View in the SAP HANA
SQL and System Views Reference.

SapHana_BackupCatalog_CL
For more information, see:

M_BACKUP_CATALOG_FILES System View


M_BACKUP_CATALOG System View

SapHana_SystemReplication_CL
For more information, see M_SERVICE_REPLICATION System View in the SAP HANA
SQL and System Views Reference.

Prometheus_OSExporter_CL
For more information, see prometheus / node_exporter on GitHub .

Prometheus_HaClusterExporter_CL
For more information, see ClusterLabs/ha_cluster_exporter .

MSSQL_DBConnections_CL
For more information, see:

sys.dm_exec_sessions (Transact-SQL)
sys.databases (Transact-SQL)
MSSQL_SystemProps_CL
For more information, see:

sys.dm_os_windows_info (Transact-SQL)
sys.database_files (Transact-SQL)
sys.dm_exec_sql_text (Transact-SQL)
sys.dm_exec_query_stats (Transact-SQL)
sys.dm_io_virtual_file_stats (Transact-SQL)
sys.dm_db_partition_stats (Transact-SQL)
sys.dm_os_performance_counters (Transact-SQL)
sys.dm_os_wait_stats (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)
SQL Server Operating System Related Dynamic Management Views (Transact-SQL)
sys.availability_groups (Transact-SQL)
sys.dm_exec_requests (Transact-SQL)
sys.dm_xe_session_targets (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)
backupset (Transact-SQL)
sys.sysprocesses (Transact-SQL)

MSSQL_FileOverview_CL
For more information, see sys.database_files (Transact-SQL).

MSSQL_MemoryOverview_CL
For more information, see sys.dm_os_memory_clerks (Transact-SQL).

MSSQL_Top10Statements_CL
For more information, see:

sys.dm_exec_sql_text (Transact-SQL)
sys.dm_exec_query_stats (Transact-SQL)

MSSQL_IOPerformance_CL
For more information, see sys.dm_io_virtual_file_stats (Transact-SQL).

MSSQL_TableSizes_CL
For more information, see sys.dm_db_partition_stats (Transact-SQL).

MSSQL_BatchRequests_CL
For more information, see sys.dm_os_performance_counters (Transact-SQL).

MSSQL_WaitPercs_CL
For more information, see sys.dm_os_wait_stats (Transact-SQL).

MSSQL_PageLifeExpectancy2_CL
For more information, see sys.dm_os_performance_counters (Transact-SQL).

MSSQL_Error_CL
For more information, see sys.fn_xe_file_target_read_file (Transact-SQL).

MSSQL_CPUUsage_CL
For more information, see SQL Server Operating System Related Dynamic Management
Views (Transact-SQL).

MSSQL_AOOverview_CL
For more information, see sys.availability_groups (Transact-SQL).

MSSQL_AOWaiter_CL
For more information, see sys.dm_exec_requests (Transact-SQL).

MSSQL_AOWaitstats_CL
For more information, see sys.dm_os_wait_stats (Transact-SQL).

MSSQL_AOFailovers_CL
For more information, see:

sys.dm_xe_session_targets (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)

MSSQL_BckBackups2_CL
For more information, see: backupset (Transact-SQL).

MSSQL_BlockingProcesses_CL
For more information, see sys.sysprocesses (Transact-SQL).

Next steps
For more information on using Azure Monitor for SAP solutions, see Monitor SAP
on Azure.
For more information on Azure Monitor, see Monitoring Azure resources with
Azure Monitor.
What is SAP HANA on Azure (Large
Instances)?
Article • 02/10/2023

7 Note

HANA Large Instance service is in sunset mode and does not accept new customers
anymore. Providing units for existing HANA Large Instance customers is still
possible. For alternatives, please check the offers of HANA certified Azure VMs in
the HANA Hardware Directory .

SAP HANA on Azure (Large Instances) is a unique solution to Azure. In addition to
providing virtual machines for deploying and running SAP HANA, Azure offers you the
possibility to run and deploy SAP HANA on bare-metal servers that are dedicated to
you. The SAP HANA on Azure (Large Instances) solution builds on non-shared
host/server bare-metal hardware that is assigned to you. The server hardware is
embedded in larger stamps that contain compute/server, networking, and storage
infrastructure. SAP HANA on Azure (Large Instances) offers different server SKUs or
sizes. Units start at 36 Intel CPU cores and 768 GB of memory and go up to units that
have up to 480 Intel CPU cores and up to 24 TB of memory.

The customer isolation within the infrastructure stamp is performed in tenants, as
follows:

Networking: Isolation of customers within the infrastructure stack through virtual
networks per customer-assigned tenant. A tenant is assigned to a single customer.
A customer can have multiple tenants. The network isolation of tenants prohibits
network communication between tenants in the infrastructure stamp level, even if
the tenants belong to the same customer.
Storage components: Isolation through storage virtual machines that have storage
volumes assigned to them. Storage volumes can be assigned to one storage virtual
machine only. A storage virtual machine is assigned exclusively to one single
tenant in the infrastructure stack. As a result, storage volumes assigned to a
storage virtual machine can be accessed in one specific and related tenant only.
They aren't visible between the different deployed tenants.
Server or host: A server or host unit isn't shared between customers or tenants. A
server or host deployed to a customer, is an atomic bare-metal compute unit that
is assigned to one single tenant. No hardware partitioning or soft partitioning is
used that might result in you sharing a host or a server with another customer.
Storage volumes that are assigned to the storage virtual machine of the specific
tenant are mounted to such a server. A tenant can have one to many server units
of different SKUs exclusively assigned.
Within an SAP HANA on Azure (Large Instances) infrastructure stamp, many
different tenants are deployed and isolated against each other through the tenant
concepts on networking, storage, and compute level.

These bare-metal server units are supported to run SAP HANA only. The SAP application
layer or workload middle-ware layer runs in virtual machines. The infrastructure stamps
that run the SAP HANA on Azure (Large Instances) units are connected to the Azure
network services backbones. In this way, low-latency connectivity between SAP HANA
on Azure (Large Instances) units and virtual machines is provided.

As of January 2021, we differentiate between two revisions of HANA Large
Instance stamps and locations of deployments:

"Revision 3" (Rev 3): Are the stamps that were made available for customer to
deploy before July 2019
"Revision 4" (Rev 4): New stamp design that is deployed in close proximity to Azure
VM hosts and which so far are released in the Azure regions of:
West US2
East US
East US2 (across two Availability Zones)
South Central US (across two Availability Zones)
West Europe
North Europe

This document is one of several documents that cover SAP HANA on Azure (Large
Instances). This document introduces the basic architecture, responsibilities, and services
provided by the solution. High-level capabilities of the solution are also discussed. For
most other areas, such as networking and connectivity, four other documents cover
details and drill-down information. The documentation of SAP HANA on Azure (Large
Instances) doesn't cover aspects of the SAP NetWeaver installation or deployments of
SAP NetWeaver in VMs. SAP NetWeaver on Azure is covered in separate documents
found in the same Azure documentation container.

The different documents of HANA Large Instance guidance cover the following areas:

SAP HANA (Large Instances) overview and architecture on Azure
SAP HANA (Large Instances) infrastructure and connectivity on Azure
Install and configure SAP HANA (Large Instances) on Azure
SAP HANA (Large Instances) high availability and disaster recovery on Azure
SAP HANA (Large Instances) troubleshooting and monitoring on Azure
High availability set up in SUSE by using a fencing device
OS Backup
Save on SAP HANA Large Instances with an Azure reservation

Next steps

Refer to Know the terms.
Know the terms
Article • 02/10/2023

Several common definitions are widely used in the Architecture and Technical
Deployment Guide. Note the following terms and their meanings:

IaaS: Infrastructure as a service.

PaaS: Platform as a service.

SaaS: Software as a service.

SAP component: An individual SAP application, such as ERP Central Component
(ECC), Business Warehouse (BW), Solution Manager, or Enterprise Portal (EP). SAP
components can be based on traditional ABAP or Java technologies or a non-
NetWeaver based application such as Business Objects.

SAP environment: One or more SAP components logically grouped to perform a
business function, such as development, quality assurance, training, disaster
recovery, or production.

SAP landscape: Refers to all the SAP assets in your IT landscape. The SAP
landscape includes all production and non-production environments.

SAP system: The combination of DBMS layer and application layer of, for example,
an SAP ERP development system, an SAP BW test system, and an SAP CRM
production system. Azure deployments don't support dividing these two layers
between on-premises and Azure. An SAP system is either deployed on-premises or
it's deployed in Azure. You can deploy the different systems of an SAP landscape
into either Azure or on-premises. For example, you can deploy the SAP CRM
development and test systems in Azure while you deploy the SAP CRM production
system on-premises. For SAP HANA on Azure (Large Instances), it's intended that
you host the SAP application layer of SAP systems in VMs and the related SAP
HANA instance on a unit in the SAP HANA on Azure (Large Instances) stamp.

Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI-
certified and dedicated to run SAP HANA instances within Azure.

SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run
HANA instances on SAP HANA TDI-certified hardware that's deployed in Large
Instance stamps in different Azure regions. The related term HANA Large Instance
is short for SAP HANA on Azure (Large Instances) and is widely used in this
technical deployment guide.
Cross-premises: Describes a scenario where VMs are deployed to an Azure
subscription that has site-to-site, multi-site, or Azure ExpressRoute connectivity
between on-premises data centers and Azure. In common Azure documentation,
these kinds of deployments are also described as cross-premises scenarios. The
reason for the connection is to extend on-premises domains, on-premises
Active Directory/OpenLDAP, and on-premises DNS into Azure. The on-premises
landscape is extended to the Azure assets of the Azure subscriptions. With this
extension, the VMs can be part of the on-premises domain.

Domain users of the on-premises domain can access the servers and run services
on those VMs (such as DBMS services). Communication and name resolution
between VMs deployed on-premises and Azure-deployed VMs is possible. This
scenario is typical of the way in which most SAP assets are deployed. For more
information, see Azure VPN Gateway and Create a virtual network with a site-to-
site connection by using the Azure portal.

Tenant: A customer deployed in HANA Large Instance stamp gets isolated into a
tenant. A tenant is isolated in the networking, storage, and compute layer from
other tenants. Storage and compute units assigned to the different tenants can't
see each other or communicate with each other on the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants.
Even then, there is no communication between tenants on the HANA Large
Instance stamp level.

SKU category: For HANA Large Instance, the following two categories of SKUs are
offered:
Type I class: S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, and
S224m
Type II class: S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm,
and S960m

Stamp: Defines the Microsoft internal deployment size of HANA Large Instances.
Before HANA Large Instance units can be deployed, a HANA Large Instance stamp
consisting of compute, network, and storage racks needs to be deployed in a
datacenter location. Such a deployment is called a HANA Large Instance stamp.
From Revision 4 (see below) on, we use the alternate term Large Instance Row.

Revision: There are two different stamp revisions for HANA Large Instance stamps.
These differ in architecture and proximity to Azure virtual machine hosts.
"Revision 3" (Rev 3) is the original design deployed from the middle of 2016.
"Revision 4.2" (Rev 4.2) is a new design that provides closer proximity to Azure
virtual machine hosts. Rev 4.2 offers ultra-low network latency between Azure
VMs and HANA Large Instance units. Resources in the Azure portal are referred
to as BareMetal Infrastructure. Customers can access their resources as
BareMetal instances from the Azure portal.

A variety of additional resources are available on how to deploy an SAP workload in the
cloud. If you plan to execute a deployment of SAP HANA in Azure, you need to be
experienced with and aware of the principles of Azure IaaS and the deployment of SAP
workloads on Azure IaaS. Before you continue, see Use SAP solutions on Azure virtual
machines for more information.

Next steps
Refer to HLI Certification.
Certification
Article • 02/10/2023

Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to
support SAP HANA on certain infrastructures, such as Azure IaaS and BareMetal
Infrastructure.

The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note
#1928533 – SAP applications on Azure: Supported products and Azure VM types .

The certification records for SAP HANA on Azure Large Instances can be found in the
SAP HANA certified IaaS Platforms site.

The SAP HANA on Azure (Large Instances) types, referred to in the SAP HANA certified
IaaS Platforms site, provide Microsoft and SAP customers the ability to deploy:

Large SAP Business Suite
SAP BW
S/4 HANA
BW/4HANA
Other SAP HANA workloads in Azure

The solution is based on the SAP-HANA certified dedicated hardware stamp (SAP HANA
tailored data center integration – TDI ). If you run an SAP HANA TDI-configured
solution, all the above SAP HANA-based applications work on the hardware
infrastructure.

Compared to running SAP HANA in VMs, this solution offers the benefit of much larger
memory volumes.

Key concepts
To enable this solution, you need to understand the following key aspects:

The SAP application layer and non-SAP applications run in VMs that are hosted in
the usual Azure hardware stamps.
Customer on-premises infrastructure, data centers, and application deployments
are connected to the cloud platform through ExpressRoute (recommended) or a
virtual private network (VPN). Active Directory and DNS also are extended into
Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure
(Large Instances). The Large Instance stamp is connected into Azure networking, so
software running in VMs can interact with the HANA instance running in HANA
Large Instance.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided
in an IaaS with SUSE Linux Enterprise Server or Red Hat Enterprise Linux
preinstalled. As with virtual machines, further updates and maintenance to the
operating system are your responsibility.
Installation of HANA or any other components necessary to run SAP HANA on
units of HANA Large Instance is your responsibility. All respective ongoing
operations and administration of SAP HANA on Azure are also your responsibility.
You can also install other components in your Azure subscription that connect to
SAP HANA on Azure (Large Instances). For example, components that enable
communication with the SAP HANA database, such as:
Jump servers
RDP servers
SAP HANA Studio
SAP Data Services for SAP BI scenarios
Network monitoring solutions.
As in Azure, HANA Large Instance offers support for high availability and disaster
recovery functionality.

Next steps
Learn about available SKUs for HANA Large Instances.

Available SKUs for HLI


Available SKUs for HANA Large
Instances
Article • 02/10/2023

BareMetal Infrastructure availability by region


BareMetal Infrastructure (certified for SAP HANA workloads) service based on Rev 4.2* is
available in the following regions:

West Europe
North Europe
Germany West Central with Zones support
East US with Zones support
East US 2
South Central US
West US 2 with Zones support

BareMetal Infrastructure (certified for SAP HANA workloads) service based on Rev 3* has
limited availability in the following regions:

West US
East US
Australia East
Australia Southeast
Japan East

List of available Azure Large Instances


The following is a list of available Azure Large Instances (also known as BareMetal
Infrastructure instances).

Important

The first column represents the status of HANA certification for each of the Large
Instance types in the list. It should correlate with the SAP HANA hardware
directory for the Azure SKUs that start with the letter S.
| SAP HANA certified | Model | Total Memory | Memory DRAM | Memory Optane | Storage | Availability |
|---|---|---|---|---|---|---|
| YES (OLAP, OLTP) | SAP HANA on Azure S96 – 2 x Intel® Xeon® Processor E7-8890 v4, 48 CPU cores and 96 CPU threads | 768 GB | 768 GB | --- | 3.0 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S224 – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 3.0 TB | 3.0 TB | --- | 6.3 TB | Available |
| YES (OLTP) | SAP HANA on Azure S224m – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 6.0 TB | 6.0 TB | --- | 10.5 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S224om – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 6.0 TB | 3.0 TB | 3.0 TB | 10.5 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S224oo – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 4.5 TB | 1.5 TB | 3.0 TB | 8.4 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S224ooo – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 7.5 TB | 1.5 TB | 6.0 TB | 12.7 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S224oom – 4 x Intel® Xeon® Platinum 8276 processor, 112 CPU cores and 224 CPU threads | 9.0 TB | 3.0 TB | 6.0 TB | 14.8 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S384 – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 4.0 TB | 4.0 TB | --- | 16 TB | Available |
| YES (OLTP) | SAP HANA on Azure S384m – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 6.0 TB | 6.0 TB | --- | 18 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S384xm – 8 x Intel® Xeon® Processor E7-8890 v4, 192 CPU cores and 384 CPU threads | 8.0 TB | 8.0 TB | --- | 22 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S448 – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 6.0 TB | 6.0 TB | --- | 10.5 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S448m – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 12.0 TB | 12.0 TB | --- | 18.9 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S448oo – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 9.0 TB | 3.0 TB | 6.0 TB | 14.8 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S448om – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 12.0 TB | 6.0 TB | 6.0 TB | 18.9 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S448ooo – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 15.0 TB | 3.0 TB | 12.0 TB | 23.2 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S448oom – 8 x Intel® Xeon® Platinum 8276 processor, 224 CPU cores and 448 CPU threads | 18.0 TB | 6.0 TB | 12.0 TB | 27.4 TB | Available (Rev 4.2 only) |
| YES (OLTP) | SAP HANA on Azure S576m – 12 x Intel® Xeon® Processor E7-8890 v4, 288 CPU cores and 576 CPU threads | 12.0 TB | 12.0 TB | --- | 28 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S576xm – 12 x Intel® Xeon® Processor E7-8890 v4, 288 CPU cores and 576 CPU threads | 18.0 TB | 18.0 TB | --- | 41 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S672 – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 9.0 TB | 9.0 TB | --- | 14.7 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S672m – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 18.0 TB | 18.0 TB | --- | 27.4 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S672oo – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 13.5 TB | 4.5 TB | 9.0 TB | 21.1 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S672om – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 18.0 TB | 9.0 TB | 9.0 TB | 27.4 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S672ooo – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 22.5 TB | 4.5 TB | 18.0 TB | 33.7 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S672oom – 12 x Intel® Xeon® Platinum 8276 processor, 336 CPU cores and 672 CPU threads | 27.0 TB | 9.0 TB | 18.0 TB | 40.0 TB | Available (Rev 4.2 only) |
| YES (OLTP) | SAP HANA on Azure S768m – 16 x Intel® Xeon® Processor E7-8890 v4, 384 CPU cores and 768 CPU threads | 16.0 TB | 16.0 TB | --- | 36 TB | Available |
| NO | SAP HANA on Azure S768xm – 16 x Intel® Xeon® Processor E7-8890 v4, 384 CPU cores and 768 CPU threads | 24.0 TB | 24.0 TB | --- | 56 TB | Available |
| YES (OLAP, OLTP) | SAP HANA on Azure S896 – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 12.0 TB | 12.0 TB | --- | 18.9 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S896m – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 24.0 TB | 24.0 TB | --- | 35.8 TB | Available |
| NO | SAP HANA on Azure S896oo – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 18.0 TB | 6.0 TB | 12.0 TB | 27.4 TB | Available (Rev 4.2 only) |
| YES (OLAP, OLTP) | SAP HANA on Azure S896om – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 24.0 TB | 12.0 TB | 12.0 TB | 35.8 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S896ooo – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 30.0 TB | 6.0 TB | 24.0 TB | 44.3 TB | Available (Rev 4.2 only) |
| NO | SAP HANA on Azure S896oom – 16 x Intel® Xeon® Platinum 8276 processor, 448 CPU cores and 896 CPU threads | 36.0 TB | 12.0 TB | 24.0 TB | 52.7 TB | Available (Rev 4.2 only) |
| YES (OLTP) | SAP HANA on Azure S960m – 20 x Intel® Xeon® Processor E7-8890 v4, 480 CPU cores and 960 CPU threads | 20.0 TB | 20.0 TB | --- | 46 TB | Available (Rev 4.2 only) |

CPU cores = sum of non-hyper-threaded CPU cores across all processors of the
server unit.
CPU threads = sum of compute threads provided by hyper-threaded CPU cores
across all processors of the server unit. Most units are configured by default
to use Hyper-Threading Technology.
Based on supplier recommendations, S768m, S768xm, and S960m aren't
configured to use Hyper-Threading for running SAP HANA.

Important

The following SKUs, though still supported, can't be purchased anymore: S72,
S72m, S144, S144m, S192, and S192m.
The specific configuration you choose depends on workload, CPU resources, and desired
memory. The OLTP workload can also use the SKUs that are optimized for the OLAP
workload.

The SKUs are divided into two different classes of hardware:

S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m, S224oo,
S224om, S224ooo, and S224oom are referred to as the "Type I class" of SKUs.
All other SKUs are referred to as the "Type II class" of SKUs.
If you're interested in SKUs that aren't yet listed in the SAP hardware directory,
contact your Microsoft account team for more information.

Tenant considerations
A complete HANA Large Instance stamp isn't exclusively allocated for a single
customer's use. This applies to the racks of compute and storage resources connected
through a network fabric deployed in Azure as well. HANA Large Instance infrastructure,
like Azure, deploys different customer "tenants" that are isolated from one another in
the following three levels:

Network: Isolation through virtual networks within the HANA Large Instance
stamp.
Storage: Isolation through storage virtual machines that have storage volumes
assigned and isolate storage volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft
partitioning of server units. No sharing of a single server or host unit between
tenants.

The deployments of HANA Large Instance units between different tenants aren't visible
to each other. HANA Large Instance units deployed in different tenants can't
communicate directly with each other on the HANA Large Instance stamp level. Only
HANA Large Instance units within one tenant can communicate with each other on the
HANA Large Instance stamp level.

A deployed tenant in the Large Instance stamp is assigned to one Azure subscription for
billing purposes. On the network side, it can be accessed from virtual networks of other
Azure subscriptions within the same Azure enrollment. If you deploy with another Azure
subscription in the same Azure region, you also can choose to ask for a separate HANA
Large Instance tenant.
SAP HANA on HANA Large Instances vs. on
VMs
There are significant differences between running SAP HANA on HANA Large Instances
and SAP HANA running on VMs deployed in Azure:

There is no virtualization layer for SAP HANA on Azure (Large Instances). You get
the performance of the underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a
specific customer. There's no possibility that a server unit or host is hard or soft
partitioned. As a result, a HANA Large Instance unit is assigned as a whole to a
tenant, and with that, to you. A reboot or shutdown of the server doesn't
automatically lead to the operating system and SAP HANA being deployed on
another server. (For Type I class SKUs, the only exception is if a server encounters
issues and redeployment needs to be performed on another server.)
Unlike Azure, where host processor types are selected for the best
price/performance ratio, the processor types chosen for SAP HANA on Azure
(Large Instances) are the highest performing of the Intel E7v3 and E7v4 processor
line.

Next steps
Learn about sizing for HANA Large Instances.

HLI Sizing
Sizing
Article • 02/10/2023

In this article, we'll look at information helpful for sizing HANA Large Instances. In
general, sizing for HANA Large Instances is no different than sizing for HANA.

Moving an existing system to SAP HANA (Large Instances)

Let's say you want to move an existing deployed system from another relational
database management system (RDBMS) to HANA. SAP provides reports to run on your
existing SAP system. If the database is moved to HANA, these reports check the data
and calculate memory requirements for the HANA instance.

For more information on how to run these reports and obtain their most recent patches
or versions, read the following SAP Notes:

SAP Note #1793345 - Sizing for SAP Suite on HANA
SAP Note #1872170 - Suite on HANA and S/4 HANA sizing report
SAP Note #2121330 - FAQ: SAP BW on HANA sizing report
SAP Note #1736976 - Sizing report for BW on HANA
SAP Note #2296290 - New sizing report for BW on HANA

Sizing greenfield implementations


When you're starting an implementation from scratch, SAP Quick Sizer will calculate
memory requirements of the implementation of SAP software on top of HANA.

Memory requirements
Memory requirements for HANA increase as data volume grows. Be aware of your
current memory consumption to help you predict what it's going to be in the future.
Based on memory requirements, you then can map your demand into one of the HANA
Large Instance SKUs.
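Because sizing decisions map directly to a SKU choice, it helps to track current consumption over time. The following is a minimal sketch using hdbsql on an existing HANA system; the instance number and user are hypothetical placeholders, and you should verify the monitoring view and column names against SAP's documentation for your HANA release.

# Minimal sketch: report per-host HANA memory use versus the allocation limit.
# Instance number and user are placeholders; verify view/column names for your release.
hdbsql -i 00 -u SYSTEM -p "$SQL_PASSWORD" \
  "SELECT HOST,
          ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE/1024/1024/1024, 1) AS USED_GB,
          ROUND(ALLOCATION_LIMIT/1024/1024/1024, 1) AS LIMIT_GB
   FROM M_HOST_RESOURCE_UTILIZATION"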

Next steps
Learn about onboarding requirements for HANA Large Instances.
Onboarding requirements
Onboarding requirements
Article • 02/10/2023

This article lists the requirements for running SAP HANA on Azure Large Instances (also
known as BareMetal Infrastructure instances).

Microsoft Azure
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier support contract. For specific information related to running SAP
in Azure, see SAP Support Note #2015553 – SAP on Microsoft Azure: Support
prerequisites . If you use HANA Large Instance units with 384 and more CPUs,
you also need to extend the Premier support contract to include Azure Rapid
Response.
Awareness of the HANA Large Instance SKUs you need after you complete a sizing
exercise with SAP.

Network connectivity
ExpressRoute between on-premises to Azure: To connect your on-premises data
center to Azure, make sure to order at least a 1-Gbps connection from your ISP.
Connectivity between HANA Large Instances and Azure uses ExpressRoute
technology as well. This ExpressRoute connection between the HANA Large
Instances and Azure is included in the price of the HANA Large Instances. The price
also includes all data ingress and egress charges for this specific ExpressRoute
circuit. So you won't have added costs beyond your ExpressRoute link between on-
premises and Azure.
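For illustration only, ordering the customer-side circuit with the Azure CLI might look like the following sketch. The names, provider, and peering location are hypothetical placeholders; the ExpressRoute circuit between HANA Large Instances and Azure is provisioned by Microsoft and isn't something you create.

# Minimal sketch (hypothetical values): order the on-premises-to-Azure circuit
# at the recommended minimum of 1 Gbps (bandwidth is specified in Mbps).
az network express-route create \
  --name sap-er-circuit \
  --resource-group sap-network-rg \
  --location westus2 \
  --bandwidth 1000 \
  --peering-location "Silicon Valley" \
  --provider "Equinix" \
  --sku-tier Standard \
  --sku-family MeteredData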

Operating system
Licenses for SUSE Linux Enterprise Server 12 and SUSE Linux Enterprise Server 15
for SAP Applications.

Note

The operating system delivered by Microsoft isn't registered with SUSE. It isn't
connected to a Subscription Management Tool instance.
SUSE Linux Subscription Management Tool deployed in Azure on a VM. This tool
provides the capability for SAP HANA on Azure (Large Instances) to be registered
and respectively updated by SUSE. (There's no internet access within the HANA
Large Instance data center.)

Licenses for Red Hat Enterprise Linux 7.9 and 8.2 for SAP HANA.

Note

The operating system delivered by Microsoft isn't registered with Red Hat. It
isn't connected to a Red Hat Subscription Manager instance.

Red Hat Subscription Manager deployed in Azure on a VM. The Red Hat
Subscription Manager provides the capability for SAP HANA on Azure (Large
Instances) to be registered and respectively updated by Red Hat. (There is no direct
internet access from within the tenant deployed on the Azure Large Instance
stamp.)
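To make the registration concrete, here's a minimal sketch for both distributions. The hostnames are hypothetical placeholders, and you should verify the exact helper script location and options in SUSE's and Red Hat's documentation; remember that the HANA Large Instance tenant has no direct internet access, so both targets are VMs you run in Azure.

# SLES: register against your SUSE Subscription Management Tool (SMT) VM.
# The clientSetup4SMT.sh helper ships with SMT; the host name is a placeholder.
./clientSetup4SMT.sh --host smt-vm.contoso.com

# RHEL: register against your Red Hat Subscription Manager VM (placeholder URL).
subscription-manager register --serverurl=https://rhsm-vm.contoso.com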

SAP requires you to have a support contract with your Linux provider as well. This
requirement isn't removed by the solution of HANA Large Instance or the fact that
you run Linux in Azure. Unlike with some of the Linux Azure gallery images, the
service fee isn't included in the solution offer of HANA Large Instance. It's your
responsibility to fulfill SAP's requirements for support contracts with the Linux
distributor.
For SUSE Linux, look up the requirements of support contracts in SAP Note
#1984787 - SUSE Linux Enterprise Server 12: Installation notes and SAP Note
#1056161 - SUSE priority support for SAP applications .
For Red Hat Linux, you need to have the correct subscription levels that include
support and service updates to the operating systems of HANA Large Instance.
Red Hat recommends the Red Hat Enterprise Linux subscription for SAP
solutions. Refer to https://access.redhat.com/solutions/3082481 .

For the support matrix of the different SAP HANA versions with the different Linux
versions, see SAP Note #2235581 .

For the compatibility matrix of the operating system and HLI firmware/driver versions,
refer to OS Upgrade for HLI.

Important

For Type II units, SLES 12 SP5, SLES 15 SP2, and SLES 15 SP3 OS versions are
supported at this point.
Database
Licenses and software installation components for SAP HANA (platform or
enterprise edition).

Applications
Licenses and software installation components for any SAP applications that
connect to SAP HANA and related SAP support contracts.
Licenses and software installation components for any non-SAP applications used
with SAP HANA on Azure (Large Instances) environments and related support
contracts.

Skills
Experience with and knowledge of Azure IaaS and its components.
Experience with and knowledge of how to deploy an SAP workload in Azure.
Personnel certified in SAP HANA installation.
SAP architect skills to design high availability and disaster recovery around SAP
HANA.

SAP
Expectation is that you're an SAP customer and have a support contract with SAP.
Especially for implementations of the Type II class of HANA Large Instance SKUs,
consult with SAP on versions of SAP HANA and the eventual configurations on
large-sized scale-up hardware.

Next steps
Learn about using SAP HANA data tiering and extension nodes.

Use SAP HANA data tiering and extension nodes


Use SAP HANA data tiering and
extension nodes
Article • 02/10/2023

SAP supports a data tiering model for SAP Business Warehouse (BW) with different SAP
NetWeaver releases and SAP BW/4HANA. For more information about the data tiering
model, see SAP BW/4HANA and SAP BW on HANA with SAP HANA extension nodes .

With HANA Large Instance, you can use the option 1 configuration of SAP HANA
extension nodes, as explained in the FAQ and SAP blog documents. Option 2
configurations can be set up with the following HANA Large Instance SKUs: S72m, S192,
S192m, S384, and S384m.

Advantages of SAP HANA extension nodes


Using SAP HANA extension nodes, either option 1 or 2, is an easy way to make better
use of SAP HANA memory. The advantages of SAP HANA extension nodes become clear
when you look at the SAP sizing guidelines. Here are a few examples:

SAP HANA sizing guidelines usually require double the amount of data volume
compared to memory. When you run your SAP HANA instance with hot data, only
50 percent or less of your memory stores data. Ideally, the remaining memory is
held for SAP HANA to do its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory running an
SAP BW database, you only have 1 TB in data volume.
If you add an SAP HANA extension node in the option 1 configuration, also an S192
HANA Large Instance SKU, it gives you another 2-TB capacity in data volume. In the option 2
configuration, you get another 4 TB for warm data volume. Compared to the hot
node, the full memory capacity of the "warm" extension node can be used for
storing data for option 1. Double the memory can be used for data volume in the
option 2 SAP HANA extension node configuration.
You end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for
option 1. You have 5 TB of data and a 1:4 ratio with the option 2 extension node
configuration.

The higher the data volume compared to memory, the greater your chances that the
warm data you're asking for is stored on disk.

Next steps
Learn about the operations model for SAP HANA on Azure (Large Instances) and your
responsibilities.

Operations model and responsibilities


Operations model and responsibilities
Article • 02/10/2023

The service provided with SAP HANA on Azure (Large Instances) is aligned with Azure
IaaS services. You get a HANA Large Instance with an installed operating system
optimized for SAP HANA. As with Azure IaaS VMs, most of the tasks of hardening the
operating system (OS), installing additional software, installing HANA, operating the OS
and HANA, and updating the OS and HANA are your responsibility. Microsoft doesn't
force OS updates or HANA updates on you.

As shown in the preceding diagram, SAP HANA on Azure (Large Instances) is a multi-
tenant IaaS offering. For the most part, the division of responsibility is at the OS-
infrastructure boundary. Microsoft is responsible for all aspects of the service below the
line of the operating system. You're responsible for all aspects of the service above the
line. The OS is your responsibility. You can continue to use most current on-premises
methods you might employ for compliance, security, application management, basis,
and OS management. The systems appear as if they're in your network.

This service is optimized for SAP HANA, so you'll need to work with Microsoft to use the
underlying infrastructure capabilities for the best results.

Your responsibilities
The following list provides more detail on each of the layers and your responsibilities:

Networking: All the internal networks for the Large Instance stamp running SAP HANA.
Your responsibility includes access to storage, connectivity between the instances (for
scale-out and other functions), connectivity to the landscape, and connectivity to Azure
where the SAP application layer is hosted in VMs. It also includes WAN connectivity
between Azure Data Centers for disaster recovery purposes and replication. All networks
are partitioned by the tenant and have quality of service applied.

Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA
servers, and for snapshots.

Servers: The dedicated physical servers to run the SAP HANA databases assigned to
tenants. The servers of the Type I class of SKUs are hardware abstracted. With these
types of servers, the server configuration is collected and maintained in profiles, which
can be moved from one physical hardware to another physical hardware. Such a
(manual) move of a profile by operations can be compared to Azure service healing. The
servers of the Type II class SKUs don't offer this capability.

SDDC: The management software used to manage data centers as software-defined
entities. It allows Microsoft to pool resources for scale, availability, and performance
reasons.

O/S: The OS you choose (SUSE Linux or Red Hat Linux) that's running on the servers. The
OS images you're supplied with were provided by the individual Linux vendor to
Microsoft for running SAP HANA. You must have a subscription with the Linux vendor
for the specific SAP HANA-optimized image. You're responsible for registering the
images with the OS vendor.

From the point of handover by Microsoft, you're responsible for any further patching of
the Linux operating system. This patching includes added packages that might be
necessary for a successful SAP HANA installation and that weren't included by the Linux
vendor in their SAP HANA optimized OS images. (For more information, see SAP's
HANA installation documentation and SAP Notes.)

You're responsible for OS patching owing to malfunction or optimization of the OS and
its drivers relative to the specific server hardware. You're also responsible for security or
functional patching of the OS.

Your responsibility includes monitoring and capacity planning of:

CPU resource consumption.
Memory consumption.
Disk volumes related to free space, IOPS, and latency.
Network volume traffic between the HANA Large Instance and the SAP application
layer.

The underlying infrastructure of the HANA Large Instance provides functionality for
backup and restore of the OS volume. Using this functionality is your responsibility.

Middleware: The SAP HANA instance, primarily. Administration, operations, and
monitoring are your responsibility. You can use storage snapshots for backup and
restore and disaster recovery. These capabilities are provided by the infrastructure.
You're responsible for designing high availability or disaster recovery with these
capabilities and for monitoring whether storage snapshots executed successfully.

Data: Your data managed by SAP HANA, and other data such as backup files located on
volumes or file shares. Your responsibilities include monitoring disk free space and
managing the content on the volumes. You're also responsible for monitoring the
successful execution of backups of disk volumes and storage snapshots. Successful
execution of data replication to disaster recovery sites is the responsibility of Microsoft.

Applications: The SAP application instances, or in the case of non-SAP applications, the
application layer of those applications. Your responsibilities include deployment,
administration, operations, and monitoring of those applications. You're responsible for
capacity planning of CPU resource consumption, memory consumption, Azure storage
consumption, and network bandwidth consumption within virtual networks. You're also
responsible for capacity planning for resource consumption from virtual networks to
SAP HANA on Azure (Large Instances).

WANs: The connections you establish from on-premises to Azure deployments for
workloads. All customers with HANA Large Instances use Azure ExpressRoute for
connectivity. This connection isn't part of the SAP HANA on Azure (Large Instances)
solution. You're responsible for the setup of this connection.

Archive: You might prefer to archive copies of data by using your own methods in
storage accounts. Archiving requires management, compliance, costs, and operations.
You're responsible for generating archive copies and backups on Azure and storing
them in a compliant way.

See the SLA for SAP HANA on Azure (Large Instances) .

Next steps
Learn about compatible operating systems for HANA Large Instances.

Compatible operating systems for HANA Large Instances


Compatible operating systems for
HANA Large Instances
Article • 02/10/2023

HANA Large Instance Type I

| Operating System | Availability | SKUs |
|---|---|---|
| SLES 12 SP2 | Not offered anymore | S72, S72m, S96, S144, S144m, S192, S192m, S192xm |
| SLES 12 SP3 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm |
| SLES 12 SP4 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m |
| SLES 12 SP5 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m |
| SLES 15 SP1 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m |
| RHEL 7.6 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m |

Persistent memory SKUs

| Operating System | Availability | SKUs |
|---|---|---|
| SLES 12 SP4 | Available | S224oo, S224om, S224ooo, S224oom |

HANA Large Instance Type II

| Operating System | Availability | SKUs |
|---|---|---|
| SLES 12 SP2 | Not offered anymore | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m |
| SLES 12 SP3 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m |
| SLES 12 SP4 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m |
| SLES 12 SP5 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
| SLES 15 SP1 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
| RHEL 7.6 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
| RHEL 7.9 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |

Next steps
Learn more about:

Available SKUs
Upgrading the operating system
Supported scenarios for HANA Large Instances
Install HANA on SAP HANA on Azure
(Large Instances)
Article • 02/10/2023

In this article, we'll walk through installing HANA on SAP HANA on Azure Large
Instances (otherwise known as BareMetal Infrastructure).

Prerequisites
To install HANA on SAP HANA on Azure (Large Instances), first:

Provide Microsoft with all the data to deploy for you on an SAP HANA Large
Instance.
Receive the SAP HANA Large Instance from Microsoft.
Create an Azure virtual network that is connected to your on-premises network.
Connect the ExpressRoute circuit for HANA Large Instances to the same Azure
virtual network.
Install an Azure virtual machine that you use as a jump box for HANA Large
Instances.
Ensure that you can connect from the jump box to your HANA Large Instance and
vice versa (a quick check is sketched after this list).
Check whether all the necessary packages and patches are installed.
Read the SAP notes and documentation about HANA installation on the operating
system you're using. Make sure that the HANA release of choice is supported on
the operating system release.
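A quick two-way connectivity check might look like the following sketch, run from an SSH-capable shell on the jump box. The IP address and user are hypothetical placeholders from your deployment handover.

# Minimal sketch: confirm the HANA Large Instance is reachable from the jump box.
HLI_IP="10.1.2.4"                 # placeholder address of the HANA Large Instance
ping -c 3 "$HLI_IP"               # basic reachability (ICMP must be allowed)
ssh hliuser@"$HLI_IP" hostname    # placeholder user; proves an SSH session works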

Download the SAP HANA installation bits


Now let's download the HANA installation packages to the jump box virtual machine. In
this example, the operating system is Windows.

The HANA Large Instance units aren't directly connected to the internet. You can't
directly download the installation packages from SAP to the HANA Large Instance
unit. Instead, you download the packages to the jump box virtual machine.

You need an SAP S-user or other user, which allows you to access the SAP Marketplace.

1. Sign in, and go to SAP Service Marketplace . Select Download Software >
Installations and Upgrade > By Alphabetical Index. Then select Under H – SAP
HANA Platform Edition > SAP HANA Platform Edition 2.0 > Installation.
Download the files shown in the following screenshot.

2. In this example, we downloaded SAP HANA 2.0 installation packages. On the Azure
jump box virtual machine, expand the self-extracting archives into the directory as
shown below.

3. As the archives are extracted, copy the directory created by the extraction (in this
case, 51052030) from the jump box into the /hana/shared volume of the HANA
Large Instance unit, into a directory you created there.

Important

Don't copy the installation packages into the root or boot LUN. Space is
limited and needs to be used by other processes as well.
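If an SSH-capable shell is available on the jump box, the copy itself can be a single command. This is a minimal sketch; the IP address, user, and target directory are hypothetical placeholders, and the target deliberately sits under /hana/shared rather than the root or boot LUN.

# Minimal sketch: push the extracted installation directory to the HANA Large Instance.
# Address, user, and target directory are placeholders from your own deployment.
scp -r ./51052030 hliuser@10.1.2.4:/hana/shared/hana_install/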
Install SAP HANA on the HANA Large Instance
unit
1. To install SAP HANA, sign in as user root. Only root has enough permissions to
install SAP HANA. Set permissions on the directory you copied over into
/hana/shared.

chmod -R 744 <Installation bits folder>

To install SAP HANA by using the graphical user interface setup, the gtk2 package
needs to be installed on HANA Large Instances. To check whether it's installed, run
the following command:

rpm -qa | grep gtk2

(In later steps, we show the SAP HANA setup with the graphical user interface. If
you'd rather script the installation, see the sketch after these steps.)

2. Go into the installation directory, and navigate into the subdirectory
HDB_LCM_LINUX_X86_64.

Out of that directory, start:

./hdblcmgui

3. Now you'll progress through a sequence of screens in which you provide the data
for the installation. In this example, we're installing the SAP HANA database server
and the SAP HANA client components. So our selection is SAP HANA Database.
4. Select Install New System.
5. Select among several other components that you can install.
6. Choose the SAP HANA Client and the SAP HANA Studio. Also install a scale-up
instance. Then select Single-Host System.
7. Next you'll provide some data. For the installation path, use the /hana/shared
directory.

Important

As HANA System ID (SID), you must provide the same SID as you provided
Microsoft when you ordered the HANA Large Instance deployment. Choosing
a different SID causes the installation to fail, due to access permission
problems on the different volumes.

8. Provide the locations for the HANA data files and the HANA log files.
Note

The SID you specified when you defined system properties (two screens ago)
should match the SID of the mount points. If there's a mismatch, go back and
adjust the SID to the value you have on the mount points.

9. Review the host name and correct it as needed.


10. Retrieve the data you gave to Microsoft when you ordered the HANA Large
Instance deployment.
Important

Provide the System Administrator User ID and ID of User Group that you
provided to Microsoft when you ordered the unit deployment. Otherwise, the
installation of SAP HANA on the HANA Large Instance unit will fail.

11. The next two screens aren't shown here. They enable you to provide the password
for the SYSTEM user of the SAP HANA database, and the password for the sapadm
user. The latter is used for the SAP Host Agent that gets installed as part of the
SAP HANA database instance.

After defining the password, you see a confirmation screen. Check all the data
listed, and continue with the installation. You'll reach a progress screen that
documents the installation progress, like this one:
12. As the installation finishes, you should see a screen like this one:
The SAP HANA instance should now be up and running, and ready for usage. You
can connect to it from SAP HANA Studio. Make sure you check for and apply the
latest updates.
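If you'd rather script the installation than click through the GUI, hdblcm also offers a batch mode. The following is a minimal sketch under stated assumptions: the package and tool names are real, but the paths and parameter values are hypothetical placeholders, so check SAP's hdblcm documentation for the options valid on your HANA release.

# If gtk2 is missing and you still want the GUI installer (SLES shown):
zypper install gtk2

# Unattended alternative: generate a configuration file template once ...
./hdblcm --action=install --dump_configfile_template=/hana/shared/install.cfg
# ... edit it (SID, instance number, passwords), then run the batch install.
./hdblcm --batch --action=install --configfile=/hana/shared/install.cfg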

Next steps
Learn about SAP HANA Large Instances high availability and disaster recovery on Azure.

SAP HANA Large Instances high availability and disaster recovery on Azure
SAP HANA (Large Instances)
architecture on Azure
Article • 02/10/2023

In this article, we'll describe the architecture for deploying SAP HANA on Azure Large
Instances (otherwise known as BareMetal Infrastructure).

At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP
application layer on virtual machines (VMs). The database layer is on the SAP certified
HANA Large Instance (HLI). The HLI is located in the same Azure region as the Azure
IaaS VMs.

Note

Deploy the SAP application layer in the same Azure region as the SAP database
management system (DBMS) layer. This rule is well documented in published
information about SAP workloads on Azure.

Architectural overview
The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-
certified hardware configuration. The hardware is a non-virtualized, bare metal, high-
performance server for the SAP HANA database. It gives you the flexibility to scale
resources for the SAP application layer to meet your needs.
The architecture shown is divided into three sections:

Right: Shows an on-premises infrastructure that runs different applications in data
centers so that end users can access line-of-business (LOB) applications, such as
SAP. Ideally, this on-premises infrastructure is connected to Azure with
ExpressRoute.

Center: Shows Azure IaaS and, in this case, use of VMs to host SAP or other
applications that use SAP HANA as a DBMS. Smaller HANA instances that function
with the memory that VMs provide are deployed in VMs together with their
application layer. For more information about virtual machines, see Virtual
machines .
Azure network services are used to group SAP systems together with other
applications into virtual networks. These virtual networks connect to on-premises
systems and to SAP HANA on Azure (Large Instances).

For SAP NetWeaver applications and databases that are supported to run in Azure,
see SAP Support Note #1928533 – SAP applications on Azure: Supported products
and Azure VM types . For documentation on how to deploy SAP solutions on
Azure, see:
Use SAP on Windows virtual machines
Use SAP solutions on Azure virtual machines

Left: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance
stamp. The HANA Large Instance units connect to the virtual networks of your
Azure subscription using the same technology on-premises servers use to connect
into Azure. In May 2019, we introduced an optimization that allows communication
between the HANA Large Instance units and the Azure VMs without the
ExpressRoute Gateway. This optimization, called ExpressRoute FastPath, is shown in
the preceding diagram by the red lines.

Components of the Azure Large Instance stamp

The Azure Large Instance stamp itself combines the following components:

Computing: Servers based on different generations of Intel Xeon processors that
offer the necessary computing capability and are SAP HANA certified.
Network: A unified high-speed network fabric that interconnects the computing,
storage, and LAN components.
Storage: A storage infrastructure that is accessed through a unified network fabric.
The storage capacity provided depends on the SAP HANA on Azure (Large
Instances) configuration deployed. More storage capacity is available at added
monthly cost.

Tenants
Within the multi-tenant infrastructure of the Large Instance stamp, customers are
deployed as isolated tenants. At deployment of the tenant, you name an Azure
subscription within your Azure enrollment. This Azure subscription is the one the HANA
Large Instance is billed against. These tenants have a 1:1 relationship to the Azure
subscription.
On the network side, it's possible to access a HANA Large Instance deployed in one
tenant in one Azure region from different virtual networks belonging to different Azure
subscriptions. Those Azure subscriptions must belong to the same Azure enrollment.

Availability across regions


As with VMs, SAP HANA on Azure (Large Instances) is offered in multiple Azure regions.
To offer disaster recovery capabilities, you can choose to opt in. Different Large Instance
stamps within one geo-political region are connected to each other. For example, HANA
Large Instance Stamps in US West and US East are connected through a dedicated
network link for disaster recovery replication.

Available SKUs
Just as Azure allows you to choose between different VM types, you can choose from
different SKUs of HANA Large Instances. You can select the SKU appropriate for the
specific SAP HANA workload type. SAP applies memory-to-processor-socket ratios for
varying workloads based on the Intel processor generations. For more information on
available SKUs, see Available SKUs for HLI.

Next steps
Learn about SAP HANA Large Instances network architecture.

SAP HANA (Large Instances) network architecture


SAP HANA (Large Instances) network
architecture
Article • 02/10/2023

In this article, we'll look at the network architecture for deploying SAP HANA on Azure
Large Instances (otherwise known as BareMetal Infrastructure).

The architecture of Azure network services is a key component of successfully deploying
SAP applications on HANA Large Instance. Typically, SAP HANA on Azure (Large
Instances) deployments have a larger SAP landscape. They likely include several SAP
solutions with varying sizes of databases, CPU resource consumption, and memory use.

It's likely that not all IT systems are located in Azure already. Your SAP landscape may be
hybrid as well. Your database management system (DBMS) and SAP application may use
a mixture of NetWeaver, S/4HANA, and SAP HANA. Your SAP application might even use
another DBMS.

Azure offers different services that allow you to run the DBMS, NetWeaver, and
S/4HANA systems in Azure. Azure offers network technology to make Azure look like a
virtual data center to your on-premises software deployments. The Azure network
functionality includes:

Azure virtual networks connected to the ExpressRoute circuit that connects to
your on-premises network assets.
An ExpressRoute circuit that connects on-premises to Azure with a minimum
bandwidth of 1 Gbps. This circuit allows adequate bandwidth for the
transfer of data between on-premises systems and systems that run on virtual
machines (VMs). It also allows adequate bandwidth for connection to Azure
systems from on-premises users.
All SAP systems in Azure set up in virtual networks to communicate with each
other.
Active Directory and DNS hosted on-premises are extended into Azure through
ExpressRoute from on-premises. They may also run completely in Azure.

When integrating HANA Large Instances into the Azure data center network fabric,
Azure ExpressRoute technology is used as well.

Note

One Azure subscription can be linked to only one tenant in a HANA Large
Instance stamp in a specific Azure region. Conversely, a single HANA Large Instance
stamp tenant can be linked to only one Azure subscription. This requirement is
consistent with other billable objects in Azure.

If SAP HANA on Azure (Large Instances) is deployed in multiple Azure regions, a
separate tenant is deployed in the HANA Large Instance stamp. You can run both under
the same Azure subscription provided these instances are part of the same SAP
landscape.

Important

Only the Azure Resource Manager deployment method is supported with SAP
HANA on Azure (Large Instances).

Extra virtual network information

To connect a virtual network to ExpressRoute, an Azure ExpressRoute gateway must be
created. For more information, see About ExpressRoute gateways for ExpressRoute.
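As a concrete illustration, creating such a gateway with the Azure CLI might look like this sketch. The resource names are hypothetical placeholders; the UltraPerformance SKU is used here because it's the SKU recommended (and for Type II SKUs required) for HANA Large Instance connectivity, as discussed later in this article.

# Minimal sketch (hypothetical names): public IP plus ExpressRoute gateway.
az network public-ip create \
  --name hli-gw-pip \
  --resource-group sap-network-rg

az network vnet-gateway create \
  --name hli-er-gateway \
  --resource-group sap-network-rg \
  --vnet sap-vnet \
  --public-ip-address hli-gw-pip \
  --gateway-type ExpressRoute \
  --sku UltraPerformance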

An Azure ExpressRoute gateway is used with ExpressRoute to an infrastructure outside
of Azure or to an Azure Large Instance stamp. You can connect the Azure ExpressRoute
gateway to a maximum of four ExpressRoute circuits, but only if those connections
come from different Microsoft Enterprise Edge Routers (MSEEs). For more information,
see SAP HANA (Large Instances) infrastructure and connectivity on Azure.

Note

The maximum throughput you can achieve with an ExpressRoute gateway is 10 Gbps
by using an ExpressRoute connection. Copying files between a VM that resides in a
virtual network and a system on-premises (as a single copy stream) doesn't achieve
the full throughput of the different gateway SKUs. To leverage the complete
bandwidth of the ExpressRoute gateway, either copy different files in parallel
streams or copy a single file in multiple parallel streams.

Networking architecture for HANA Large Instance

The networking architecture for HANA Large Instances can be separated into four parts:
On-premises networking and ExpressRoute connection to Azure. This part is your
(the customer's) domain and is connected to Azure through ExpressRoute. This
ExpressRoute circuit is fully paid by you. The bandwidth should be large enough to
handle the network traffic between your on-premises assets and the Azure region
you're connecting with. See the lower right in the following figure.
Azure network services, as previously discussed, with virtual networks, which again
need ExpressRoute gateways added. For this part, you need to create the
appropriate designs to meet your application, security, and compliance
requirements. Consider whether to use HANA Large Instances given the number of
virtual networks and Azure gateway SKUs to choose from. See the upper right in
the figure.
Connectivity of your HANA Large Instance through ExpressRoute into Azure. This
part is deployed and handled by Microsoft. All you need to do is provide some IP
address ranges after you've deployed your assets in the HANA Large Instance and
connected the ExpressRoute circuit to the virtual networks. For more information,
see SAP HANA (Large Instances) infrastructure and connectivity on Azure. There's
no added fee for the connectivity between the Azure data center network fabric
and HANA Large Instance units.
Networking within the HANA Large Instance stamp, which is mostly transparent for
you.
The following two requirements still hold even though you use HANA Large Instances:

Your on-premises assets must connect through ExpressRoute to Azure.
You need one or more virtual networks that run your VMs. These VMs host the
application layer that connects to the HANA instances hosted in HANA Large
Instances.

The differences in SAP deployments in Azure are:

The HANA Large Instances of your tenant are connected through another
ExpressRoute circuit into your virtual networks. The on-premises to Azure virtual
network ExpressRoute circuits and the circuits between Azure virtual networks and
HANA Large Instances don't share the same routers. Their load conditions remain
separate.
The workload profile between the SAP application layer and the HANA Large
Instance is of a different nature. SAP HANA generates many small requests and
bursts like data transfers (result sets) into the application layer.
The SAP application architecture is more sensitive to network latency than typical
scenarios where data is exchanged between on-premises and Azure.
The Azure ExpressRoute gateway has at least two ExpressRoute connections. One
circuit is connected from on-premises and one is connected from the HANA Large
Instance. This configuration leaves only room for two more circuits from different
MSEEs to connect to the ExpressRoute Gateway. This restriction is independent of
the usage of ExpressRoute FastPath. All the connected circuits share the maximum
bandwidth for incoming data of the ExpressRoute gateway.

With Revision 3 of HANA Large Instance stamps, the network latency between VMs and
HANA Large Instance units can be higher than typical VM-to-VM network round-trip
latencies. Depending on the Azure region, values can exceed the 0.7-ms round-trip
latency classified as below average in SAP Note #1100926 - FAQ: Network
performance . Depending on Azure Region and the tool to measure network round-
trip latency between an Azure VM and HANA Large Instance, the latency can be up to 2
milliseconds. Still, customers successfully deploy SAP HANA-based production SAP
applications on SAP HANA Large Instances. Make sure you test your business processes
thoroughly with Azure HANA Large Instances. A new functionality, called ExpressRoute
FastPath, can substantially reduce the network latency between HANA Large Instances
and application layer VMs in Azure (see below).

Revision 4 of HANA Large Instance stamps improves network latency between Azure
VMs deployed in proximity to the HANA Large Instance stamp. Latency meets the
average or better than average classification as documented in SAP Note #1100926 -
FAQ: Network performance if Azure ExpressRoute FastPath is configured (see below).

To deploy Azure VMs in proximity to HANA Large Instances of Revision 4, you need to
apply Azure Proximity Placement Groups. Proximity placement groups can be used to
locate the SAP application layer in the same Azure datacenter as Revision 4 hosted
HANA Large Instances. For more information, see Azure Proximity Placement Groups for
optimal network latency with SAP applications.
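As an illustration, placing application-layer VMs with the Azure CLI might look like the following sketch; the names, size, and image alias are hypothetical placeholders.

# Minimal sketch: create a proximity placement group and deploy an SAP
# application-layer VM into it.
az ppg create \
  --name sap-app-ppg \
  --resource-group sap-rg \
  --location westus2

az vm create \
  --name sap-app-vm1 \
  --resource-group sap-rg \
  --image SLES \
  --size Standard_E16s_v3 \
  --ppg sap-app-ppg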

To provide deterministic network latency between VMs and HANA Large Instances, the
choice of the ExpressRoute gateway SKU is essential. Unlike the traffic patterns between
on-premises and VMs, the traffic patterns between VMs and HANA Large Instances can
develop small but high bursts of requests and data volumes. To handle such bursts, we
highly recommend using the UltraPerformance gateway SKU. For the Type II class of
HANA Large Instance SKUs, using the UltraPerformance gateway SKU as an ExpressRoute
gateway is mandatory.

Important

Given the overall network traffic between the SAP application and database layers,
only the HighPerformance or UltraPerformance gateway SKUs for virtual networks
are supported for connecting to SAP HANA on Azure (Large Instances). For HANA
Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported
as an ExpressRoute gateway. Exceptions apply when using ExpressRoute FastPath
(see below).

ExpressRoute FastPath
In May 2019, we released ExpressRoute FastPath. FastPath lowers the latency between
HANA Large Instances and Azure virtual networks that host the SAP application VMs.
With FastPath, the data flows between VMs and HANA Large Instances aren't routed
through the ExpressRoute gateway. The VMs assigned in the subnet(s) of the Azure
virtual network directly communicate with the dedicated enterprise edge router.

Important

ExpressRoute FastPath requires that the subnets running the SAP application VMs
are in the same Azure virtual network that is connected to the HANA Large
Instances. VMs located in Azure virtual networks that are peered with the Azure
virtual network connected to the HANA Large Instance units don't benefit from
ExpressRoute FastPath. As a result, in typical hub-and-spoke virtual network
designs, where the ExpressRoute circuits connect to a hub virtual network and the
virtual networks containing the SAP application layer (spokes) are peered to it, the
ExpressRoute FastPath optimization won't work. ExpressRoute FastPath also doesn't
currently support user-defined routing rules (UDR). For more information, see
ExpressRoute virtual network gateway and FastPath.

For more information on how to configure ExpressRoute FastPath, see Connect a virtual
network to HANA large instances.

Note

An UltraPerformance ExpressRoute gateway is required to use ExpressRoute
FastPath.
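As a sketch, FastPath is enabled on the connection between the virtual network gateway and the ExpressRoute circuit that connects to the HANA Large Instances. All names and IDs below are hypothetical placeholders; verify the exact parameters against the current Azure CLI reference:

```bash
# Create the gateway-to-circuit connection with FastPath (gateway bypass)
# enabled. Names and the circuit resource ID are hypothetical placeholders.
az network vpn-connection create \
  --resource-group my-sap-rg \
  --name hli-er-connection \
  --vnet-gateway1 hli-er-gateway \
  --express-route-circuit2 "<circuit-resource-id>" \
  --express-route-gateway-bypass true
```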
Single SAP system
The on-premises infrastructure previously shown is connected through ExpressRoute
into Azure. The ExpressRoute circuit connects into a Microsoft enterprise edge (MSEE)
router. For more information, see ExpressRoute technical overview. After the route is
established, it connects into the Azure backbone.

Note

To run SAP landscapes in Azure, connect to the enterprise edge router closest to
the Azure region in the SAP landscape. HANA Large Instance stamps are connected
through dedicated enterprise edge routers to minimize network latency between
VMs in Azure IaaS and HANA Large Instance stamps.

The ExpressRoute gateway for the VMs that host SAP application instances is
connected to one ExpressRoute circuit that connects to on-premises. The same virtual
network is connected to a separate enterprise edge router. That edge router is
dedicated to connecting to Large Instance stamps. Again, with FastPath, the data flow
from HANA Large Instances to the SAP application layer VMs isn't routed through the
ExpressRoute gateway. This configuration reduces the network round-trip latency.

This system is a straightforward example of a single SAP system. The SAP application
layer is hosted in Azure. The SAP HANA database runs on SAP HANA on Azure (Large
Instances). The assumption is that the ExpressRoute gateway bandwidth of 2-Gbps or
10-Gbps throughput doesn't represent a bottleneck.

Multiple SAP systems or large SAP systems


If you deploy multiple SAP systems or large SAP systems connecting to SAP HANA
(Large Instances), the throughput of the ExpressRoute gateway might become a
bottleneck. In that case, split the application layers into multiple virtual networks. You
can also split the application layers if you want to isolate production and non-
production systems in different Azure virtual networks.

You might create a special virtual network that connects to HANA Large Instances when:

Doing backups directly from the HANA instances in a HANA Large Instance to a
VM in Azure that hosts NFS shares.
Copying large backups or other files from HANA Large Instances to disk space
managed in Azure.

Use a separate virtual network to host VMs that manage storage for mass transfer of
data between HANA Large Instances and Azure. This arrangement avoids large file or
data transfer from HANA Large Instances to Azure on the ExpressRoute gateway that
serves the VMs running the SAP application layer.

For a more scalable network architecture:

Use multiple virtual networks for a single, larger SAP application layer.

Deploy one separate virtual network for each SAP system deployed, compared to
combining these SAP systems in separate subnets under the same virtual network.

The following diagram shows a more scalable networking architecture for SAP
HANA on Azure (Large Instances):
Depending on the rules and restrictions you want to apply between the different virtual
networks hosting VMs of different SAP systems, you should peer those virtual networks.
For more information about virtual network peering, see Virtual network peering.
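A minimal Azure CLI sketch of peering two such virtual networks follows; the resource names are hypothetical examples, and the peering must also be created in the reverse direction:

```bash
# Peer vnet-sap-a to vnet-sap-b (repeat with the names swapped for the
# reverse direction).
az network vnet peering create \
  --resource-group my-sap-rg \
  --name sap-a-to-sap-b \
  --vnet-name vnet-sap-a \
  --remote-vnet vnet-sap-b \
  --allow-vnet-access
```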

Routing in Azure
With the default deployment, three network routing considerations are important for
SAP HANA on Azure (Large Instances):

SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs
and the dedicated ExpressRoute connection, not directly from on-premises. Direct
access from on-premises to the HANA Large Instance units, as delivered by
Microsoft to you, isn't possible immediately. The transitive routing restrictions are
because of the current Azure network architecture used for SAP HANA Large
Instances. Some administration clients and any applications that need direct
access, such as SAP Solution Manager running on-premises, can't connect to the
SAP HANA database. For exceptions, see the following section, Direct Routing to
HANA Large Instances.

If you have HANA Large Instance units deployed in two different Azure regions for
disaster recovery, the same transitive routing restrictions apply as in the past. In
other words, IP addresses of a HANA Large Instance in one region (for example, US
West) aren't routed to a HANA Large Instance deployed in another region (for
example, US East). This restriction is independent of the use of Azure network
peering across regions or cross-connecting the ExpressRoute circuits that connect
HANA Large Instances to virtual networks. For a graphic representation, see the
figure in the section, Use HANA Large Instance units in multiple regions. This
restriction, which came with the deployed architecture, prohibited the immediate
use of HANA system replication for disaster recovery. For recent changes, again,
see Use HANA Large Instance units in multiple regions.

SAP HANA on Azure Large Instances has an assigned IP address from the server IP
pool address range that you submitted when requesting the HANA Large Instance
deployment. For more information, see SAP HANA (Large Instances) infrastructure
and connectivity on Azure. This IP address is accessible through the Azure
subscriptions and circuit that connects Azure virtual networks to HANA Large
Instances. The IP address assigned out of that server IP pool address range is
directly assigned to the hardware unit. It's not assigned through network address
translation (NAT) anymore, as was the case in the first deployments of this solution.

Direct Routing to HANA Large Instances


By default, the transitive routing doesn't work in these scenarios:

Between HANA Large Instance units and an on-premises deployment.

Between HANA Large Instance units deployed in different regions.

There are three ways to enable transitive routing in those scenarios:

Use a reverse proxy to route data to and from HANA Large Instances. For example,
deploy F5 BIG-IP or NGINX with Traffic Manager in the Azure virtual network that
connects to HANA Large Instances and to on-premises as a virtual firewall/traffic
routing solution.
Use IPTables rules in a Linux VM to enable routing between on-premises locations
and HANA Large Instance units, or between HANA Large Instance units in different
regions (see the sketch after this list). The VM running IPTables must be deployed
in the Azure virtual network that connects to HANA Large Instances and to
on-premises. The VM must be sized so that its network throughput is sufficient for
the expected network traffic. For more information on VM network bandwidth, see
Sizes of Linux virtual machines in Azure.
Use Azure Firewall to enable direct traffic between on-premises and HANA Large
Instance units.
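The following is a minimal sketch of the IPTables approach named above; the address ranges are hypothetical examples, and a production setup also needs Azure user-defined routes that send the traffic through this VM:

```bash
# Run on the Linux routing VM. All address ranges are examples only.
# Enable IPv4 forwarding (persist the setting in /etc/sysctl.conf).
sudo sysctl -w net.ipv4.ip_forward=1

# Allow forwarding between an on-premises range and the HANA Large
# Instance server IP pool, in both directions.
sudo iptables -A FORWARD -s 10.1.0.0/24 -d 10.250.0.0/24 -j ACCEPT
sudo iptables -A FORWARD -s 10.250.0.0/24 -d 10.1.0.0/24 -j ACCEPT

# Refuse to forward anything else.
sudo iptables -P FORWARD DROP
```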

All the traffic of these solutions would be routed through an Azure virtual network. As
such, the traffic could also be restricted by the software appliances used or by Azure
Network Security Groups. In this way, specific IP addresses or IP address ranges from
on-premises could either be blocked or explicitly allowed access to HANA Large Instances.

Note

Be aware that implementation and support for custom solutions involving third-
party network appliances or IPTables isn't provided by Microsoft. Support must be
provided by the vendor of the component used or by the integrator.

ExpressRoute Global Reach

Microsoft introduced a new functionality called ExpressRoute Global Reach. Global
Reach can be used for HANA Large Instances in two scenarios:

Enable direct access from on-premises to your HANA Large Instance units
deployed in different regions.
Enable direct communication between your HANA Large Instance units deployed
in different regions.

Direct Access from on-premises

In Azure regions where Global Reach is offered, you can request enabling Global Reach
for your ExpressRoute circuit. That circuit connects your on-premises network to the
Azure virtual network that connects to your HANA Large Instances. There are costs for
the on-premises side of your ExpressRoute circuit. For more information, see the pricing
for Global Reach Add-On . You won't pay added costs for the circuit that connects the
HANA Large Instances to Azure.
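For illustration, cross-connecting the private peerings of the two circuits with the Azure CLI can be sketched as follows; the names, the peer circuit ID, and the /29 prefix (the additional range described later in the deployment requirements) are placeholders:

```bash
# Enable Global Reach between two ExpressRoute circuits by connecting
# their private peerings. All names, IDs, and prefixes are hypothetical.
az network express-route peering connection create \
  --resource-group my-sap-rg \
  --circuit-name onprem-er-circuit \
  --peering-name AzurePrivatePeering \
  --name onprem-to-hli \
  --peer-circuit "<hli-circuit-resource-id>" \
  --address-prefix 10.17.255.248/29
```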

Important

When using Global Reach to enable direct access between your HANA Large
Instance units and on-premises assets, the network data and control flow is not
routed through Azure virtual networks. Instead, network data and control flow is
routed directly between the Microsoft enterprise exchange routers. So any NSG or
ASG rules, or any type of firewall, NVA, or proxy you deployed in an Azure virtual
network, won't be touched. If you use ExpressRoute Global Reach to enable direct
access from on-premises to HANA Large instance units, restrictions and
permissions to access HANA large Instance units need to be defined in firewalls
on the on-premises side.

Connecting HANA Large Instances in different Azure regions

Similarly, ExpressRoute Global Reach can be used to connect two HANA Large Instance
tenants deployed in different regions. Global Reach cross-connects the ExpressRoute
circuits that your HANA Large Instance tenants use to connect to Azure in both regions. There are no
added charges for connecting two HANA Large Instance tenants deployed in different
regions.

Important

The data flow and control flow of the network traffic between the HANA Large
instance tenants won't be routed through Azure networks. So you can't use Azure
functionality or network virtual appliances (NVAs) to enforce communication
restrictions between your HANA Large Instances tenants.

For more information on how to enable ExpressRoute Global Reach, see Connect a
virtual network to HANA large instances.

Internet connectivity of HANA Large Instance


HANA Large Instances don't have direct internet connectivity. As an example, this
limitation might restrict your ability to register the OS image directly with the OS
vendor. You might need to work with your local SUSE Linux Enterprise Server
Subscription Management Tool server or Red Hat Enterprise Linux Subscription
Manager.

Data encryption between VMs and HANA Large Instance
Data transferred between HANA Large Instances and VMs isn't encrypted. Purely for the
exchange between the HANA DBMS side and JDBC/ODBC-based applications, however,
you can enable encryption of traffic. For more information, see Secure Communication
Between SAP HANA and JDBC/ODBC Clients .
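As an example of that client-side encryption, a TLS-encrypted hdbsql connection can be sketched as follows; host, port, and credentials are placeholders, and the flags should be verified against your SAP HANA client version:

```bash
# Connect to HANA over TLS. -e requests encryption; -ssltrustcert skips
# server certificate validation (use proper certificates in production).
hdbsql -n <hana-host>:30015 -u <user> -p '<password>' -e -ssltrustcert \
  "SELECT * FROM DUMMY"
```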

Use HANA Large Instance units in multiple regions
For disaster recovery, you need to have HANA Large Instance units in multiple Azure
regions. With Azure global virtual network peering alone, transitive routing between
HANA Large Instance tenants in different regions won't work by default. Global Reach, however,
opens up communication between HANA Large Instance units in different regions. This
scenario using ExpressRoute Global Reach enables:

HANA system replication without any more proxies or firewalls.


Copying backups between HANA Large Instance units in different regions to make
system copies or do system refreshes.

The preceding figure shows how the virtual networks in both regions are connected to
two ExpressRoute circuits. The circuits are used to connect to SAP HANA on Azure
(Large Instances) in both Azure regions (grey lines). The reason for the two cross
connections is to protect from an outage of the MSEEs on either side. The
communication flow between the two virtual networks in the two Azure regions is
supposed to be handled over the global peering of the two virtual networks in the two
different regions (blue dotted line). The thick red line describes the ExpressRoute Global
Reach connection. This connection allows the HANA Large Instance units of your tenants
in different regions to communicate with each other.
Important

If you used multiple ExpressRoute circuits, use AS Path prepending and Local
Preference BGP settings to ensure proper routing of traffic.

Next steps
Learn about the storage architecture of SAP HANA (Large Instances).

SAP HANA (Large Instances) storage architecture


SAP HANA (Large Instances) storage
architecture
Article • 02/10/2023

In this article, we'll look at the storage architecture for deploying SAP HANA on Azure
Large Instances (also known as BareMetal Infrastructure).

The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP
HANA on Azure Service Management per SAP recommended guidelines.

The Type I class of HANA Large Instances comes with four times the memory volume as
storage volume. The Type II class of HANA Large Instances comes with a volume
intended for storing HANA transaction log backups. For more information, see Install
and configure SAP HANA (Large Instances) on Azure.

See the following table for storage allocation. The table lists the rough capacity for
volumes provided with the different HANA Large Instance units.

HANA Large Instance SKU hana/data hana/log hana/shared hana/logbackups

S72 1,280 GB 512 GB 768 GB 512 GB

S72m 3,328 GB 768 GB 1,280 GB 768 GB

S96 1,280 GB 512 GB 768 GB 512 GB

S192 4,608 GB 1,024 GB 1,536 GB 1,024 GB

S192m 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S192xm 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S384 11,520 GB 1,536 GB 1,792 GB 1,536 GB

S384m 12,000 GB 2,050 GB 2,050 GB 2,040 GB

S384xm 16,000 GB 2,050 GB 2,050 GB 2,040 GB

S384xxm 20,000 GB 3,100 GB 2,050 GB 3,100 GB

S576m 20,000 GB 3,100 GB 2,050 GB 3,100 GB

S576xm 31,744 GB 4,096 GB 2,048 GB 4,096 GB

S768m 28,000 GB 3,100 GB 2,050 GB 3,100 GB

S768xm 40,960 GB 6,144 GB 4,096 GB 6,144 GB



S960m 36,000 GB 4,100 GB 2,050 GB 4,100 GB

S896m 33,792 GB 512 GB 1,024 GB 512 GB

More recent SKUs of HANA Large Instances are delivered with the following storage
configurations.

HANA Large Instance SKU hana/data hana/log hana/shared hana/logbackups

S224 4,224 GB 512 GB 1,024 GB 512 GB

S224oo 6,336 GB 512 GB 1,024 GB 512 GB

S224m 8,448 GB 512 GB 1,024 GB 512 GB

S224om 8,448 GB 512 GB 1,024 GB 512 GB

S224ooo 10,560 GB 512 GB 1,024 GB 512 GB

S224oom 12,672 GB 512 GB 1,024 GB 512 GB

S448 8,448 GB 512 GB 1,024 GB 512 GB

S448oo 12,672 GB 512 GB 1,024 GB 512 GB

S448m 16,896 GB 512 GB 1,024 GB 512 GB

S448om 16,896 GB 512 GB 1,024 GB 512 GB

S448ooo 21,120 GB 512 GB 1,024 GB 512 GB

S448oom 25,344 GB 512 GB 1,024 GB 512 GB

S672 12,672 GB 512 GB 1,024 GB 512 GB

S672oo 19,008 GB 512 GB 1,024 GB 512 GB

S672m 25,344 GB 512 GB 1,024 GB 512 GB

S672om 25,344 GB 512 GB 1,024 GB 512 GB

S672ooo 31,680 GB 512 GB 1,024 GB 512 GB

S672oom 38,016 GB 512 GB 1,024 GB 512 GB

S896 16,896 GB 512 GB 1,024 GB 512 GB

S896oo 25,344 GB 512 GB 1,024 GB 512 GB

S896om 33,792 GB 512 GB 1,024 GB 512 GB



S896ooo 42,240 GB 512 GB 1,024 GB 512 GB

S896oom 50,688 GB 512 GB 1,024 GB 512 GB

Actual deployed volumes might vary based on deployment and the tool used to show
the volume sizes.

If you subdivide a HANA Large Instance SKU, a few examples of possible division pieces
might look like:

Memory partition in GB hana/data hana/log hana/shared hana/log/backup

256 400 GB 160 GB 304 GB 160 GB

512 768 GB 384 GB 512 GB 384 GB

768 1,280 GB 512 GB 768 GB 512 GB

1,024 1,792 GB 640 GB 1,024 GB 640 GB

1,536 3,328 GB 768 GB 1,280 GB 768 GB

These sizes are rough volume numbers that can vary slightly based on deployment and
the tools used to look at the volumes. There are also other partition sizes, such as 2.5 TB.
These storage sizes are calculated using a formula similar to the one used for the
previous partitions. The term "partitions" doesn't mean the operating system, memory,
or CPU resources are partitioned. It indicates storage partitions for the different HANA
instances you might want to deploy on one single HANA Large Instance unit.

If you need more storage, you can buy more in 1-TB units. The extra storage may be
added as more volume or used to extend one or more of the existing volumes. You can't
reduce the sizes of the volumes as originally deployed and as documented by the
previous tables. You also aren't able to change the names of the volumes or mount
names. The storage volumes previously described are attached to the HANA Large
Instance units as NFS4 volumes.

You can use storage snapshots for backup and restore and disaster recovery purposes.
For more information, see SAP HANA (Large Instances) high availability and disaster
recovery on Azure.

For more information on the storage layout for your scenario, see HLI supported
scenarios.
Run multiple SAP HANA instances on one
HANA Large Instance unit
It's possible to host more than one active SAP HANA instance on HANA Large Instance
units. To provide the capabilities of storage snapshots and disaster recovery, such a
configuration requires a volume set per instance. Currently, HANA Large Instance units
can be subdivided as follows:

S72, S72m, S96, S144, S192: In increments of 256 GB, with 256 GB as the smallest
starting unit. Different increments such as 256 GB and 512 GB can be combined to
the maximum memory of the unit.
S144m and S192m: In increments of 256 GB, with 512 GB as the smallest unit.
Different increments such as 512 GB and 768 GB can be combined to the
maximum memory of the unit.
Type II class: In increments of 512 GB, with the smallest starting unit of 2 TB.
Different increments such as 512 GB, 1 TB, and 1.5 TB can be combined to the
maximum memory of the unit.

The following examples show what it might look like running multiple SAP HANA
instances.

SKU Memory size Storage size Sizes with multiple databases

S72 768 GB 3 TB 1x768-GB HANA instance


or 1x512-GB instance + 1x256-GB instance
or 3x256-GB instances

S72m 1.5 TB 6 TB 3x512-GB HANA instances


or 1x512-GB instance + 1x1-TB instance
or 6x256-GB instances
or 1x1.5-TB instance

S192m 4 TB 16 TB 8x512-GB instances


or 4x1-TB instances
or 4x512-GB instances + 2x1-TB instances
or 4x768-GB instances + 2x512-GB instances
or 1x4-TB instance

S384xm 8 TB 22 TB 4x2-TB instances


or 2x4-TB instances
or 2x3-TB instances + 1x2-TB instances
or 2x2.5-TB instances + 1x3-TB instances
or 1x8-TB instance

There are other variations as well.


Encryption of data at rest
The storage for HANA Large Instances uses transparent encryption for the data as it's
stored on the disks. In deployments done before the end of 2018, you could choose to
have the volumes encrypted. If you decided against that option, you can request to
have the volumes encrypted online. The move from non-encrypted to encrypted
volumes is transparent and doesn't require downtime.

With the Type I class of SKUs of HANA Large Instance, the volume storing the boot LUN
is encrypted. In Revision 3 HANA Large Instance stamps using Type II class of SKUs, you
need to encrypt the boot LUN with OS methods. In Revision 4 HANA Large Instance
stamps using Type II class of SKUs, the volume storing the boot LUN is encrypted at rest
by default.

Required settings for larger HANA instances on HANA Large Instances
The storage used in HANA Large Instances has a file size limitation of 16 TB per file.
Unlike the file size limitations of EXT3 file systems, HANA isn't implicitly aware of the
limit enforced by the HANA Large Instances storage. As a result, HANA won't
automatically create a new data file when the 16-TB file size limit is reached. As HANA
attempts to grow a file beyond 16 TB, HANA reports errors and the index server
eventually crashes.

Important

In order to prevent HANA from trying to grow data files beyond the 16 TB file size
limit of HANA Large Instance storage, you need to set the following parameters in
the global.ini configuration file of HANA:

datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285
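A minimal sketch of applying these settings follows, assuming the parameters belong in the [persistence] section of global.ini as described in SAP Note #2400005; verify the section and the file path for your installation:

```bash
# Append the striping parameters to the system-wide custom global.ini.
# <SID> is a placeholder; back up the file before editing.
cat >> /hana/shared/<SID>/global/hdb/custom/config/global.ini <<'EOF'
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000
EOF
```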

Next steps
Learn about deploying SAP HANA (Large Instances).
SAP HANA (Large Instances) deployment
Supported scenarios for HANA Large
Instances
Article • 02/10/2023

This article describes the supported scenarios and architectural details for HANA Large
Instances (HLI).

Note

If your scenario isn't mentioned in this article, contact the Microsoft Service
Management team to assess your requirements. Before you set up the HLI unit,
validate the design with SAP or your service implementation partner.

Terms and definitions


Let's understand the terms and definitions used in this article:

SID: A system identifier for the HANA system.


HLI: HANA Large Instances.
DR: Disaster recovery.
Normal DR: A system setup with a dedicated resource for DR purposes only.
Multipurpose DR: A DR site system that's configured to use a non-production
environment alongside a production instance that's configured for a DR event.
Single-SID: A system with one instance installed.
Multi-SID: A system with multiple instances configured; also called an MCOS
environment.
HSR: SAP HANA system replication.

Overview
HANA Large Instances support various architectures to help you accomplish your
business requirements. The following sections cover the architectural scenarios and their
configuration details.

The derived architectural designs are purely from an infrastructure perspective. Consult
SAP or your implementation partners for the HANA deployment. If your scenarios aren't
listed in this article, contact the Microsoft account team to review the architecture and
derive a solution for you.
Note

These architectures are fully compliant with the SAP HANA tailored data center
integration (TDI) design and are supported by SAP.

This article describes the details of the two components in each supported architecture:

Ethernet
Storage

Ethernet
Each provisioned server comes preconfigured with sets of Ethernet interfaces. The
Ethernet interfaces configured on each HLI unit are categorized into four types:

A: Used for client access.


B: Used for node-to-node communication. This interface is configured on all
servers no matter what topology you request. However, it's used only for scale-out
scenarios.
C: Used for node-to-storage connectivity.
D: Used for node-to-iSCSI device connection for fencing setup. This interface is
configured only when an HSR setup is requested.


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Node-to-node

C TYPE I eth1.tenant eno2.tenant Node-to-


storage

D TYPE I eth4.tenant eno4.tenant Fencing

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-


storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Fencing


You choose the interface based on the topology that's configured on the HLI unit. For
example, interface “B” is set up for node-to-node communication, which is useful when
you have a scale-out topology configured. This interface isn't used for single node scale-
up configurations. For more information about interface usage, review your required
scenarios (later in this article).

If necessary, you can define more NIC cards on your own. However, the configurations
of existing NICs can't be changed.

Note

You might find additional interfaces that are physical interfaces or bonding.
Consider only the previously mentioned interfaces for your use case. Ignore any
others.

The distribution for units with two assigned IP addresses should look as follows:

Ethernet “A” should have an assigned IP address that's within the server IP pool
address range that you submitted to Microsoft. This IP address should be
maintained in the /etc/hosts directory of the operating system (OS).

Ethernet “C” should have an assigned IP address that's used for communication to
NFS. You don't need to maintain this address in the etc/hosts directory to allow
instance-to-instance traffic within the tenant.

For HANA system replication or HANA scale-out deployment, a blade configuration with
two assigned IP addresses isn't suitable. If you have only two assigned IP addresses, and
you want to deploy such a configuration, contact SAP HANA on Azure Service
Management. They can assign you a third IP address in a third VLAN. For HANA Large
Instances with three assigned IP addresses on three NIC ports, the following usage rules
apply:

Ethernet “A” should have an assigned IP address that's outside of the server IP
pool address range that you submitted to Microsoft. This IP address shouldn't be
maintained in the etc/hosts directory of the OS.

Ethernet “B” should be maintained exclusively in the etc/hosts directory for


communication between the various instances. Maintain these IP addresses in
scale-out HANA configurations as the IP addresses that HANA uses for the inter-
node configuration.

Ethernet “C” should have an assigned IP address that's used for communication to
NFS storage. This type of address shouldn't be maintained in the etc/hosts
directory.

Ethernet “D” should be used exclusively for access to fencing devices for
Pacemaker. This interface is required when you configure HANA system replication
and want to achieve auto failover of the operating system by using an SBD-based
device.

Storage
Storage is preconfigured based on the requested topology. The volume sizes and mount
points vary depending on the number of servers and SKUs, and the configured
topology. For more information, review your required scenarios (later in this article). If
you require more storage, you can purchase it in 1-TB increments.

Note

The mount point /usr/sap/<SID> is a symbolic link to the /hana/shared mount point.

Supported scenarios
The architectural diagrams in the next sections use the following notations:

Here are the supported scenarios:

Single node with one SID


Single node MCOS
Single node with DR (normal)
Single node with DR (multipurpose)
HSR with fencing
HSR with DR (normal/multipurpose)
Host auto failover (1+1)
Scale-out with standby
Scale-out without standby
Scale-out with DR

Single node with one SID


This topology supports one node in a scale-up configuration with one SID.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:

NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

/hana/shared/SID HANA installation

/hana/data/SID/mnt00001 Data files installation

/hana/log/SID/mnt00001 Log files installation

/hana/logbackups/SID Redo logs

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.

Single node MCOS


This topology supports one node in a scale-up configuration with multiple SIDs.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI



B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

/hana/shared/SID1 HANA installation for SID1

/hana/data/SID1/mnt00001 Data files installation for SID1

/hana/log/SID1/mnt00001 Log files installation for SID1

/hana/logbackups/SID1 Redo logs for SID1

/hana/shared/SID2 HANA installation for SID2

/hana/data/SID2/mnt00001 Data files installation for SID2

/hana/log/SID2/mnt00001 Log files installation for SID2

/hana/logbackups/SID2 Redo logs for SID2

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Volume size distribution is based on the database size in memory. To learn what
database sizes in memory are supported in a multi-SID environment, see Overview
and architecture.

Single node with DR using storage replication


This topology supports one node in a scale-up configuration with one or multiple SIDs.
Storage-based replication to the DR site is used for a primary SID. In the diagram, only a
single-SID system is shown at the primary site, but MCOS systems are supported as well.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI


B TYPE I eth2.tenant eno3.tenant Configured but not in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

/hana/shared/SID HANA installation for SID

/hana/data/SID/mnt00001 Data files installation for SID

/hana/log/SID/mnt00001 Log files installation for SID

/hana/logbackups/SID Redo logs for SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.

Single node with DR (multipurpose) using storage replication
This topology supports one node in a scale-up configuration with one or multiple SIDs.
Storage-based replication to the DR site is used for a primary SID.

In the diagram, only a single-SID system is shown at the primary site, but multi-SID
(MCOS) systems are supported as well. At the DR site, the HLI unit is used for the QA
instance. Production operations run from the primary site. During DR failover (or failover
test), the QA instance at the DR site is taken down.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

At the primary site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

At the DR site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID



/hana/shared/QA-SID HANA installation for QA SID

/hana/data/QA-SID/mnt00001 Data files installation for QA SID

/hana/log/QA-SID/mnt00001 Log files installation for QA SID

/hana/logbackups/QA-SID Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.

HSR with fencing for high availability


This topology supports two nodes for the HANA system replication configuration. This
configuration is supported only for single HANA instances on a node. MCOS scenarios
aren't supported.

Note

As of December 2019, this architecture is supported only for the SUSE operating
system.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Used for fencing



A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Used for fencing

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary node

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

On the secondary node

/hana/shared/SID HANA installation for secondary SID

/hana/data/SID/mnt00001 Data files installation for secondary SID

/hana/log/SID/mnt00001 Log files installation for secondary SID

/hana/logbackups/SID Redo logs for secondary SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
Fencing: An SBD is configured for the fencing device setup. However, the use of
fencing is optional.

High availability with HSR and DR with storage replication
This topology supports two nodes for the HANA system replication configuration. Both
normal and multipurpose DRs are supported. These configurations are supported only
for single HANA instances on a node. MCOS scenarios aren't supported with these
configurations.

In the diagram, a multipurpose scenario is shown at the DR site, where the HLI unit is
used for the QA instance. Production operations run from the primary site. During DR
failover (or failover test), the QA instance at the DR site is taken down.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI


B TYPE I eth2.tenant eno3.tenant Configured but not in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Used for fencing

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Used for fencing

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary node at the primary site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

On the secondary node at the primary site

/hana/shared/SID HANA installation for secondary SID

/hana/data/SID/mnt00001 Data files installation for secondary SID

/hana/log/SID/mnt00001 Log files installation for secondary SID

/hana/logbackups/SID Redo logs for secondary SID

At the DR site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID



/hana/log/SID/mnt00001 Log files installation for production SID

/hana/shared/QA-SID HANA installation for QA SID

/hana/data/QA-SID/mnt00001 Data files installation for QA SID

/hana/log/QA-SID/mnt00001 Log files installation for QA SID

/hana/logbackups/QA-SID Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
Fencing: An SBD is configured for the fencing setup. However, the use of fencing is
optional.
At the DR site: Two sets of storage volumes are required for primary and secondary
node replication.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.

Host auto failover (1+1)


This topology supports two nodes in a host auto failover configuration. There's one
node with a primary/worker role and another as a standby. SAP supports this scenario
only for S/4 HANA. For more information, see OSS note 2408419 - SAP S/4HANA -
Multi-Node Support .

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI



B TYPE I eth2.tenant eno3.tenant Node-to-node


communication

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not in


use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node


communication

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not in


use

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary and standby nodes

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
On standby: The volumes and mount points are configured (marked as “Required
for HANA installation”) for the HANA instance installation at the standby unit.

Scale-out with standby


This topology supports multiple nodes in a scale-out configuration. There's one node
with a primary role, one or more nodes with a worker role, and one or more nodes as
standby. However, there can be only one primary node at any given time.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Node-to-node


communication

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not in


use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node


communication

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not in


use

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary, worker, and standby nodes

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

Scale-out without standby


This topology supports multiple nodes in a scale-out configuration. There's one node
with a primary role, and one or more nodes with a worker role. However, there can be
only one primary node at any given time.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Node-to-node


communication

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not in


use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node


communication

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage



D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not in


use

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary and worker nodes

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.

Scale-out with DR using storage replication


This topology supports multiple nodes in a scale-out with a DR. Both normal and
multipurpose DRs are supported. In the diagram, only the single purpose DR is shown.
You can request this topology with or without the standby node.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI

B TYPE I eth2.tenant eno3.tenant Node-to-node


communication

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not in


use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node


communication

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not in


use

Storage
The following mount points are preconfigured:

Mount point Use case

On the primary node

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

On the DR node

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.

Single node with DR using HSR


This topology supports one node in a scale-up configuration with one SID, with HANA
system replication to the DR site for a primary SID. In the diagram, only a single-SID
system is shown at the primary site, but multi-SID (MCOS) systems are supported as
well.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI/HSR

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI/HSR

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage



D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured on both HLI units (Primary and DR):


Mount point Use case

/hana/shared/SID HANA installation for SID

/hana/data/SID/mnt00001 Data files installation for SID

/hana/log/SID/mnt00001 Log files installation for SID

/hana/logbackups/SID Redo logs for SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
The primary node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.

Single node HSR to DR (cost optimized)


This topology supports one node in a scale-up configuration with one SID. HANA
system replication to the DR site is used for a primary SID. In the diagram, only a single-
SID system is shown at the primary site, but multi-SID (MCOS) systems are supported as
well. At the DR site, an HLI unit is used for the QA instance. Production operations run
from the primary site. During DR failover (or failover test), the QA instance at the DR site
is taken down.

Architecture diagram
Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI/HSR

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI/HSR

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use
Storage
The following mount points are preconfigured:


Mount point Use case

At the primary site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

At the DR site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

/hana/shared/QA-SID HANA installation for QA SID

/hana/data/QA-SID/mnt00001 Data files installation for QA SID

/hana/log/QA-SID/mnt00001 Log files installation for QA SID

/hana/logbackups/QA-SID Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “PROD
Instance at DR site”) for the production HANA instance installation at the DR HLI
unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The primary node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.

High availability and disaster recovery with HSR
This topology supports two nodes for the HANA system replication configuration for
high availability in the local region. For DR, the third node at the DR region syncs
with the primary site by using HSR (async mode).

Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI/HSR

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI/HSR

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

At the primary site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

At the DR site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID


Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD
DR instance”) for the production HANA instance installation at the DR HLI unit.
The primary site node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.

High availability and disaster recovery with HSR (cost optimized)
This topology supports two nodes for the HANA system replication configuration for
high availability in the local region. For DR, the third node at the DR region syncs
with the primary site by using HSR (async mode), while another instance (for
example, QA) already runs from the DR node.

Architecture diagram

Ethernet
The following network interfaces are preconfigured:

NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI/HSR

B TYPE I eth2.tenant eno3.tenant Configured but not


in use

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not


in use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI/HSR

B TYPE II vlan<tenantNo+2> team0.tenant+2 Configured but not


in use

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not


in use

Storage
The following mount points are preconfigured:


Mount point Use case

At the primary site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

At the DR site

/hana/shared/SID HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID



/hana/logbackups/SID Redo logs for production SID

/hana/shared/QA-SID HANA installation for QA SID

/hana/data/QA-SID/mnt00001 Data files installation for QA SID

/hana/log/QA-SID/mnt00001 Log files installation for QA SID

/hana/logbackups/QA-SID Redo logs for QA SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD
DR instance”) for the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The primary site node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.

Scale-out with DR using HSR


This topology supports multiple nodes in a scale-out with a DR. You can request this
topology with or without the standby node. The primary site node syncs with the DR site
node by using HANA system replication (async mode).

Architecture diagram


Ethernet
The following network interfaces are preconfigured:


NIC logical SKU Name with SUSE Name with RHEL Use case
interface type OS OS

A TYPE I eth0.tenant eno1.tenant Client-to-HLI/HSR

B TYPE I eth2.tenant eno3.tenant Node-to-node


communication

C TYPE I eth1.tenant eno2.tenant Node-to-storage

D TYPE I eth4.tenant eno4.tenant Configured but not in


use

A TYPE II vlan<tenantNo> team0.tenant Client-to-HLI/HSR

B TYPE II vlan<tenantNo+2> team0.tenant+2 Node-to-node


communication

C TYPE II vlan<tenantNo+1> team0.tenant+1 Node-to-storage

D TYPE II vlan<tenantNo+3> team0.tenant+3 Configured but not in


use

Storage
The following mount points are preconfigured:


Mount point Use case

On the primary node

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

On the DR node

/hana/shared HANA installation for production SID

/hana/data/SID/mnt00001 Data files installation for production SID

/hana/log/SID/mnt00001 Log files installation for production SID

/hana/logbackups/SID Redo logs for production SID

Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured for the production
HANA instance installation at the DR HLI unit.
The primary site node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.

Next steps
Learn about deploying HANA Large Instances.

SAP HANA (Large Instances) deployment


SAP HANA (Large Instances)
deployment
Article • 02/10/2023

In this article, we'll list the information you'll need to deploy SAP HANA Large Instances
(otherwise known as BareMetal Infrastructure instances). First, for background, see:

HANA Large Instances common terms


HANA Large Instances SKUs

Required information
You've purchased SAP HANA on Azure Large Instances from Microsoft and want to
deploy it. Microsoft will need the following information from you (a consolidated
example follows this list):

Customer name.
Business contact information (including email address and phone number).
Technical contact information (including email address and phone number).
Technical networking contact information (including email address and phone
number).
Azure deployment region (for example, West US, Australia East, or North Europe).
SAP HANA on Azure (large instances) SKU (configuration).
For every Azure deployment region:
A /29 IP address range for ER-P2P connections that connect Azure virtual
networks to HANA Large Instances.
A /24 CIDR Block used for the HANA Large Instances server IP pool.
Optional when using ExpressRoute Global Reach, reserve another /29 IP address
range. The added range enables direct routing from on-premises to HANA
Large Instance units. The added range also enables routing between HANA
Large Instance units in different Azure regions. This particular range can't
overlap with the IP address ranges you defined before.
The IP address range values used in the virtual network address space attribute of
every Azure virtual network that connects to the HANA Large Instances.
Data for each HANA Large Instances system:
Desired hostname, ideally with a fully qualified domain name.
Desired IP address for the HANA Large Instance unit out of the Server IP pool
address range. (The first 30 IP addresses in the server IP pool address range are
reserved for internal use within HANA Large Instances.)
SAP HANA SID name for the SAP HANA instance (required to create the
necessary SAP HANA-related disk volumes). Microsoft needs the HANA SID for
creating the permissions for sidadm on the NFS volumes. These volumes attach
to the HANA Large Instance unit. The HANA SID is also used as one of the name
components of the disk volumes that get mounted. If you want to run more
than one HANA instance on the unit, you should list multiple HANA SIDs. Each
one gets a separate set of volumes assigned.
In the Linux OS, the sidadm user has a group ID. This ID is required to create the
necessary SAP HANA-related disk volumes. The SAP HANA installation usually
creates the sapsys group, with a group ID of 1001. The sidadm user is part of
that group.
In the Linux OS, the sidadm user has a user ID. This ID is required to create the
necessary SAP HANA-related disk volumes. If you're running several HANA
instances on the unit, list all the sidadm users.
The Azure subscription ID for the Azure subscription to which SAP HANA on Azure
HANA Large Instances are going to be directly connected. This subscription ID
references the Azure subscription, which is going to be charged with the HANA
Large Instance unit or units.
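
The following sketch gathers the request data into a single object so nothing is
missed before you contact Microsoft. All values are hypothetical placeholders, and
the object shape (hliDeploymentData and its keys) is an illustration, not a required
format.

PowerShell

# Hypothetical example only: collect the request data in one place.
$hliDeploymentData = [ordered]@{
    CustomerName      = "Contoso"
    AzureRegion       = "West US"
    Sku               = "S192"                           # example HANA Large Instances SKU
    ErP2pRange        = "10.2.0.0/29"                    # /29 for ER-P2P connections
    ServerIpPoolRange = "10.1.0.0/24"                    # /24 server IP pool
    GlobalReachRange  = "10.2.0.8/29"                    # optional, only with Global Reach
    VNetAddressSpaces = @("10.0.1.0/24", "10.0.2.0/28")  # of every connecting virtual network
    HanaSystems       = @(
        @{ HostName = "hanaprd.contoso.com"
           IpAddress = "10.1.0.31"                       # the first 30 pool addresses are reserved
           Sid = "PRD"; SidadmUid = 1002; SapsysGid = 1001 }
    )
    SubscriptionId    = "00000000-0000-0000-0000-000000000000"
}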

After you provide the preceding information, Microsoft provisions SAP HANA on Azure
(Large Instances). Microsoft sends you information to link your Azure virtual networks to
HANA Large Instances. You can also access the HANA Large Instance units.

Next steps
See the following articles in sequence to connect to the HANA Large Instances after
Microsoft has deployed them:

1. Connecting Azure VMs to HANA Large Instances


2. Connecting a VNet to HANA Large Instances ExpressRoute
3. More network requirements (optional)
Connecting Azure VMs to HANA Large
Instances
Article • 02/10/2023

In this article, we'll look at what's involved in connecting your Azure VMs to HANA Large
Instances (otherwise known as BareMetal Infrastructure instances).

The article What is SAP HANA on Azure (Large Instances)? mentions that the minimal
deployment of HANA Large Instances with the SAP application layer in Azure looks like
this:
Looking closer at the Azure virtual network side, you'll need:

The definition of an Azure virtual network into which you're going to deploy the
VMs of the SAP application layer.
The definition of a default subnet in the Azure virtual network that is really the one
into which the VMs are deployed.
The Azure virtual network that's created needs to have at least one VM subnet and
one Azure ExpressRoute virtual network gateway subnet. These subnets should be
assigned the IP address ranges as specified and discussed in the following sections.

Create the Azure virtual network for HANA Large Instances

7 Note

The Azure virtual network for HANA Large Instances must be created by using the
Azure Resource Manager deployment model. The older Azure deployment model,
commonly known as the classic deployment model, isn't supported by the HANA
Large Instance solution.

You can use the Azure portal, PowerShell, an Azure template, or the Azure CLI to create
the virtual network. (For more information, see Create a virtual network using the Azure
portal). In the following example, we look at a virtual network that's created by using the
Azure portal.

In this documentation, address space refers to the address space that the Azure virtual
network is allowed to use. This address space is also the address range that the virtual
network uses for BGP route propagation. This address space can be seen here:

In the previous example, with 10.16.0.0/16, the Azure virtual network was given a rather
large and wide IP address range to use. All the IP address ranges of subsequent subnets
within this virtual network can have their ranges within that address space. We don't
usually recommend such a large address range for a single virtual network in Azure. But
let's look into the subnets defined in the Azure virtual network:

You see a virtual network with a first VM subnet (here called "default") and a subnet
called "GatewaySubnet".

In the two previous images, the virtual network address space covers both the subnet
IP address range of the Azure VM and that of the virtual network gateway.

You can restrict the virtual network address space to the specific ranges used by each
subnet. You can also define the virtual network address space of a virtual network as
multiple specific ranges, as shown here:

In this case, the virtual network address space has two spaces defined. They're the
same as the IP address ranges defined for the subnet IP address range of the Azure VM
and the virtual network gateway.

You can use any naming standard you like for these tenant subnets (VM subnets).
However, there must always be one, and only one, gateway subnet for each virtual
network that connects to the SAP HANA on Azure (Large Instances) ExpressRoute
circuit. This gateway subnet has to be named "GatewaySubnet" to make sure the
ExpressRoute gateway is properly placed.

2 Warning
It's critical that the gateway subnet always be named "GatewaySubnet".

You can use multiple VM subnets and non-contiguous address ranges. These address
ranges must be covered by the virtual network address space of the virtual network.
They can be in an aggregated form. They can also be in a list of the exact ranges of the
VM subnets and the gateway subnet.
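
To make these shapes concrete, here's a minimal sketch of creating such a virtual
network with the Az PowerShell module. The names and address ranges are
hypothetical; substitute your own planned ranges.

PowerShell

# A minimal sketch (hypothetical names and ranges): one VM subnet plus the
# required "GatewaySubnet", with the address space listing both subnet ranges
# instead of one aggregated range.
$vmSubnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.1.0/24"
$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.2.0/28"

New-AzVirtualNetwork -Name "VNet01" -ResourceGroupName "SAP-East-Coast" -Location "eastus" `
    -AddressPrefix "10.0.1.0/24","10.0.2.0/28" -Subnet $vmSubnet,$gwSubnet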

The following list summarizes important facts about Azure virtual networks that connect
to HANA Large Instances:

You must submit the virtual network address space to Microsoft when you're
initially deploying HANA Large Instances.
The virtual network address space can be one larger range that covers the ranges
for both the subnet IP address range of the Azure VM and the virtual network
gateway.
Or you can submit multiple ranges that cover the different IP address ranges of
VM subnet IP address range(s) and the virtual network gateway IP address range.
The defined virtual network address space is used for BGP routing propagation.
The name of the gateway subnet must be: "GatewaySubnet".
The address space is used as a filter on the HANA Large Instance side to allow or
disallow traffic to the HANA Large Instance units from Azure. The BGP routing
information of the Azure virtual network and the IP address ranges configured for
filtering on the HANA Large Instance side should match. Otherwise, connectivity
issues can occur.
There are further important details about the gateway subnet. For more
information, see Connect a virtual network to HANA large instances.

Different IP address ranges to be defined


Some of the IP address ranges necessary for deploying HANA Large Instances have
already been introduced. There are other important IP address ranges as well. Not all of
the following IP address ranges need to be submitted to Microsoft. However, you do
need to define them before sending a request for initial deployment:

Virtual network address space: The virtual network address space is the IP
address ranges that you assign to your address space parameter in the Azure
virtual networks. These networks connect to the SAP HANA Large Instance
environment. We recommend that this address space parameter is a multi-line
value. It should consist of the subnet range of the Azure VM and the subnet
range(s) of the Azure gateway.
This subnet range was shown in the previous graphics. It must NOT overlap with
your on-premises or server IP pool or ER-P2P address ranges. How do you get
these IP address range(s)? Your corporate network team or service provider should
provide one or more IP address range(s) that aren't used inside your network. For
example, the subnet of your Azure VM is 10.0.1.0/24, and the subnet of your Azure
gateway subnet is 10.0.2.0/28. We recommend that your Azure virtual network
address space is defined as: 10.0.1.0/24 and 10.0.2.0/28. Although the address
space values can be aggregated, we recommend matching them to the subnet
ranges. This way you can avoid accidentally reusing IP address ranges within larger
address spaces elsewhere in your network. The virtual network address space is
an IP address range. It needs to be submitted to Microsoft when you ask for an
initial deployment.

Azure VM subnet IP address range: This IP address range is the one you assign to
the Azure virtual network subnet parameter. This parameter is in your Azure virtual
network and connects to the SAP HANA Large Instance environment. This IP
address range is used to assign IP addresses to your Azure VMs. The IP addresses
out of this range are allowed to connect to your SAP HANA Large Instance
server(s). If needed, you can use multiple Azure VM subnets. We recommend a /24
CIDR block for each Azure VM subnet. This address range must be a part of the
values used in the Azure virtual network address space. How do you get this IP
address range? Your corporate network team or service provider should provide an
IP address range that isn't being used inside your network.

Virtual network gateway subnet IP address range: Depending on the features that
you plan to use, the recommended size is:

Ultra-performance ExpressRoute gateway: /26 address block (required for the
Type II class of SKUs).
Coexistence with VPN and ExpressRoute using a high-performance
ExpressRoute virtual network gateway (or smaller): /27 address block.
All other situations: /28 address block.

This address range must be a part of the values used in the Azure virtual
network address space that you submit to Microsoft. How do you get this IP
address range? Your corporate network team or service provider should provide
an IP address range that's not currently being used inside your network.

Address range for ER-P2P connectivity: This range is the IP range for your SAP
HANA Large Instance ExpressRoute (ER) P2P connection. This range of IP addresses
must be a /29 CIDR IP address range. This range must NOT overlap with your on-
premises or other Azure IP address ranges. This IP address range is used to set up
the ER connectivity from your ExpressRoute virtual gateway to the SAP HANA
Large Instance servers. How do you get this IP address range? Your corporate
network team or service provider should provide an IP address range that's not
currently being used inside your network. This range is an IP address range. It
needs to be submitted to Microsoft when you ask for an initial deployment.

Server IP pool address range: This IP address range is used to assign the
individual IP address to HANA Large Instance servers. The recommended subnet
size is a /24 CIDR block. If needed, it can be smaller, with as few as 64 IP addresses.
From this range, the first 30 IP addresses are reserved for use by Microsoft. Make
sure that you account for this when you choose the size of the range. This range
must NOT overlap with your on-premises or other Azure IP addresses. How do you
get this IP address range? Your corporate network team or service provider should
provide an IP address range that's not currently being used inside your network.
This range is an IP address range, which needs to be submitted to Microsoft
when asking for an initial deployment.

Optional IP address ranges to eventually submit to Microsoft:

If you choose to use ExpressRoute Global Reach to enable direct routing from on-
premises to HANA Large Instance units, you need to reserve another /29 IP
address range. This range may not overlap with any of the other IP addresses
ranges you defined before.
If you choose to use ExpressRoute Global Reach to enable direct routing from a
HANA Large Instance tenant in one Azure region to another HANA Large Instance
tenant in another Azure region, you need to reserve another /29 IP address range.
This range may not overlap with the other IP address ranges you defined before.

For more information about ExpressRoute Global Reach and usage around HANA large
instances, see:

SAP HANA (Large Instances) network architecture


Connect a virtual network to HANA large instances

You need to define and plan the IP address ranges previously described. However, you
don't need to transmit all of them to Microsoft. The IP address ranges that you're
required to name to Microsoft are:

Azure virtual network address space(s)
Address range for ER-P2P connectivity
Server IP pool address range

If you add more virtual networks that need to connect to HANA Large Instances, submit
the new Azure virtual network address space you're adding to Microsoft.
The following example shows the different ranges and some example ranges you need
to configure and eventually provide to Microsoft. The value for the Azure virtual network
address space isn't aggregated in the first example. However, it's defined from the
ranges of the first Azure VM subnet IP address range and the virtual network gateway
subnet IP address range.

You can use multiple VM subnets within the Azure virtual network when you configure
and submit the other IP address ranges of the added VM subnet(s) as part of the Azure
virtual network address space.

The preceding image doesn't show the added IP address range(s) required for the
optional use of ExpressRoute Global Reach.

You can also aggregate the data that you submit to Microsoft. In that case, the address
space of the Azure virtual network only includes one space. Using the IP address ranges
from the earlier example, the aggregated virtual network address space could look like
the following image:

In this example, instead of two smaller ranges that defined the address space of the
Azure virtual network, we have one larger range that covers 4096 IP addresses. Such a
large definition of the address space leaves some rather large ranges unused. Since the
virtual network address space value(s) are used for BGP route propagation, using the
unused ranges on-premises or elsewhere in your network can cause routing issues. The
preceding graphic doesn't show the added IP address range(s) required for the optional
use of ExpressRoute Global Reach.

We recommend that you keep the address space tightly aligned with the actual subnet
address space that you use. If needed, you can always add new address space values
later without incurring downtime on the virtual network.

) Important

Each IP address range in ER-P2P, the server IP pool, and the Azure virtual network
address space must NOT overlap with one another or with any other range that's
used in your network. Each must be discrete. As the two previous graphics show,
they also can't be a subnet of any other range. If overlaps occur between ranges,
the Azure virtual network might not connect to the ExpressRoute circuit.
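
Because overlaps are easy to miss on paper, a small helper can sanity-check your
planned ranges before you submit them. This is a hedged, self-contained sketch
(the function names Get-CidrRange and Test-CidrOverlap are made up for this
example), not an official validation tool.

PowerShell

# Check whether two IPv4 CIDR ranges overlap (illustrative helper, IPv4 only).
function Get-CidrRange([string]$Cidr) {
    $ip, $prefix = $Cidr.Split('/')
    $bytes = ([System.Net.IPAddress]::Parse($ip)).GetAddressBytes()
    [Array]::Reverse($bytes)                                # network order -> host order
    $addr  = [uint64][System.BitConverter]::ToUInt32($bytes, 0)
    $block = [uint64][math]::Pow(2, 32 - [int]$prefix)      # number of addresses in the block
    $net   = $addr - ($addr % $block)                       # align to the network boundary
    [pscustomobject]@{ Start = $net; End = $net + $block - 1 }
}

function Test-CidrOverlap([string]$CidrA, [string]$CidrB) {
    $a = Get-CidrRange $CidrA
    $b = Get-CidrRange $CidrB
    ($a.Start -le $b.End) -and ($b.Start -le $a.End)
}

Test-CidrOverlap '10.2.0.0/29' '10.1.0.0/24'    # False: discrete ranges, OK
Test-CidrOverlap '10.0.1.0/24' '10.0.1.128/29'  # True: overlap, not allowed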

Next steps after address ranges have been defined
After the IP address ranges have been defined, the following things need to happen:

1. Submit the IP address ranges for the Azure virtual network address space, the ER-
P2P connectivity, and server IP pool address range, together with other data that
has been listed at the beginning of the document. At this point, you could also
start to create the virtual network and the VM subnets.
2. An ExpressRoute circuit is created by Microsoft between your Azure subscription
and the HANA Large Instance stamp.
3. A tenant network is created on the Large Instance stamp by Microsoft.
4. Microsoft configures networking in the SAP HANA on Azure (Large Instances)
infrastructure to accept IP addresses from your Azure virtual network address
space that communicates with HANA Large Instances.
5. Depending on the specific SAP HANA on Azure (Large Instances) SKU that you
bought, Microsoft assigns a compute unit in a tenant network. It also allocates and
mounts storage, and installs the operating system (SUSE or Red Hat Linux). IP
addresses for these units are taken out of the Server IP Pool address range you
submitted to Microsoft.

At the end of the deployment process, Microsoft delivers the following data to you:
Information that's needed to connect your Azure virtual network(s) to the
ExpressRoute circuit that connects Azure virtual networks to HANA Large Instances:
Authorization key(s)
ExpressRoute PeerID
Data for accessing HANA Large Instances after you establish the ExpressRoute
circuit and Azure virtual network.

You can also find the sequence of connecting HANA Large Instances in the document
SAP HANA on Azure (Large Instances) Setup . Many of the steps are shown in an
example deployment in that document.

Next steps
Learn about connecting a virtual network to HANA Large Instance ExpressRoute.

Connect a virtual network to HANA large instances


Connect a virtual network to HANA
Large Instances
Article • 02/10/2023

You've created an Azure virtual network. You can now connect that network to SAP
HANA Large Instances (otherwise known as BareMetal Infrastructure instances). In this
article, we'll look at the steps you'll need to take.

Create an Azure ExpressRoute gateway on the virtual network
First, create an Azure ExpressRoute gateway on your virtual network. This gateway allows
you to link the virtual network to the ExpressRoute circuit that connects to your tenant
on the HANA Large Instance stamp.

7 Note

This step can take up to 30 minutes to complete. You create the new gateway in the
designated Azure subscription and then connect it to the specified Azure virtual
network.

7 Note

We recommend that you use the Azure Az PowerShell module to interact with
Azure. See Install Azure PowerShell to get started. To learn how to migrate to the
Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

If a gateway already exists, check whether it's an ExpressRoute gateway. If it isn't an
ExpressRoute gateway, delete the gateway and recreate it as an ExpressRoute
gateway. If an ExpressRoute gateway is already established, skip to the following
section of this article, Link virtual networks.
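
A quick way to perform that check, assuming the gateway and resource group
names used in the examples below:

PowerShell

# Inspect an existing gateway's type before deciding whether to recreate it.
$gw = Get-AzVirtualNetworkGateway -Name "VNet01GW" -ResourceGroupName "SAP-East-Coast"
$gw.GatewayType   # Must be "ExpressRoute"; a "Vpn" gateway must be deleted and recreated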

Use either the Azure portal or PowerShell to create an ExpressRoute VPN gateway
connected to your virtual network.

If you use the Azure portal, add a new Virtual Network Gateway, and then
select ExpressRoute as the gateway type.
If you use PowerShell, first download and use the latest Azure PowerShell
SDK .

The following commands create an ExpressRoute gateway. The values preceded by a
$ are user-defined variables that should be updated with your specific information.

PowerShell

# These values should already exist; update them to match your environment
$myAzureRegion = "eastus"
$myGroupName = "SAP-East-Coast"
$myVNetName = "VNet01"

# These values are used to create the gateway; update them to name the gateway components
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "UltraPerformance" # The only supported value for HANA Large Instances is UltraPerformance

# These commands create the public IP and the ExpressRoute gateway
$vnet = Get-AzVirtualNetwork -Name $myVNetName -ResourceGroupName $myGroupName
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
New-AzPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName `
    -Location $myAzureRegion -AllocationMethod Dynamic
$gwpip = Get-AzPublicIpAddress -Name $myGWPIPName -ResourceGroupName $myGroupName
$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name $myGWConfig -SubnetId $subnet.Id `
    -PublicIpAddressId $gwpip.Id

New-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName -Location $myAzureRegion `
    -IpConfigurations $gwipconfig -GatewayType ExpressRoute `
    -GatewaySku $myGWSku -VpnType PolicyBased -EnableBgp $true

The only supported gateway SKU for SAP HANA on Azure (Large Instances) is
UltraPerformance.

Link virtual networks


The Azure virtual network now has an ExpressRoute gateway. Use the authorization
information provided by Microsoft to connect the ExpressRoute gateway to the SAP
HANA Large Instances ExpressRoute circuit. You can connect by using the Azure portal
or PowerShell. The PowerShell instructions are as follows.

Run the following commands for each ExpressRoute gateway by using a different
AuthGUID for each connection. The first two entries shown in the following script come
from the information provided by Microsoft. Also, the AuthGUID is specific to every
virtual network and its gateway. If you want to add another Azure virtual network, you
need to get another AuthGUID from Microsoft for the ExpressRoute circuit that connects
HANA Large Instances into Azure.

PowerShell

# Populate with information provided by the Microsoft onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"

# Your ExpressRoute gateway information
$myGroupName = "SAP-East-Coast"
$myGWName = "VNet01GW"
$myGWLocation = "East US"

# Define the name for your connection
$myConnectionName = "VNet01GWConnection"

# Create a new connection between the ER circuit and your gateway using the authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName

New-AzVirtualNetworkGatewayConnection -Name $myConnectionName `
    -ResourceGroupName $myGroupName -Location $myGWLocation -VirtualNetworkGateway1 $gw `
    -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass

7 Note

The last parameter in the command New-AzVirtualNetworkGatewayConnection,
ExpressRouteGatewayBypass, enables ExpressRoute FastPath. This functionality
was added in May 2019 and reduces network latency between your HANA Large
Instance units and Azure VMs. For more information, see SAP HANA (Large
Instances) network architecture. Make sure you're running the latest version of
the PowerShell cmdlets before running the commands.

You may need to connect the gateway to more than one ExpressRoute circuit associated
with your subscription. In that case, you'll need to run this step more than once. For
example, you're likely to connect the same virtual network gateway to the ExpressRoute
circuit that connects the virtual network to your on-premises network.
Applying ExpressRoute FastPath to existing
HANA Large Instance ExpressRoute circuits
You've seen how to connect a new ExpressRoute circuit created with a HANA Large
Instance deployment to an Azure ExpressRoute gateway on one of your Azure virtual
networks. But what if you already have your ExpressRoute circuits set up, and your
virtual networks are already connected to HANA Large Instances?

The new ExpressRoute FastPath reduces network latency. We recommend you apply the
change to take advantage of this reduced latency. The commands to connect a new
ExpressRoute circuit are the same as those used to change an existing ExpressRoute
circuit. So run the following sequence of PowerShell commands to change an existing circuit.

PowerShell

# Populate with information provided by the Microsoft onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"

# Your ExpressRoute gateway information
$myGroupName = "SAP-East-Coast"
$myGWName = "VNet01GW"
$myGWLocation = "East US"

# Define the name for your connection
$myConnectionName = "VNet01GWConnection"

# Create a new connection between the ER circuit and your gateway using the authorization
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName

New-AzVirtualNetworkGatewayConnection -Name $myConnectionName `
    -ResourceGroupName $myGroupName -Location $myGWLocation -VirtualNetworkGateway1 $gw `
    -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass

It's important that you add the last parameter as displayed above to enable the
ExpressRoute FastPath functionality.
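
If you want to confirm the change took effect, one option (a hedged sketch using
the connection name from the example above) is to read the connection back and
inspect its ExpressRouteGatewayBypass property:

PowerShell

# Verify that FastPath is enabled on the existing connection.
$conn = Get-AzVirtualNetworkGatewayConnection -Name "VNet01GWConnection" -ResourceGroupName "SAP-East-Coast"
$conn.ExpressRouteGatewayBypass   # Expect True when FastPath is active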

ExpressRoute Global Reach


Enable Global Reach for either of the following scenarios:
HANA system replication without any added proxies or firewalls.
Copying backups between HANA Large Instance units in two different regions to
make system copies or for system refreshes.

To enable Global Reach:

Provide an address space range of a /29 address space. That address range may
not overlap with any of the other address space ranges you used so far connecting
HANA Large Instances to Azure. The address range should also not overlap with
any of the IP address ranges you used elsewhere in Azure or on-premises.
There's a limitation on the autonomous system numbers (ASNs) that can be used
to advertise your on-premises routes to HANA Large Instances. Your on-premises
network mustn't advertise any routes with private ASNs in the range of 65000 -
65020 or 65515.
When you connect on-premises directly to HANA Large Instances, account for the
fee for the circuit that connects you to Azure. For more information, check the
pricing for Global Reach Add-On .

To have one or both of the scenarios applied to your deployment, open a support
message with Azure as described in Open a support request for HANA Large Instances.

The data and keywords you'll need to use for Microsoft to route and execute your
request are as follows:

Service: SAP HANA Large Instance


Problem type: Configuration and Setup
Problem subtype: My problem isn't listed above.
Subject: "Modify my network - add Global Reach"
Details: "Add Global Reach to HANA Large Instance to HANA Large Instance
tenant." or "Add Global Reach to on-premises to HANA Large Instance tenant."
Additional details for the HANA Large Instance to HANA Large Instance tenant
case: You need to define the two Azure regions where the two tenants to connect
are located, AND you need to submit the /29 IP address range.
Additional details for the on-premises to HANA Large Instance tenant case:
Define the Azure Region where the HANA Large Instance tenant is deployed
that you want to directly connect to.
Provide the Auth GUID and Circuit Peer ID you received when you established
your ExpressRoute circuit between on-premises and Azure.
Name your ASN.
Provide a /29 IP address range for ExpressRoute Global Reach.

7 Note
If you want to have both cases handled, you need to supply two different /29 IP
address ranges that don't overlap with any other IP address range used so far.

Next steps
Learn about other network requirements you may have to deploy SAP HANA Large
Instances on Azure.

Additional network requirements for Large Instances


Other network requirements for Large
Instances
Article • 02/10/2023

In this article, we'll look at other network requirements you may have when deploying
SAP HANA Large Instances on Azure.

Prerequisites
This article assumes you've completed the steps in:

Connecting Azure VMs to HANA Large Instances


Connect a virtual network to HANA Large Instances

Add more IP addresses or subnets


You may find you need to add more IP addresses or subnets. Use either the Azure
portal, PowerShell, or the Azure CLI when you add more IP addresses or subnets.

Add the new IP address range as a new range to the virtual network address space.
Don't generate a new aggregated range. Submit this change to Microsoft. This way you
can connect from that new IP address range to the HANA Large Instances in your tenant.
You can open an Azure support request to get the new virtual network address space
added. Once you receive confirmation, do the steps discussed in Connecting Azure VMs
to HANA Large Instances.

To create another subnet from the Azure portal, see Create a virtual network using the
Azure portal. To create one from PowerShell, see Create a virtual network using
PowerShell.
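
As a concrete illustration, the following hedged PowerShell sketch (hypothetical
names and range) adds a new address range and subnet to an existing virtual
network. Remember to submit the new range to Microsoft afterward.

PowerShell

# Add a new, non-aggregated address range and a subnet to an existing VNet.
$vnet = Get-AzVirtualNetwork -Name "VNet01" -ResourceGroupName "SAP-East-Coast"
$vnet.AddressSpace.AddressPrefixes.Add("10.0.3.0/24")   # add as a separate range
Add-AzVirtualNetworkSubnetConfig -Name "subnet02" -AddressPrefix "10.0.3.0/24" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork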

Add virtual networks


After initially connecting one or more Azure virtual networks, you might want to connect
more virtual networks that access SAP HANA on Azure (Large Instances). First, submit an
Azure support request. In that request, include the specific information identifying the
particular Azure deployment. Also include the IP address space range or ranges of the
Azure virtual network address space. SAP HANA on Microsoft Service Management then
provides the necessary information you need to connect the added virtual networks and
Azure ExpressRoute. For every virtual network, you need a unique authorization key to
connect to the ExpressRoute circuit to HANA Large Instances.

Increase ExpressRoute circuit bandwidth


Consult with SAP HANA on Microsoft Service Management. If they advise you to
increase the bandwidth of the SAP HANA on Azure (Large Instances) ExpressRoute
circuit, create an Azure support request. (You can request an increase for a single circuit
bandwidth up to a maximum of 10 Gbps.) You then receive notification after the
operation is complete; you don't need to do anything else to enable this higher speed in
Azure.

Add another ExpressRoute circuit


Consult with SAP HANA on Microsoft Service Management. If they advise you to add
another ExpressRoute circuit, create an Azure support request (including a request to
get authorization information to connect to the new circuit). Before making the request,
you must define the address space used on the virtual networks. SAP HANA on
Microsoft Service Management can then provide authorization.

When the new circuit is created, and the SAP HANA on Microsoft Service Management
configuration is complete, you'll receive notification with the information you need to
continue. You can't connect Azure virtual networks to this added circuit if they're already
connected to another SAP HANA on Azure (Large Instance) ExpressRoute circuit in the
same Azure region.

Delete a subnet
To remove a virtual network subnet, you can use the Azure portal, PowerShell, or the
Azure CLI. If your Azure virtual network IP address range or address space was an
aggregated range, you don't have to take any action with Microsoft. (The virtual network
is still propagating the BGP route address space that includes the deleted subnet.)

You might have defined the Azure virtual network address range or address space as
multiple IP address ranges. One of these ranges could have been assigned to your
deleted subnet. Be sure to delete that from your virtual network address space. Then
inform SAP HANA on Microsoft Service Management to remove it from the ranges that
SAP HANA on Azure (Large Instances) is allowed to communicate with.

For more information, see Delete a subnet.


Delete a virtual network
For information, see Delete a virtual network.

SAP HANA on Microsoft Service Management removes the existing authorizations on
the SAP HANA on Azure (Large Instances) ExpressRoute circuit. It also removes the
Azure virtual network IP address range or address space for the communication with
HANA Large Instances.

After you remove the virtual network, open an Azure support request to provide the IP
address space range or ranges to be removed.

Be sure you remove everything. As shown in the sketch after this list, delete the:

ExpressRoute connection
Virtual network gateway
Virtual network gateway public IP
Virtual network
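
A minimal cleanup sketch in dependency order, assuming the resource names from
the earlier examples:

PowerShell

# Delete the connection first, then the gateway, its public IP, and the VNet.
$rgName = "SAP-East-Coast"
Remove-AzVirtualNetworkGatewayConnection -Name "VNet01GWConnection" -ResourceGroupName $rgName -Force
Remove-AzVirtualNetworkGateway -Name "VNet01GW" -ResourceGroupName $rgName -Force
Remove-AzPublicIpAddress -Name "VNet01GWPIP" -ResourceGroupName $rgName -Force
Remove-AzVirtualNetwork -Name "VNet01" -ResourceGroupName $rgName -Force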

Delete an ExpressRoute circuit


To remove an extra SAP HANA on Azure (Large Instances) ExpressRoute circuit, open an
Azure support request with SAP HANA on Microsoft Service Management. Request that
they delete the circuit. Within the Azure subscription, you may delete or keep the virtual
network as needed. However, you must delete the connection between the HANA Large
Instances ExpressRoute circuit and the linked virtual network gateway.

Next steps
Learn how to install and configure SAP HANA (Large Instances) on Azure.

Install and configure SAP HANA (Large Instances)


SAP HANA Large Instances high
availability and disaster recovery on
Azure
Article • 02/10/2023

) Important

This documentation doesn't replace the SAP HANA administration documentation
or SAP Notes. We expect you have expertise in SAP HANA administration and
operations, especially with the topics of backup, restore, high availability, and
disaster recovery.

In this article, we'll give an overview of high availability (HA) and disaster recovery (DR)
of SAP HANA on Azure Large Instances (otherwise known as BareMetal Infrastructure).
We'll also detail some of the requirements and considerations related to HA and DR.

Some of the processes described in this documentation are simplified. They aren't
intended as detailed steps to be included in operation handbooks. To create operation
handbooks for your configurations, run and test your processes with your specific HANA
versions and releases. You can then document the processes specific to your
configurations.

HA and DR
High availability and disaster recovery are crucial aspects of running your mission-critical
SAP HANA on the Azure (Large Instances) server. It's important to work with SAP, your
system integrator, or Microsoft to properly architect and implement the right high
availability and disaster recovery strategies. Also consider the recovery point objective
(RPO) and recovery time objective (RTO), which are specific to your environment.

Microsoft supports some SAP HANA high-availability capabilities with HANA Large
Instances. These capabilities include:

Storage replication: The storage system's ability to replicate all data to another
HANA Large Instance stamp in another Azure region. SAP HANA operates
independently of this method. This functionality is the default disaster recovery
mechanism offered for HANA Large Instances.
HANA system replication: The replication of all data in SAP HANA to a separate
SAP HANA system. The RTO is minimized through data replication at regular
intervals. SAP HANA supports asynchronous, synchronous in-memory, and
synchronous modes. Synchronous mode is used only for SAP HANA systems within
the same datacenter or less than 100 km apart. With the current design of HANA
Large Instance stamps, HANA system replication can be used for high availability
within one region only. HANA system replication requires a third-party reverse
proxy or routing component for disaster recovery configurations into another
Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA that's an
alternative to HANA system replication. If the primary node becomes unavailable,
you configure one or more standby SAP HANA nodes in scale-out mode, and SAP
HANA automatically fails over to a standby node.

SAP HANA on Azure (Large Instances) is offered in two Azure regions in four
geopolitical areas: US, Australia, Europe, and Japan. Two regions within a geopolitical
area that host HANA Large Instance (HLI) stamps are connected to separate dedicated
network circuits. These HLIs are used for replicating storage snapshots to provide
disaster recovery methods. Replication isn't set up by default but only for customers
who order disaster recovery functionality. Storage replication is dependent on the usage
of storage snapshots for HANA Large Instances. You can't choose an Azure region as a
DR region that's in a different geopolitical area.

Currently supported options


The following table shows the currently supported high availability and disaster recovery
methods and combinations:

| Scenario supported in HANA Large Instances | High availability option | Disaster recovery option | Comments |
| --- | --- | --- | --- |
| Single node | Not available. | Dedicated DR setup. Multipurpose DR setup. | |
| Host automatic failover: Scale-out (with or without standby), including 1+1 | Possible with the standby taking the active role. HANA controls the role switch. | Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. | HANA volume sets are attached to all the nodes. DR site must have the same number of nodes. |
| HANA system replication | Possible with primary or secondary setup. Secondary moves to primary role in a failover case. HANA system replication and OS control failover. | Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. DR by using HANA system replication isn't yet possible without third-party components. | Separate sets of disk volumes are attached to each node. Only disk volumes of the secondary replica in the production site get replicated to the DR location. One set of volumes is required at the DR site. |

A dedicated DR setup is where the HANA Large Instance unit in the DR site isn't used for
running any other workload or non-production system. The unit is passive and is
deployed only if a disaster failover is executed. This setup isn't the preferred option for
most customers.

To learn about storage layout and ethernet details for your architecture, see HLI
supported scenarios.

7 Note

Before HANA 2.0 SPS4, taking database snapshots of multitenant database
container databases (more than one tenant) wasn't supported. With SPS4 and
newer, SAP fully supports this snapshot feature.

A multipurpose DR setup is where the HANA Large Instance unit on the DR site runs a
non-production workload. If there's a disaster, shut down the non-production system,
mount the storage-replicated (added) volume sets, and start the production HANA
instance. Most customers who use the HANA Large Instance disaster recovery
functionality use this configuration.

You can find more information on SAP HANA high availability in the following SAP
articles:

SAP HANA High Availability Whitepaper


SAP HANA Administration Guide
SAP HANA Academy Video on SAP HANA System Replication
SAP Support Note #1999880 – FAQ on SAP HANA System Replication
SAP Support Note #2165547 – SAP HANA Back up and Restore within SAP HANA
System Replication Environment
SAP Support Note #1984882 – Using SAP HANA System Replication for Hardware
Exchange with Minimum/Zero Downtime

Network considerations for disaster recovery with HANA Large Instances
To take advantage of the disaster recovery functionality of HANA Large Instances, you
need to design network connectivity to the two Azure regions. You need an Azure
ExpressRoute circuit connection from on-premises in your main Azure region, and
another circuit connection from on-premises to your disaster recovery region. This
measure covers a situation in which there's a problem in an Azure region, including a
Microsoft Enterprise Edge Router (MSEE) location.

You can also connect all Azure virtual networks that connect to SAP HANA on Azure
(Large Instances) in one region to an ExpressRoute circuit that connects HANA Large
Instances in the other region. With this cross connect, services running on an Azure
virtual network in Region 1 can connect to HANA Large Instance units in Region 2, and
the other way around. This measure addresses a case in which only one of the MSEE
locations that connects to your on-premises location with Azure goes offline.

The following graphic illustrates a resilient configuration for disaster recovery cases:

Other requirements with HANA Large Instances storage replication for disaster recovery
Order SAP HANA on Azure (Large Instances) SKUs of the same size as your
production SKUs and deploy them in the disaster recovery region. In current
customer deployments, these instances are used to run non-production HANA
instances. These configurations are referred to as multipurpose DR setups.
Order more storage on the DR site for each of your SAP HANA on Azure (Large
Instances) SKUs that you want to recover in the disaster recovery site. Buying more
storage lets you allocate the storage volumes. You can allocate the volumes that
are the target of the storage replication from your production Azure region into
the disaster recovery Azure region.
You may have SAP HANA system replication set up on primary and storage-based
replication to the DR site. Then you must purchase more storage at the DR site so
the data of both primary and secondary nodes gets replicated to the DR site.
Next steps
Learn about Backup and restore of SAP HANA on HANA Large Instances.

Backup and restore of SAP HANA on HANA Large Instances


Backup and restore of SAP HANA on
HANA Large Instances
Article • 02/10/2023

) Important

This article doesn't replace the SAP HANA administration documentation or SAP
Notes. We expect you have expertise in SAP HANA administration and operations,
especially with the topics of backup, restore, high availability, and disaster recovery.
In this article, screenshots from SAP HANA Studio are shown. Content, structure,
and the nature of the screens of SAP administration tools and the tools themselves
might change from SAP HANA release to release.

In this article, we'll walk through the steps of backing up and restoring SAP HANA on
HANA Large Instances (otherwise known as BareMetal Infrastructure).

Some of the processes described in this article are simplified. They aren't intended as
detailed steps to be included in operation handbooks. To create operation handbooks
for your configurations, run and test your processes with your specific HANA versions
and releases. You can then document the processes for your configurations.

One of the most important aspects of operating databases is to protect them from
catastrophic events. Such events may be caused by anything from natural disasters to
simple user errors. Backing up a database, with the ability to restore it to any point in
time, such as before someone deleted critical data, offers critical protection. You can
restore your database to a state that's as close as possible to the way it was prior to the
disruption.

Two types of backups must be performed to achieve the capability to restore:

Database backups: Full, incremental, or differential backups


Transaction log backups

You can do full-database backups at an application level or do backups with storage
snapshots. Storage snapshots don't replace transaction log backups. Transaction log
backups remain important to restore the database to a certain point in time or to empty
the logs from already committed transactions. Storage snapshots can accelerate
recovery by quickly providing a roll-forward image of the database.

SAP HANA on Azure (Large Instances) offers two backup and restore options:
You can use a third-party data protection tool to create backups. This tool must
be able either to create application-consistent snapshots or to use the Backint
interface to stream backups with multiple sessions to a proper backup location.
There are several supported tools available. Discuss and design the choice of tool
with the project team to meet your backup window requirements. It's also
important to test the backup and restore procedure during the project phase.
You can use storage snapshot backups with a utility provided by Microsoft, as
described in the next section.

7 Note

Before HANA 2.0 SPS4, taking database snapshots of multitenant database
container databases (more than one tenant) wasn't supported. With SPS4 and
newer, SAP fully supports this snapshot feature.

Use storage snapshots of SAP HANA on Azure (Large Instances)
The storage infrastructure underlying SAP HANA on Azure (Large Instances) supports
storage snapshots of volumes. Both backup and restoration of volumes is supported,
with the following considerations:

Instead of full database backups, storage volume snapshots are taken on a
frequent basis.
Before a storage snapshot is triggered over the /hana/data volume(s), the snapshot
tool (azacsnap) starts an SAP HANA snapshot. This SAP HANA snapshot is the
consistency point for eventual log restorations after recovery of the storage
snapshot.
For a HANA snapshot to be successful, you need an active HANA instance. In a
scenario with HANA System Replication (HSR), a storage snapshot isn't supported
on a current secondary node, where a HANA snapshot can't be performed.
After the storage snapshot runs successfully, the SAP HANA snapshot is deleted.
Other volumes, like /hana/shared (including /usr/sap), can be snapshotted anytime
without any database interaction.

Transaction log backups are taken frequently and stored in the /hana/logbackups
volume or in Azure. You can trigger a separate snapshot of the /hana/logbackups
volume that contains the transaction log backups. In that case, you don't need to
run a HANA data snapshot. All files in /hana/logbackups are consistent because
they're "offline", so you can also back them up anytime to a different backup location
to archive them. If you must restore a database to a certain point in time, for example
after a production outage, the azacsnap tool can either clone any data snapshot to a
new volume to recover the database (the preferred restore method) or restore a
snapshot to the same data volume where the database is located.

7 Note

If you revert an older snapshot (snap revert) to the original data volume, all
snapshots created after it are deleted. The storage system does this because the
data points of the newer snapshots become invalid. Always revert only the latest
snapshot, or better, clone the snapshot to a new volume; the clone process
doesn't delete anything.

Storage snapshot considerations

7 Note

Storage snapshots consume storage space that's allocated to the HANA Large
Instance units. Consider the following aspects of scheduling storage snapshots and
how many storage snapshots to keep.

The specific mechanics of storage snapshots for SAP HANA on Azure (Large Instances)
include:

A specific storage snapshot at the point in time when it's taken consumes little
storage.
As data content changes and the content in SAP HANA data files change on the
storage volume, the snapshot needs to store the original block content and the
data changes.
As a result, the storage snapshot increases in size. The longer the snapshot exists,
the larger the storage snapshot becomes.
The more changes made to the SAP HANA database volume over the lifetime of a
storage snapshot, the larger the space consumption of the storage snapshot.

SAP HANA on Azure (Large Instances) comes with fixed volume sizes for the SAP HANA
data and log volumes. Taking snapshots of those volumes eats into your volume space.
You need to:

Determine when to schedule storage snapshots.


Monitor the space consumption of the storage volumes.
Manage the number of snapshots that you store.

You can disable the storage snapshots when you either import masses of data or make
other significant changes to the HANA database.

The following sections provide information for taking these snapshots and include
general recommendations:

Although the hardware can sustain 255 snapshots per volume, you want to stay
well below this number. The recommendation is 250 or fewer.
Before you do storage snapshots, monitor and keep track of free space.
Lower the number of storage snapshots based on free space. You can lower the
number of snapshots that you keep, or you can extend the volumes. You can order
more storage in 1-terabyte units.
During activities such as moving data into SAP HANA with SAP platform migration
tools (R3load) or restoring SAP HANA databases from backups, disable storage
snapshots on the /hana/data volume.
During larger reorganizations of SAP HANA tables, avoid storage snapshots if
possible.
Storage snapshots are a prerequisite to take advantage of the DR capabilities of
SAP HANA on Azure (Large Instances).

Prerequisites for using self-service storage snapshots

Read the documentation What is Azure Application Consistent Snapshot tool.

There are two ways of implementing this tool:

1. Locally on the database server
2. Remotely on a "backup" VM

If you create a backup VM, make sure the latest HANA client is installed on it. With
this method, azacsnap must be able to open a remote database connection to a HANA
instance running in a different VM. Request an SSH key and a storage user from the
Microsoft support team to access the storage. Without this SSH key and user, it isn't
possible to create snapshots.

Download and set up azacsnap


To set up storage snapshots with HANA Large Instances, start by downloading and
installing the azacsnap tool as described in Get started with Azure Application
Consistent Snapshot tool.

By default, azacsnap creates a user called azacsnap. If you prefer another name, you
can specify it during installation. Check the documentation above for details.

Subsequent steps

Follow the documentation of azacsnap to:

Install Azure Application Consistent Snapshot tool


Configure Azure Application Consistent Snapshot tool
Test Azure Application Consistent Snapshot tool
Back up using Azure Application Consistent Snapshot tool
Obtain details using Azure Application Consistent Snapshot tool
Delete using Azure Application Consistent Snapshot tool
Restore using Azure Application Consistent Snapshot tool
Disaster recovery using Azure Application Consistent Snapshot tool
Troubleshoot Azure Application Consistent Snapshot tool
Tips and tricks for using Azure Application Consistent Snapshot tool

Next steps
Read the article What is Azure Application Consistent Snapshot tool
Disaster Recovery principles and
preparation
Article • 02/10/2023

In this article, we'll discuss important disaster recovery (DR) principles for HANA Large
Instances (otherwise known as BareMetal Infrastructure). We'll walk through the steps
you need to take in preparation for disaster recovery. You'll also see how to achieve your
recovery time objective (RTO) and recovery point objective (RPO) in a disaster.

DR principles for HANA Large Instances


HANA Large Instances offer disaster recovery functionality between HANA Large
Instance stamps in different Azure regions. For instance, let's say you deploy HANA
Large Instances in the US West region of Azure. Then you can use the HANA Large
Instances in the US East region as disaster recovery units. Disaster recovery isn't
configured automatically, because it requires you to pay for another HANA Large
Instance in the DR region. The disaster recovery setup works for scale-up and scale-out
setups.

Most customers use the unit in the DR region to run non-production systems that use
an installed HANA instance. The HANA Large Instance needs to be of the same SKU as
the SKU used for production purposes. The following image shows what the disk
configuration between the server unit in the Azure production region and the disaster
recovery region looks like:
As shown in this overview graphic, you'll need to order a second set of disk volumes.
The target disk volumes associated with the HANA Large Instance server in the DR site
are the same size as the production volumes.

The following volumes are replicated from the production region to the DR site:

/hana/data
/hana/logbackups
/hana/shared (includes /usr/sap)

The /hana/log volume isn't replicated. The SAP HANA transaction log isn't needed when
restoring from those volumes.

HANA Large Instance storage replication


The basis of the DR functionality in the HANA Large Instance infrastructure is its storage
replication. The functionality used on the storage side isn't a constant stream of changes
that replicate in an asynchronous manner as changes happen to the storage volume.
Instead, it's a mechanism that relies on creating snapshots of these volumes regularly.
The delta between an already replicated snapshot and a new snapshot that isn't yet
replicated is then transferred to the DR site into target disk volumes. These snapshots
are stored on the volumes. If there's a disaster recovery failover, they need to be
restored on those volumes.

An initial transfer of the complete data of the volume must happen before the smaller
deltas between snapshots can be transferred. After that, the volumes in the
DR site will contain all of the volume snapshots taken in the production site. Eventually,
you can use that DR system to get to an earlier status to recover lost data, without
rolling back the production system.

If there's an MCOD deployment with multiple independent SAP HANA instances on one
HANA Large Instance, all SAP HANA instances should have storage replicated to the DR
side.

When you use HANA System Replication for high-availability in your production site,
and you use storage-based replication for the DR site, the volumes of both nodes from
the primary site to the DR instance are replicated. Purchase extra storage (same size as
primary node) at the DR site to accommodate replication from both primary and
secondary nodes to the DR.

7 Note
The HANA Large Instance storage replication functionality mirrors and replicates
storage snapshots. If you don't take storage snapshots as described in Backup and
restore, there can't be any replication to the DR site. Storage snapshot execution is
a prerequisite to storage replication to the disaster recovery site.

Preparation of the disaster recovery scenario


In this DR scenario, you have a production system running on HANA Large Instances in
the production Azure region. For the steps that follow, let's say the SID of that HANA
system is "PRD." You also have a non-production system running on HANA Large
Instances in the DR Azure region. Its SID is "TST." The following image shows this
configuration:

Let's say the server instance hasn't yet been ordered with the extra storage volume set.
Then SAP HANA on Azure Service Management attaches the added volumes. They're a
target for the production replica to the HANA Large Instance on which you're running
the TST HANA instance. You'll need to provide the SID of your production HANA
instance. After SAP HANA on Azure Service Management confirms the attachment of
those volumes, you'll need to mount those volumes to the HANA Large Instance.
The next step is for you to install the second SAP HANA instance on the HANA Large
Instance in the DR Azure region where you run the TST HANA instance. The newly
installed SAP HANA instance needs to have the same SID. The users created need to
have the same UID and Group ID as the production instance. Read Backup and restore
for details. If the installation succeeds, you need to:

Execute step 2 of the storage snapshot preparation described in Backup and
restore.
Create a public key for the DR unit of the HANA Large Instance if you haven't yet
done so. See step 3 of the storage snapshot preparation described in Backup and
restore.
Maintain the HANABackupCustomerDetails.txt with the new HANA instance and
test whether connectivity into storage works correctly.
Stop the newly installed SAP HANA instance on the HANA Large Instance in the DR
Azure region.
Unmount these PRD volumes and contact SAP HANA on Azure Service
Management. The volumes can't stay mounted to the unit because they can't be
accessible while functioning as the storage replication target.
The operations team establishes the replication relationship between the PRD volumes
in the production region and the PRD volumes in the DR region.

) Important

The /hana/log volume isn't replicated because it isn't necessary to restore the
replicated SAP HANA database to a consistent state in the disaster recovery site.

Next, set the storage snapshot backup schedule to achieve your RTO and RPO if there's
a disaster. To minimize the RPO, set the following replication intervals in the HANA
Large Instance service:

For the volumes covered by the combined snapshot (snapshot type hana), set to
replicate every 15 minutes to the equivalent storage volume targets in the disaster
recovery site.
For the transaction log backup volume (snapshot type logs), set to replicate every
3 minutes to the equivalent storage volume targets in the disaster recovery site.

To minimize the RPO:

Take a hana type storage snapshot every 30 minutes to 1 hour. For more
information, see Back up using Azure Application Consistent Snapshot tool.
Do SAP HANA transaction log backups every 5 minutes.
Take a logs type storage snapshot every 5-15 minutes. With this interval period,
you achieve an RPO of around 15-25 minutes.

With this setup, the sequence of transaction log backups, storage snapshots, and the
replication of the HANA transaction log backup volume and /hana/data, and
/hana/shared (includes /usr/sap) might look like the data shown in this graphic:
To achieve an even better RPO in the disaster recovery case, you can copy the HANA
transaction log backups from SAP HANA on Azure (Large Instances) to the other Azure
region. To achieve this further RPO reduction, take the following steps:

1. Back up the HANA transaction log as frequently as possible to /hana/logbackups.


2. Use rsync to copy the transaction log backups to an NFS share hosted on Azure
virtual machines (see the sketch after this list). The virtual machines (VMs) are in
Azure virtual networks in the Azure production region and in the DR region.
Connect both Azure virtual networks to the circuit connecting the production
HANA Large Instances to Azure. For more information, see Network considerations
for disaster recovery with HANA Large Instances.
3. Keep the transaction log backups in the region of the VM attached to the NFS
exported storage.
4. In a disaster failover case, supplement the transaction log backups you find on the
/hana/logbackups volume with more recently taken transaction log backups on the
NFS share in the DR site.
5. Start a transaction log backup to restore to the latest backup that might be saved
over to the DR region.
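
For illustration, the copy in step 2 might be scripted as follows. The host name
(nfsvm-dr) and the target path are placeholders, not values from this guide:

# Push new HANA transaction log backups of SID PRD to an NFS share hosted on
# an Azure VM that is reachable from both regions (illustrative only)
rsync -av --ignore-existing /hana/logbackups/PRD/ nfsvm-dr:/hanabackup/PRD/logbackups/
# -a preserves attributes, -v is verbose, --ignore-existing skips backups already copied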
When HANA Large Instance operations confirms the replication relationship setup and
you start executing storage snapshot backups, the data replication begins.

As the replication progresses, the snapshots on the PRD volumes in the DR Azure
region aren't restored. The snapshots are only stored. If the volumes are mounted in
such a state, they represent the state in which you unmounted those volumes after the
PRD SAP HANA instance was installed on the server in the DR Azure region. They also
represent the storage backups that aren't yet restored.

If there's a failover, you can also choose to restore to an older storage snapshot instead
of to the latest storage snapshot.

Next steps
Learn about the disaster recovery failover procedure.

Disaster recovery failover procedure


Disaster recovery failover procedure
Article • 02/10/2023

) Important

This article isn't a replacement for the SAP HANA administration documentation or
SAP Notes. We expect that you have a solid understanding of and expertise in SAP
HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery (DR). In this article, screenshots from SAP HANA
Studio are shown. Content, structure, and the nature of the screens of SAP
administration tools and the tools themselves might change from SAP HANA
release to release.

In this article, we'll walk through the steps of failover to a DR site for SAP HANA on
Azure Large Instances (otherwise known as BareMetal Infrastructure).

Failover scenarios and options


There are two cases to consider when you fail over to a DR site:

You need the SAP HANA database to go back to the latest status of data. In this
case, there's a self-service script you can use to do the failover without the need to
contact Microsoft. For the failback, you need to work with Microsoft.
You want to restore to a storage snapshot that's not the latest replicated snapshot.
In this case, you need to work with Microsoft.

7 Note

The following steps must be done on the HANA Large Instance in the DR site.

To restore to the latest replicated storage snapshots, follow the steps in "Perform full DR
failover - azure_hana_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure .

If you want to have multiple SAP HANA instances failed over, run the
azure_hana_dr_failover command several times. When requested, enter the SAP HANA
SID you want to fail over and restore.
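
For orientation, a failover run might look like the following sketch. The .pl
extension and the script location are assumptions about a typical deployment of the
snapshot tools; follow the linked guide for the exact syntax:

cd /opt/Microsoft/scripts        # assumed install location of the snapshot tools
./azure_hana_dr_failover.pl      # when prompted, enter the SID to fail over and restore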

You can test the DR failover without impacting the actual replication relationship. To do
a test failover, follow the steps in "Perform a test DR failover -
azure_hana_test_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure .
) Important

Do not run any production transactions on the instance that you created in the DR
site through the process of testing a failover. The command
azure_hana_test_dr_failover creates a set of volumes that have no relationship to
the primary site. As a result, synchronization back to the primary site is not possible.

If you want to test multiple SAP HANA instances, run the script several times. When
requested, enter the SAP HANA SID of the instance you want to test for failover.

Set DR volumes to an earlier snapshot


Let's say you need to fail over to the DR site to rescue data deleted hours before and
need the DR volumes to be set to an earlier snapshot. Then the following procedure
applies:

1. Shut down the nonproduction instance of HANA on the DR HANA Large Instance
that you're running. A dormant HANA production instance is preinstalled.

2. Make sure that no SAP HANA processes are running. Use the following command
for this check:

/usr/sap/hostctrl/exe/sapcontrol -nr <HANA instance number> -function GetProcessList

The output should show you the hdbdaemon process in a stopped state and no
other HANA processes in a running or started state.
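
For orientation, a fully stopped instance reports output similar to this sketch
(values are illustrative):

08.02.2023 10:15:32
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GRAY, Stopped, , , 41532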

3. Determine to which snapshot name or SAP HANA backup ID you want to have the
disaster recovery site restored. In real disaster recovery cases, this snapshot is
usually the latest snapshot. If you need to recover lost data, pick an earlier
snapshot.

4. Contact Azure Support through a high-priority support request. Ask for the restore
of that snapshot with the name and date of the snapshot. You can also identify it
by the HANA backup ID on the DR site. The default is for the operations side to
restore the /hana/data volume only. If you want to have the /hana/logbackups
volumes too, you need to specifically state that. Don't restore the /hana/shared
volume. Instead, choose specific files like global.ini out of the .snapshot directory
and its subdirectories after you remount the /hana/shared volume for PRD.

Microsoft operations will take these steps:


a. Stop the replication of snapshots from the production volume to the disaster
recovery volumes. This disruption might have already happened if an outage at the
production site caused the disaster.

b. Restore the storage snapshot name or snapshot with the backup ID you chose
on the disaster recovery volumes.

After the restore, the disaster recovery volumes are available to be mounted to the
HANA Large Instances in the DR region.

1. Mount the disaster recovery volumes to the HANA Large Instance unit in the
disaster recovery site.
2. Start the dormant SAP HANA production instance.
3. Let's say you chose to copy transaction log backup logs to reduce the recovery
point objective (RPO) time. Then merge the transaction log backups into the newly
mounted DR /hana/logbackups directory. Don't overwrite existing backups. Copy
newer backups that weren't replicated with the latest replication of a storage
snapshot.
4. You can also restore single files out of the snapshots that weren't replicated to the
/hana/shared/PRD volume in the DR Azure region.

Recover the SAP HANA production instance


The following steps show how to recover the SAP HANA production instance from the
restored storage snapshot and the available transaction log backups.

1. Change the backup location to /hana/logbackups by using SAP HANA Studio.


2. SAP HANA scans through the backup file locations and suggests the most recent
transaction log backup to restore to. The scan can take a few minutes until a
screen like the following appears:
3. Adjust some of the default settings:

Clear Use Delta Backups.
Select Initialize Log Area.

4. Select Finish.
A progress window, like the one shown here, should appear. Keep in mind that the
example is of a disaster recovery restore of a three-node scale-out SAP HANA
configuration.
If the restore stops responding at the Finish screen and doesn't show the progress
screen, confirm that all the SAP HANA instances on the worker nodes are running. If
necessary, start the SAP HANA instances manually.

Failback from a DR to a production site


You can fail back from a DR site to a production site. Let's look at a scenario where
failover into the DR site was caused by problems in the production Azure region and not
by your need to recover lost data.

You've been running your SAP production workload for a while in the disaster recovery
site. As the problems in the production site are resolved, you want to fail back to your
production site. Because you can't lose data, the step back into the production site
involves several steps and close cooperation with the SAP HANA on Azure operations
team. It's up to you to trigger the operations team to start synchronizing back to the
production site after the problems are resolved.
Follow these steps:

1. The SAP HANA on Azure operations team gets the trigger to synchronize the
production storage volumes from the DR storage volumes, which now represent
the production state. In this state, the HANA Large Instance in the production site
is shut down.
2. The SAP HANA on Azure operations team monitors the replication and makes sure
that it's caught up before they inform you.
3. You shut down the applications that use the production HANA Instance in the
disaster recovery site. You then do a HANA transaction log backup. Next, you stop
the HANA instance that's running on the HANA Large Instances in the disaster
recovery site.
4. Now the operations team manually synchronizes the disk volumes again.
5. The SAP HANA on Azure operations team starts the HANA Large Instance in the
production site again. They hand it over to you. You make sure the SAP HANA
instance is shut down at the startup time of the HANA Large Instance.
6. You take the same database restore steps you did when you previously failed over
to the DR site.

Monitor disaster recovery replication


To monitor the status of your storage replication progress, run the script
azure_hana_replication_status. This command must be run from a unit that runs in the
disaster recovery location to function as expected. The command works whether
replication is active or not. The command can be run for every HANA Large Instance of
your tenant in the DR location. It can't be used to obtain details about the boot volume.

For more information on the command and its output, see "Get DR replication status -
azure_hana_replication_status" in Microsoft snapshot tools for SAP HANA on Azure .
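
For orientation, a status check might look like this sketch. The .pl extension and
path are assumptions about a typical deployment; see the linked guide for the exact
syntax and output fields:

cd /opt/Microsoft/scripts            # assumed install location of the snapshot tools
./azure_hana_replication_status.pl   # reports per-volume replication state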

Next steps
Learn about monitoring SAP HANA (Large Instances) on Azure.

Monitor SAP HANA (large instances) on Azure


Azure Large Instances high availability
for SAP on RHEL
Article • 02/10/2023

7 Note

This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.

In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate
an SAP HANA database failover. You need to have a good understanding of Linux, SAP
HANA, and Pacemaker to complete the steps in this guide.

The following table includes the host names that are used throughout this article. The
code blocks in the article show the commands that need to be run, as well as the output
of those commands. Pay close attention to which node is referenced in each command.

Type Host name Node

Primary host sollabdsm35 node 1

Secondary host sollabdsm36 node 2

Configure your Pacemaker cluster


Before you can begin configuring the cluster, set up SSH key exchange to establish trust
between nodes.

1. Use the following commands to create identical /etc/hosts on both nodes.

[root@sollabdsm35 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.azlinux.com
10.60.0.35 sollabdsm35.azlinux.com sollabdsm35 node1
10.60.0.36 sollabdsm36.azlinux.com sollabdsm36 node2
10.20.251.150 sollabdsm36-st
10.20.251.151 sollabdsm35-st
10.20.252.151 sollabdsm36-back
10.20.252.150 sollabdsm35-back
10.20.253.151 sollabdsm36-node
10.20.253.150 sollabdsm35-node

2. Create and exchange the SSH keys.


a. Generate ssh keys.

[root@sollabdsm35 ~]# ssh-keygen -t rsa -b 1024


[root@sollabdsm36 ~]# ssh-keygen -t rsa -b 1024

b. Copy keys to the other hosts for passwordless ssh.

[root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub


sollabdsm35
[root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm36
[root@sollabdsm36 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm35
[root@sollabdsm36 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm36

3. Disable selinux on both nodes.

[root@sollabdsm35 ~]# vi /etc/selinux/config

...

SELINUX=disabled

[root@sollabdsm36 ~]# vi /etc/selinux/config

...

SELINUX=disabled

4. Reboot the servers and then use the following command to verify the status of
selinux.

[root@sollabdsm35 ~]# sestatus


SELinux status: disabled

[root@sollabdsm36 ~]# sestatus

SELinux status: disabled

5. Configure NTP (Network Time Protocol). The time and time zones for both cluster
nodes must match. Use the following command to open chrony.conf and verify
the contents of the file.

a. The following contents should be added to config file. Change the actual values
as per your environment.

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org iburst

b. Enable chrony service.

systemctl enable chronyd

systemctl start chronyd

chronyc tracking

Reference ID : CC0BC90A (voipmonitor.wci.com)

Stratum : 3

Ref time (UTC) : Thu Jan 28 18:46:10 2021

chronyc sources

210 Number of sources = 8

MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ time.nullroutenetworks.c> 2 10 377 1007 -2241us[-2238us] +/- 33ms
^* voipmonitor.wci.com 2 10 377 47 +956us[ +958us] +/- 15ms
^- tick.srs1.ntfo.org 3 10 177 801 -3429us[-3427us] +/- 100ms

6. Update the System

a. First, install the latest updates on the system before you start to install the SBD
device.

b. Customers must make sure that they have at least version 4.1.1-12.el7_6.26 of
the resource-agents-sap-hana package installed, as documented in Support
Policies for RHEL High Availability Clusters - Management of SAP HANA in a
Cluster

c. If you don't want a complete update of the system, even though a complete
update is recommended, update at least the following packages:
i. resource-agents-sap-hana
ii. selinux-policy
iii. iscsi-initiator-utils

node1:~ # yum update

7. Install the SAP HANA and RHEL-HA repositories.

subscription-manager repos --list

subscription-manager repos
--enable=rhel-sap-hana-for-rhel-7-server-rpms

subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms

8. Install the Pacemaker, SBD, OpenIPMI, ipmitool, and fencing_sbd tools on all
nodes.

yum install pcs sbd fence-agent-sbd.x86_64 OpenIPMI ipmitool

Configure Watchdog
In this section, you learn how to configure Watchdog. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Make sure that the watchdog daemon is not running on any systems.

[root@sollabdsm35 ~]# systemctl disable watchdog


[root@sollabdsm36 ~]# systemctl disable watchdog
[root@sollabdsm35 ~]# systemctl stop watchdog
[root@sollabdsm36 ~]# systemctl stop watchdog
[root@sollabdsm35 ~]# systemctl status watchdog

● watchdog.service - watchdog daemon

Loaded: loaded (/usr/lib/systemd/system/watchdog.service; disabled; vendor preset: disabled)

Active: inactive (dead)

Nov 28 23:02:40 sollabdsm35 systemd[1]: Collecting watchdog.service

2. The default Linux watchdog installed during the installation is the iTCO
watchdog, which isn't supported by UCS and HPE SDFlex systems. Therefore, this
watchdog must be disabled.

a. The wrong watchdog is installed and loaded on the system:

sollabdsm35:~ # lsmod |grep iTCO

iTCO_wdt 13480 0

iTCO_vendor_support 13718 1 iTCO_wdt

b. Unload the wrong driver from the environment:


sollabdsm35:~ # modprobe -r iTCO_wdt iTCO_vendor_support

sollabdsm36:~ # modprobe -r iTCO_wdt iTCO_vendor_support

c. To make sure the driver is not loaded during the next system boot, the driver
must be blocklisted. To blocklist the iTCO modules, add the following to the end
of the 50-blacklist.conf file:

sollabdsm35:~ # vi /etc/modprobe.d/50-blacklist.conf

# unload the iTCO watchdog modules
blacklist iTCO_wdt
blacklist iTCO_vendor_support

d. Copy the file to secondary host.

sollabdsm35:~ # scp /etc/modprobe.d/50-blacklist.conf sollabdsm36:/etc/modprobe.d/50-blacklist.conf

e. Test if the ipmi service is started. It is important that the IPMI timer is not
running. The timer management will be done from the SBD pacemaker service.

sollabdsm35:~ # ipmitool mc watchdog get

Watchdog Timer Use: BIOS FRB2 (0x01)

Watchdog Timer Is: Stopped

Watchdog Timer Actions: No action (0x00)

Pre-timeout interval: 0 seconds

Timer Expiration Flags: 0x00

Initial Countdown: 0 sec

Present Countdown: 0 sec


3. By default, the required device /dev/watchdog isn't created.

sollabdsm35:~ # ls -l /dev/watchdog

ls: cannot access /dev/watchdog: No such file or directory

4. Configure the IPMI watchdog.

sollabdsm35:~ # mv /etc/sysconfig/ipmi /etc/sysconfig/ipmi.org

sollabdsm35:~ # vi /etc/sysconfig/ipmi

IPMI_SI=yes
DEV_IPMI=yes
IPMI_WATCHDOG=yes
IPMI_WATCHDOG_OPTIONS="timeout=20 action=reset nowayout=0
panic_wdt_timeout=15"
IPMI_POWEROFF=no
IPMI_POWERCYCLE=no
IPMI_IMB=no

5. Copy the watchdog config file to secondary.

sollabdsm35:~ # scp /etc/sysconfig/ipmi


sollabdsm36:/etc/sysconfig/ipmi

6. Enable and start the ipmi service.

[root@sollabdsm35 ~]# systemctl enable ipmi

Created symlink from /etc/systemd/system/multi-user.target.wants/ipmi.service to /usr/lib/systemd/system/ipmi.service.

[root@sollabdsm35 ~]# systemctl start ipmi

[root@sollabdsm36 ~]# systemctl enable ipmi

Created symlink from /etc/systemd/system/multi-user.target.wants/ipmi.service to /usr/lib/systemd/system/ipmi.service.

[root@sollabdsm36 ~]# systemctl start ipmi

Now the IPMI service is started and the device /dev/watchdog is created, but the
timer is still stopped. Later, SBD manages the watchdog reset and enables the
IPMI timer.

7. Check that the /dev/watchdog exists but is not in use.

[root@sollabdsm35 ~]# ipmitool mc watchdog get


Watchdog Timer Use: SMS/OS (0x04)
Watchdog Timer Is: Stopped
Watchdog Timer Actions: No action (0x00)
Pre-timeout interval: 0 seconds
Timer Expiration Flags: 0x10
Initial Countdown: 20 sec
Present Countdown: 20 sec

[root@sollabdsm35 ~]# ls -l /dev/watchdog


crw------- 1 root root 10, 130 Nov 28 23:12 /dev/watchdog
[root@sollabdsm35 ~]# lsof /dev/watchdog

SBD configuration
In this section, you learn how to configure SBD. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Make sure the iSCSI or FC disk is visible on both nodes. This example uses an FC-
based SBD device. For more information about SBD fencing, see Design Guidance
for RHEL High Availability Clusters - SBD Considerations and Support Policies for
RHEL High Availability Clusters - sbd and fence_sbd

2. The LUN ID must be identical on all nodes.

3. Check multipath status for the sbd device.

multipath -ll
3600a098038304179392b4d6c6e2f4b62 dm-5 NETAPP ,LUN C-Mode
size=1.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 8:0:1:2 sdi 8:128 active ready running
| `- 10:0:1:2 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 8:0:3:2 sdj 8:144 active ready running
`- 10:0:3:2 sdl 8:176 active ready running

4. Create the SBD discs and set up the cluster primitive fencing. This step must be
executed on the first node.

sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 -4 20 -1 10 create

Initializing device /dev/mapper/3600a098038304179392b4d6c6e2f4b62


Creating version 2.1 header on device 4 (uuid: ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce)

Initializing 255 slots on device 4

Device /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is initialized.

5. Copy the SBD config over to node2.

vi /etc/sysconfig/sbd

SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_DELAY_START=no
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=15
SBD_TIMEOUT_ACTION=flush,reboot
SBD_MOVE_TO_ROOT_CGROUP=auto
SBD_OPTS=

scp /etc/sysconfig/sbd node2:/etc/sysconfig/sbd

6. Check that the SBD disk is visible from both nodes.

sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 dump

==Dumping header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62

Header version : 2.1

UUID : ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce

Number of slots : 255


Sector size : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 10

==Header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is dumped

7. Add the SBD device in the SBD config file.

# SBD_DEVICE specifies the devices to use for exchanging sbd messages


# and to monitor. If specifying more than one path, use ";" as
# separator.
#

SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
## Type: yesno
## Default: yes
# Whether to enable the pacemaker integration.
SBD_PACEMAKER=yes

Cluster initialization
In this section, you initialize the cluster. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Set up the cluster user password (all nodes).

passwd hacluster

2. Start PCS on all systems.

systemctl enable pcsd

3. Stop the firewall and disable it on all nodes.


systemctl disable firewalld

systemctl mask firewalld

systemctl stop firewalld

4. Start pcsd service.

systemctl start pcsd

5. Run the cluster authentication only from node1.

pcs cluster auth sollabdsm35 sollabdsm36

Username: hacluster

Password:
sollabdsm35.localdomain: Authorized
sollabdsm36.localdomain: Authorized

6. Create the cluster.

pcs cluster setup --start --name hana sollabdsm35 sollabdsm36

7. Check the cluster status.

pcs cluster status

Cluster name: hana

WARNINGS:

No stonith devices and `stonith-enabled` is not false

Stack: corosync

Current DC: sollabdsm35 (version 1.1.20-5.el7_7.2-3c4c782f70) - partition with quorum

Last updated: Sat Nov 28 20:56:57 2020

Last change: Sat Nov 28 20:54:58 2020 by hacluster via crmd on sollabdsm35

2 nodes configured

0 resources configured

Online: [ sollabdsm35 sollabdsm36 ]

No resources

Daemon Status:

corosync: active/disabled

pacemaker: active/disabled

pcsd: active/disabled

8. If one node doesn't join the cluster, check whether the firewall is still running.

9. Create and enable the SBD Device

pcs stonith create SBD fence_sbd devices=/dev/mapper/3600a098038303f4c467446447a

10. Stop the cluster (on all nodes).

pcs cluster stop --all

11. Restart the cluster services (on all nodes).

systemctl stop pcsd
systemctl stop pacemaker
systemctl stop corosync
systemctl enable sbd
systemctl start corosync
systemctl start pacemaker
systemctl start pcsd

12. Corosync must start the SBD service.


systemctl status sbd

● sbd.service - Shared-storage based fencing daemon

Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor preset: disabled)

Active: active (running) since Wed 2021-01-20 01:43:41 EST; 9min ago

13. Restart the cluster (if not automatically started from pcsd).

pcs cluster start --all

sollabdsm35: Starting Cluster (corosync)...

sollabdsm36: Starting Cluster (corosync)...

sollabdsm35: Starting Cluster (pacemaker)...

sollabdsm36: Starting Cluster (pacemaker)...

14. Enable fencing device settings.

pcs stonith enable SBD --device=/dev/mapper/3600a098038304179392b4d6c6e2f4d65
pcs property set stonith-watchdog-timeout=20
pcs property set stonith-action=reboot

15. Check the new cluster status with now one resource.

pcs status

Cluster name: hana

Stack: corosync

Current DC: sollabdsm35 (version 1.1.16-12.el7-94ff4df) - partition with quorum

Last updated: Tue Oct 16 01:50:45 2018

Last change: Tue Oct 16 01:48:19 2018 by root via cibadmin on sollabdsm35

2 nodes configured

1 resource configured

Online: [ sollabdsm35 sollabdsm36 ]

Full list of resources:

SBD (stonith:fence_sbd): Started sollabdsm35

Daemon Status:

corosync: active/disabled

pacemaker: active/disabled

pcsd: active/enabled

sbd: active/enabled

[root@node1 ~]#

16. Now the IPMI timer must run and the /dev/watchdog device must be opened by
sbd.

ipmitool mc watchdog get

Watchdog Timer Use: SMS/OS (0x44)

Watchdog Timer Is: Started/Running

Watchdog Timer Actions: Hard Reset (0x01)

Pre-timeout interval: 0 seconds

Timer Expiration Flags: 0x10

Initial Countdown: 20 sec

Present Countdown: 19 sec

[root@sollabdsm35 ~] lsof /dev/watchdog

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME

sbd 117569 root 5w CHR 10,130 0t0 323812 /dev/watchdog


17. Check the SBD status.

sbd -d /dev/mapper/3600a098038304445693f4c467446447a list

0 sollabdsm35 clear

1 sollabdsm36 clear
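
Optionally, you can verify messaging through the SBD device before the disruptive
tests; a minimal sketch using the device from this section:

sbd -d /dev/mapper/3600a098038304445693f4c467446447a message sollabdsm36 test
# The sbd daemon on sollabdsm36 logs receipt of the test message (check its journal)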

18. Test the SBD fencing by crashing the kernel.

Trigger the Kernel Crash.

echo c > /proc/sysrq-trigger

The system must reboot after 5 minutes (BMC timeout) or after the value set as
panic_wdt_timeout in the /etc/sysconfig/ipmi config file.

The second test is to fence a node by using PCS commands.

pcs stonith fence sollabdsm36

19. For the rest of the SAP HANA clustering you can disable fencing by setting:

pcs property set stonith-enabled=false


It's sometimes easier to keep fencing deactivated during setup of the cluster,
because you avoid unexpected reboots of the system.
This parameter must be set to true for productive usage. If this parameter isn't
set to true, the cluster won't be supported.
pcs property set stonith-enabled=true

HANA integration into the cluster


In this section, you integrate HANA into the cluster. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

The default and supported way is to create a performance-optimized scenario where the
database can be switched over directly. Only this scenario is described in this
document. We recommend installing one cluster for the QAS system and a separate
cluster for the PRD system; only then is it possible to test all components before
they go into production.

This process builds on the RHEL description at:

https://access.redhat.com/articles/3004101

Steps to follow to configure HSR

Log Replication Mode    Description

Synchronous in-memory (default)    Synchronous in memory (mode=syncmem) means the log write is considered as successful when the log entry has been written to the log volume of the primary and sending the log has been acknowledged by the secondary instance after copying to memory. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. Data loss can occur when primary and secondary fail at the same time while the secondary system is connected, or when a takeover is executed while the secondary system is disconnected. This option provides better performance because it isn't necessary to wait for disk I/O on the secondary instance, but it is more vulnerable to data loss.

Synchronous    Synchronous (mode=sync) means the log write is considered as successful when the log entry has been written to the log volume of the primary and the secondary instance. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. No data loss occurs in this scenario as long as the secondary system is connected. Data loss can occur when a takeover is executed while the secondary system is disconnected. Additionally, this replication mode can run with a full sync option. This means that log write is successful when the log buffer has been written to the log file of the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure) the primary system suspends transaction processing until the connection to the secondary system is reestablished. No data loss occurs in this scenario. You can set the full sync option for system replication only with the parameter [system_replication]/enable_full_sync. For more information on how to enable the full sync option, see Enable Full Sync Option for System Replication.

Asynchronous    Asynchronous (mode=async) means the primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It doesn't wait for confirmation from the secondary system. This option provides better performance because it isn't necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it's more vulnerable to data loss. Data changes may be lost on takeover.

1. These are the actions to execute on node1 (primary).

a. Make sure that the database log mode is set to normal.

su - hr2adm

hdbsql -u system -p $YourPass -i 00 "select value from
"SYS"."M_INIFILE_CONTENTS" where key='log_mode'"

VALUE
"normal"

b. SAP HANA system replication will only work after initial backup has been
performed. The following command creates an initial backup in the /tmp/
directory. Select a proper backup filesystem for the database.

hdbsql -i 00 -u system -p $YourPass "BACKUP DATA USING FILE ('/tmp/backup')"

Backup files were created:

ls -l /tmp

total 2031784
-rw-r----- 1 hr2adm sapsys 155648 Oct 26 23:31 backup_databackup_0_1
-rw-r----- 1 hr2adm sapsys 83894272 Oct 26 23:31 backup_databackup_2_1
-rw-r----- 1 hr2adm sapsys 1996496896 Oct 26 23:31 backup_databackup_3_1

c. Backup all database containers of this database.

hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/sydb')"

hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA FOR HR2 USING FILE ('/tmp/rh2')"

d. Enable the HSR process on the source system.

hdbnsutil -sr_enable --name=DC1

nameserver is active, proceeding ...

successfully enabled system as system replication source site

done.

e. Check the status of the primary system.

hdbnsutil -sr_state

System Replication State

online: true

mode: primary

operation mode: primary

site id: 1
site name: DC1

is source system: true

is secondary/consumer system: false

has secondaries/consumers attached: false

is a takeover active: false

Host Mappings:

~~~~~~~~~~~~~~

Site Mappings:

~~~~~~~~~~~~~~

DC1 (primary/)

Tier of DC1: 1

Replication mode of DC1: primary

Operation mode of DC1:

done.

2. These are the actions to execute on node2 (secondary).


a. Stop the database.

su - hr2adm

sapcontrol -nr 00 -function StopSystem

b. For SAP HANA2.0 only, copy the SAP HANA system PKI SSFS_HR2.KEY and
SSFS_HR2.DAT files from primary node to secondary node.

scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT

c. Enable secondary as the replication site.

su - hr2adm

hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=00 --replicationMode=syncmem --name=DC2

adding site ...

--operationMode not set; using default from global.ini/[system_replication]/operation_mode: logreplay

nameserver node2:30001 not responding.

collecting information ...

updating local ini files ...

done.

d. Start the database.

sapcontrol -nr 00 -function StartSystem

e. Check the database state.

hdbnsutil -sr_state

~~~~~~~~~
System Replication State

online: true

mode: syncmem

operation mode: logreplay

site id: 2

site name: DC2


is source system: false

is secondary/consumer system: true

has secondaries/consumers attached: false

is a takeover active: false

active primary site: 1

primary primarys: node1

Host Mappings:

node2 -> [DC2] node2

node2 -> [DC1] node1

Site Mappings:

DC1 (primary/primary)

|---DC2 (syncmem/logreplay)

Tier of DC1: 1

Tier of DC2: 2

Replication mode of DC1: primary

Replication mode of DC2: syncmem

Operation mode of DC1: primary

Operation mode of DC2: logreplay

Mapping: DC1 -> DC2

done.
~~~~~~~~~~~~~~
3. It's also possible to get more information on the replication status:

hr2adm@node1:/usr/sap/HR2/HDB00> python /usr/sap/HR2/HDB00/exe/python_support/systemReplicationStatus.py

| Database | Host  | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
| SYSTEMDB | node1 | 30001 | nameserver   | 1 | 1 | DC1 | node2 | 30001 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |
| HR2      | node1 | 30007 | xsengine     | 2 | 1 | DC1 | node2 | 30007 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |
| HR2      | node1 | 30003 | indexserver  | 3 | 1 | DC1 | node2 | 30003 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |

status system replication site "2": ACTIVE

overall system replication status: ACTIVE

Local System Replication State

mode: PRIMARY

site id: 1

site name: DC1
Log Replication Mode Description

For more information about log replication mode, see the official SAP documentation .

Network Setup for HANA System Replication

To ensure that the replication traffic is using the right VLAN for the replication, it must
be configured properly in the global.ini . If you skip this step, HANA will use the Access
VLAN for the replication, which might be undesired.
The following examples show the host name resolution configuration for system
replication to a secondary site. Three distinct networks can be identified:

Public network with addresses in the range of 10.0.1.*

Network for internal SAP HANA communication between hosts at each site:
192.168.1.*

Dedicated network for system replication: 10.5.1.*

In the first example, the [system_replication_communication]listeninterface parameter
has been set to .global and only the hosts of the neighboring replicating site are
specified.

In the following example, the [system_replication_communication]listeninterface
parameter has been set to .internal and all hosts of both sites are specified.

For more information, see Network Configuration for SAP HANA System Replication .

For system replication, it isn't necessary to edit the /etc/hosts file. Instead,
internal ('virtual') host names must be mapped to IP addresses in the global.ini file
to create a dedicated network for system replication. The syntax for this is as follows:

global.ini

[system_replication_hostname_resolution]

<ip-address_site>=<internal-host-name_site>
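
For illustration, a global.ini fragment following this syntax, using the dedicated
replication network (10.5.1.*) from the example above, might look like the following
sketch; the host names and addresses are assumptions:

[system_replication_communication]
listeninterface = .global

[system_replication_hostname_resolution]
# On each site, map the replication-network addresses of the neighboring site
10.5.1.1 = node1
10.5.1.2 = node2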

Configure SAP HANA in a Pacemaker cluster


In this section, you learn how to configure SAP HANA in a Pacemaker cluster. This
section uses the same two hosts, sollabdsm35 and sollabdsm36 , referenced at the
beginning of this article.

Ensure you have met the following prerequisites:

Pacemaker cluster is configured according to documentation and has proper and
working fencing

SAP HANA startup on boot is disabled on all cluster nodes as the start and stop
will be managed by the cluster

SAP HANA system replication and takeover using tools from SAP are working
properly between cluster nodes
SAP HANA contains monitoring account that can be used by the cluster from both
cluster nodes

Both nodes are subscribed to 'High-availability' and 'RHEL for SAP HANA' (RHEL
6,RHEL 7) channels

In general, execute all pcs commands only from one node because the CIB
will be automatically updated from the pcs shell.

More info on quorum policy

Steps to configure
1. Configure pcs.

[root@node1 ~]# pcs property unset no-quorum-policy (optional – only if it was set before)
[root@node1 ~]# pcs resource defaults resource-stickiness=1000
[root@node1 ~]# pcs resource defaults migration-threshold=5000

2. Configure corosync. For more information, see How can I configure my RHEL 7
High Availability Cluster with pacemaker and corosync .

cat /etc/corosync/corosync.conf

totem {
    version: 2
    secauth: off
    cluster_name: hana
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1.localdomain
        nodeid: 1
    }

    node {
        ring0_addr: node2.localdomain
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

3. Create the cloned SAPHanaTopology resource. The SAPHanaTopology resource
gathers the status and configuration of SAP HANA System Replication on each node.
SAPHanaTopology requires the following attributes to be configured.

pcs resource create SAPHanaTopology_HR2_00 SAPHanaTopology SID=HR2 InstanceNumber=00 \
op start timeout=600 \
op stop timeout=300 \
op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true

Attribute Name    Description

SID               SAP System Identifier (SID) of the SAP HANA installation. Must be the same for all nodes.

InstanceNumber    2-digit SAP instance identifier.

Resource status

pcs resource show SAPHanaTopology_HR2_00

Clone: SAPHanaTopology_HR2_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HR2_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HR2
Operations: monitor interval=60 timeout=60 (SAPHanaTopology_HR2_00-monitor-interval-60)
            start interval=0s timeout=180 (SAPHanaTopology_HR2_00-start-interval-0s)
            stop interval=0s timeout=60 (SAPHanaTopology_HR2_00-stop-interval-0s)

4. Create the Primary/Secondary SAPHana resource.

The SAPHana resource is responsible for starting, stopping, and relocating the
SAP HANA database. This resource must be run as a Primary/Secondary
cluster resource. The resource has the following attributes.

Attribute Name                Required?  Default value  Description

SID                           Yes        None           SAP System Identifier (SID) of the SAP HANA installation. Must be the same for all nodes.

InstanceNumber                Yes        None           2-digit SAP instance identifier.

PREFER_SITE_TAKEOVER          No         yes            Should the cluster prefer to switch over to the secondary instance instead of restarting the primary locally? ("no": prefer to restart locally; "yes": prefer takeover to the remote site)

AUTOMATED_REGISTER            No         FALSE          Should the former SAP HANA primary be registered as secondary after takeover and DUPLICATE_PRIMARY_TIMEOUT? ("false": no, manual intervention will be needed; "true": yes, the former primary will be registered by the resource agent as secondary)

DUPLICATE_PRIMARY_TIMEOUT     No         7200           Time difference (in seconds) needed between primary time stamps if a dual-primary situation occurs. If the time difference is less than the time gap, the cluster holds one or both instances in a "WAITING" status. This gives an admin a chance to react to a failover. A failed former primary is registered after the time difference has passed. After this registration to the new primary, all data is overwritten by system replication.

5. Create the HANA resource.

pcs resource create SAPHana_HR2_00 SAPHana SID=HR2 InstanceNumber=00 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
op start timeout=3600 \
op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 \
op demote timeout=3600 \
master meta notify=true clone-max=2 clone-node-max=1 interleave=true

pcs resource show SAPHana_HR2_00-primary

Primary: SAPHana_HR2_00-primary
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Resource: SAPHana_HR2_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200 InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HR2
Operations: demote interval=0s timeout=320 (SAPHana_HR2_00-demote-interval-0s)
            monitor interval=120 timeout=60 (SAPHana_HR2_00-monitor-interval-120)
            monitor interval=121 role=Secondary timeout=60 (SAPHana_HR2_00-monitor-interval-121)
            monitor interval=119 role=Primary timeout=60 (SAPHana_HR2_00-monitor-interval-119)
            promote interval=0s timeout=320 (SAPHana_HR2_00-promote-interval-0s)
            start interval=0s timeout=180 (SAPHana_HR2_00-start-interval-0s)
            stop interval=0s timeout=240 (SAPHana_HR2_00-stop-interval-0s)

crm_mon -A1

....

2 nodes configured

5 resources configured

Online: [ node1.localdomain node2.localdomain ]

Active resources:

.....

Node Attributes:

* Node node1.localdomain:

+ hana_hr2_clone_state : PROMOTED

+ hana_hr2_remoteHost : node2

+ hana_hr2_roles : 4:P:primary1:primary:worker:primary

+ hana_hr2_site : DC1

+ hana_hr2_srmode : syncmem

+ hana_hr2_sync_state : PRIM

+ hana_hr2_version : 2.00.033.00.1535711040

+ hana_hr2_vhost : node1

+ lpa_hr2_lpt : 1540866498

+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:

+ hana_hr2_clone_state : DEMOTED

+ hana_hr2_op_mode : logreplay

+ hana_hr2_remoteHost : node1

+ hana_hr2_roles : 4:S:primary1:primary:worker:primary

+ hana_hr2_site : DC2

+ hana_hr2_srmode : syncmem

+ hana_hr2_sync_state : SOK

+ hana_hr2_version : 2.00.033.00.1535711040

+ hana_hr2_vhost : node2

+ lpa_hr2_lpt : 30

+ primary-SAPHana_HR2_00 : 100

6. Create the virtual IP address resource. The cluster will contain a virtual IP address
to reach the primary instance of SAP HANA. The following is an example command to
create an IPaddr2 resource with IP 10.7.0.84/24.

pcs resource create vip_HR2_00 IPaddr2 ip="10.7.0.84"

pcs resource show vip_HR2_00

Resource: vip_HR2_00 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=10.7.0.84
Operations: monitor interval=10s timeout=20s (vip_HR2_00-monitor-interval-10s)
            start interval=0s timeout=20s (vip_HR2_00-start-interval-0s)
            stop interval=0s timeout=20s (vip_HR2_00-stop-interval-0s)

7. Create constraints.

For correct operation, we need to ensure that SAPHanaTopology resources are
started before starting the SAPHana resources, and also that the virtual IP
address is present on the node where the Primary resource of SAPHana is
running. To achieve this, the following two constraints need to be created.

pcs constraint order SAPHanaTopology_HR2_00-clone then SAPHana_HR2_00-primary symmetrical=false

pcs constraint colocation add vip_HR2_00 with primary SAPHana_HR2_00-primary 2000

Testing the manual move of SAPHana resource to another node (SAP HANA takeover by cluster)


To test out the move of the SAPHana resource from one node to another, use the
command below. Note that the option --primary should not be used when running the
following command because of how the SAPHana resource works internally.

pcs resource move SAPHana_HR2_00-primary

After each pcs resource move command invocation, the cluster creates location
constraints to achieve the move of the resource. These constraints must be removed to
allow automatic failover in the future. To remove them, you can use the following
command.

pcs resource clear SAPHana_HR2_00-primary


crm_mon -A1
Node Attributes:
* Node node1.localdomain:
+ hana_hr2_clone_state : DEMOTED
+ hana_hr2_remoteHost : node2
+ hana_hr2_roles : 2:P:primary1::worker:
+ hana_hr2_site : DC1
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : PRIM
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node1
+ lpa_hr2_lpt : 1540867236
+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:
+ hana_hr2_clone_state : PROMOTED
+ hana_hr2_op_mode : logreplay
+ hana_hr2_remoteHost : node1
+ hana_hr2_roles : 4:S:primary1:primary:worker:primary
+ hana_hr2_site : DC2
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : SOK
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node2
+ lpa_hr2_lpt : 1540867311
+ primary-SAPHana_HR2_00 : 100

Log in to HANA as verification.

Demoted host:

hdbsql -i 00 -u system -p $YourPass -n 10.7.0.82

result:

* -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused (10.7.0.82:30015))

Promoted host:

hdbsql -i 00 -u system -p $YourPass -n 10.7.0.84

Welcome to the SAP HANA Database interactive terminal.

Type: \h for help with commands

\q to quit

hdbsql HR2=>

DB is online

With the option AUTOMATED_REGISTER=false, you can't switch back and forth.

If this option is set to false, you must re-register the node:

hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 --replicationMode=syncmem --name=DC1

Now node2, which was the primary, acts as the secondary host.

Consider setting this option to true to automate the registration of the demoted host.

pcs resource update SAPHana_HR2_00-primary AUTOMATED_REGISTER=true


pcs cluster node clear node1

Whether you prefer automatic registering depends on the customer scenario.
Automatically reregistering the node after a takeover is easier for the operations
team. However, you may want to register the node manually to first run additional
tests and make sure everything works as you expect.

References
1. Automated SAP HANA System Replication in Scale-Up in pacemaker cluster
2. Support Policies for RHEL High Availability Clusters - Management of SAP HANA in
a Cluster
3. Setting up Pacemaker on RHEL in Azure - Azure Virtual Machines
4. Azure HANA Large Instances control through Azure portal - Azure Virtual
Machines
Monitor SAP HANA (Large instances) on
Azure
Article • 02/10/2023

In this article, we'll look at monitoring SAP HANA Large Instances on Azure (otherwise
known as BareMetal Infrastructure).

SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment.
Monitoring the operating system and application is important. You'll want to know how
the applications consume the following resources:

CPU
Memory
Network bandwidth
Disk space

Monitor your SAP HANA Large Instances to see whether the above resources are
sufficient or whether they're being depleted. The following sections give more detail on
each of these resources.

CPU resource consumption


SAP defines a maximum threshold of CPU use for the SAP HANA workload. Staying
within this threshold ensures you have enough CPU resources to work through the data
stored in memory. High CPU consumption can happen when SAP HANA services
execute queries because of missing indexes or similar issues. So monitoring CPU
consumption of the HANA Large Instance and CPU consumption of specific HANA
services is critical.

Memory consumption
It's important to monitor memory consumption both within HANA and outside of HANA
on the SAP HANA Large Instance. Monitor how the data is consuming HANA-allocated
memory so you can stay within the sizing guidelines of SAP. Monitor memory
consumption on the Large Instance to make sure non-HANA software doesn't consume
too much memory. You don't want non-HANA software competing with HANA for
memory.

Network bandwidth
The bandwidth of the Azure Virtual Network (VNet) gateway is limited. Only so much
data can move into the Azure VNet. Monitor the data received by all Azure VMs within a
VNet. This way you'll know when you're nearing the limits of the Azure gateway SKU you
selected. It also makes sense to monitor incoming and outgoing network traffic on the
HANA Large Instance to track the volumes handled over time.

Disk space
Disk space consumption usually increases over time. Common causes include:

Data volume increases over time
Execution of transaction log backups
Storing trace files
Taking storage snapshots

So it's important to monitor disk space usage and manage the disk space associated
with the HANA Large Instance.

Preloaded system diagnostic tools


For the Type II SKUs of the HANA Large Instances, the server comes with the preloaded
system diagnostic tools. You can use these diagnostic tools to do the system health
check.

Run the following command to generate the health check log file at
/var/log/health_check.

/opt/sgi/health_check/microsoft_tdi.sh

When you work with the Microsoft Support team to troubleshoot an issue, you may be
asked to provide the log files by using these diagnostic tools. You can zip the file using
this command:

tar -czvf health_check_logs.tar.gz /var/log/health_check

Azure Monitor for SAP solutions


You can use Azure Monitor for SAP solutions to monitor all of the resources listed above
and more. Azure Monitor for SAP solutions is native to Azure. It allows you to collect
data from Azure infrastructure and databases into a single location and visually correlate
the data for faster troubleshooting. For more information, see Monitor SAP on Azure.

Next steps
Learn about how to monitor and troubleshoot from within SAP HANA.

Monitoring and troubleshooting from HANA side


Monitoring and troubleshooting from
HANA side
Article • 02/10/2023

In this article, we'll look at monitoring and troubleshooting your SAP HANA on Azure
(Large Instances) using resources provided by SAP HANA.

To analyze problems related to SAP HANA on Azure (Large Instances), you'll want to
narrow down the root cause of a problem. SAP has published lots of documentation to
help you. FAQs related to SAP HANA performance can be found in the following SAP
Notes:

SAP Note #2222200 – FAQ: SAP HANA Network
SAP Note #2100040 – FAQ: SAP HANA CPU
SAP Note #1999997 – FAQ: SAP HANA Memory
SAP Note #2000000 – FAQ: SAP HANA Performance Optimization
SAP Note #1999930 – FAQ: SAP HANA I/O Analysis
SAP Note #2177064 – FAQ: SAP HANA Service Restart and Crashes

SAP HANA alerts


First, check the current SAP HANA alert logs. In SAP HANA Studio, go to Administration
Console: Alerts: Show: all alerts. This tab will show all SAP HANA alerts for values (free
physical memory, CPU use, and so on) that fall outside the set minimum and maximum
thresholds. By default, checks are automatically refreshed every 15 minutes.
CPU
For an alert triggered by improper threshold setting, reset to the default value or a more
reasonable threshold value.

The following alerts may indicate CPU resource problems:

Host CPU Usage (Alert 5)
Most recent savepoint operation (Alert 28)
Savepoint duration (Alert 54)

You may notice high CPU consumption on your SAP HANA database from:

Alert 5 (Host CPU usage) is raised for current or past CPU usage
The displayed CPU usage on the overview screen
The Load graph might show high CPU consumption, or high consumption in the past:

An alert triggered by high CPU use could be caused by several reasons:

Execution of certain transactions
Data loading
Jobs that aren't responding
Long-running SQL statements
Bad query performance (for example, with BW on HANA cubes)

For detailed CPU usage troubleshooting steps, see SAP HANA Troubleshooting: CPU
Related Causes and Solutions .

Operating system (OS)


An important check for SAP HANA on Linux is to make sure Transparent Huge Pages are
disabled. For more information, see SAP Note #2131662 – Transparent Huge Pages
(THP) on SAP HANA Servers .

You can check whether Transparent Huge Pages are enabled through the following Linux
command: cat /sys/kernel/mm/transparent_hugepage/enabled

If always is enclosed in brackets, it means that the Transparent Huge Pages are
enabled: [always] madvise never
If never is enclosed in brackets, it means that the Transparent Huge Pages are
disabled: always madvise [never]
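
A minimal runtime-only sketch for disabling Transparent Huge Pages (see SAP Note
#2131662 for the supported, persistent configuration on your OS release):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled
# Expected output: always madvise [never]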

The following Linux command should return nothing: rpm -qa | grep ulimit. If it appears
ulimit is installed, uninstall it immediately.
Memory
You may observe that the amount of memory allotted to the SAP HANA database is
higher than expected. The following alerts indicate issues with high memory usage:

Host physical memory usage (Alert 1)
Memory usage of name server (Alert 12)
Total memory usage of Column Store tables (Alert 40)
Memory usage of services (Alert 43)
Memory usage of main storage of Column Store tables (Alert 45)
Runtime dump files (Alert 46)

For detailed memory troubleshooting steps, see SAP HANA Troubleshooting: Root
Causes of Memory Problems .

Network
Refer to SAP Note #2081065 – Troubleshooting SAP HANA Network and do the
network troubleshooting steps in this SAP Note.

1. Analyzing round-trip time between server and client.

Run the SQL script HANA_Network_Clients .

2. Analyze internode communication.

Run SQL script HANA_Network_Services .

3. Run Linux command ifconfig (the output shows whether any packet losses are
occurring).

4. Run Linux command tcpdump.

Also, use the open-source IPERF tool (or similar) to measure real application network
performance.
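
For example, a basic throughput test with iperf3 might look like this sketch; the
host names are placeholders, and iperf3 comes from your distribution's repositories:

server:~ # iperf3 -s                    # start the listener on one host
client:~ # iperf3 -c server -t 30 -P 4  # 30-second test with 4 parallel streams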

For detailed network troubleshooting steps, see SAP HANA Troubleshooting: Network
Performance and Connectivity Problems .

Storage
Let's say there are issues with I/O performance. End users may then find applications, or
the system as a whole, runs sluggishly, is unresponsive, or can even stop responding. In
the Volumes tab in SAP HANA Studio, you can see the attached volumes and what
volumes are used by each service.

On the lower part of the screen (on the Volumes tab), you can see details of the
volumes, such as files and I/O statistics.

For I/O troubleshooting steps, see SAP HANA Troubleshooting: I/O Related Root Causes
and Solutions . For disk-related troubleshooting steps, see SAP HANA
Troubleshooting: Disk Related Root Causes and Solutions .

Diagnostic tools
Do an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool
returns potentially critical technical issues that should have already been raised as alerts
in SAP HANA Studio.

1. Refer to SAP Note #1969700 – SQL statement collection for SAP HANA and
download the SQL Statements.zip file attached to that note. Store this .zip file on
the local hard drive.

2. In SAP HANA Studio, on the System Information tab, right-click in the Name
column and select Import SQL Statements.
3. Select the SQL Statements.zip file stored locally; a folder with the corresponding
SQL statements will be imported. At this point, the many different diagnostic
checks can be run with these SQL statements.

For example, to test SAP HANA System Replication bandwidth requirements, right-
click the Bandwidth statement under Replication: Bandwidth and select Open in
SQL Console.

The complete SQL statement opens, allowing input parameters (modification
section) to be changed and then executed.

4. Another example is to right-click the statements under Replication: Overview.
Select Execute from the context menu:

You'll view information helpful with troubleshooting:


5. Do the same for HANA_Configuration_Minichecks and check for any X marks in the
C (Critical) column.

Sample outputs:

HANA_Configuration_MiniChecks_Rev102.01+1 for general SAP HANA checks.
HANA_Services_Overview for an overview of which SAP HANA services are currently running.
HANA_Services_Statistics for SAP HANA service information (CPU, memory, and so on).
HANA_Configuration_Overview_Rev110+ for general information on the SAP HANA instance.
HANA_Configuration_Parameters_Rev70+ to check SAP HANA parameters.

Next steps
Learn how to set up high availability on the SUSE operating system using the fencing
device.

High availability set up in SUSE using a fencing device


Install and configure SAP HANA (Large
Instances) on Azure
Article • 02/10/2023

In this article, we'll walk through validating, configuring, and installing SAP HANA Large
Instances (HLIs) on Azure (otherwise known as BareMetal Infrastructure).

Prerequisites
Before reading this article, become familiar with:

HANA Large Instances common terms
HANA Large Instances SKUs

Also see:

Connecting Azure VMs to HANA Large Instances


Connect a virtual network to HANA Large Instances

Planning your installation


The installation of SAP HANA is your responsibility. You can start installing a new SAP
HANA on Azure (Large Instances) server after you establish the connectivity between
your Azure virtual networks and the HANA Large Instance unit(s).

7 Note

Per SAP policy, the installation of SAP HANA must be performed by a person who's
passed the Certified SAP Technology Associate exam, SAP HANA Installation
certification exam, or who is an SAP-certified system integrator (SI).

When you're planning to install HANA 2.0, see SAP support note #2235581 - SAP HANA:
Supported operating systems . Make sure the operating system (OS) is supported with
the SAP HANA release you're installing. The supported OS for HANA 2.0 is more
restrictive than the supported OS for HANA 1.0. Confirm that the OS release you're
interested in is supported for the particular HANA Large Instance. Use this list ; select
the HLI to see the details of the supported OS list for that unit.

Validate the following before you begin the HANA installation:


HLI unit(s)
Operating system configuration
Network configuration
Storage configuration

Validate the HANA Large Instance unit(s)


After you receive the HANA Large Instances from Microsoft, establish access and
connectivity to them. Then validate the following settings and adjust as necessary.

1. Check in the Azure portal whether the instance(s) are showing up with the correct
SKUs and OS. For more information, see Azure HANA Large Instances control
through Azure portal.

2. Register the OS of the instance with your OS provider. This step includes
registering your SUSE Linux OS in an instance of the SUSE Subscription
Management Tool (SMT) that's deployed in a VM in Azure.

The HANA Large Instance can connect to this SMT instance. (For more information,
see How to set up SMT server for SUSE Linux). If you're using a Red Hat OS, it
needs to be registered with the Red Hat Subscription Manager that you'll connect
to. For more information, see the remarks in What is SAP HANA on Azure (Large
Instances)?.

This step is necessary for patching the OS, which is your responsibility. For SUSE,
see the documentation on installing and configuring SMT .

3. Check for new patches and fixes of the specific OS release/version. Verify that the
HANA Large Instance has the latest patches. Sometimes the latest patches aren't
included, so be sure to check.

4. Check the relevant SAP notes for installing and configuring SAP HANA on the
specific OS release/version. Microsoft can't always configure an HLI unit completely,
because recommendations change, and SAP notes or configurations may depend on
individual scenarios.

So be sure to read the SAP notes related to SAP HANA for your exact Linux release.
Also check the configurations of the OS release/version and apply the
configuration settings if you haven't already.

Specifically, check the following parameters and, if necessary, adjust them to:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216

Starting with SLES 12 SP1 and Red Hat Enterprise Linux (RHEL) 7.2, these
parameters must be set in a configuration file in the /etc/sysctl.d directory. For
example, a configuration file with the name 91-NetApp-HANA.conf must be
created (see the sketch after this list). For older SLES and RHEL releases, these
parameters must be set in /etc/sysctl.conf.

For all RHEL releases starting with RHEL 6.3, keep in mind:

The sunrpc.tcp_slot_table_entries = 128 parameter must be set in
/etc/modprobe.d/sunrpc-local.conf. If the file doesn't exist, create it first by
adding the entry:

options sunrpc tcp_max_slot_table_entries=128

5. Check the system time of your HANA Large Instance. The instances are deployed
with a system time zone. This time zone represents the location of the Azure
region in which the HANA Large Instance stamp is located. You can change the
system time or time zone of the instances you own.

If you order more instances into your tenant, you need to adapt the time zone of
the newly delivered instances. Microsoft has no insight into the system time zone
you set up with the instances after the handover. So newly deployed instances
might not be set in the same time zone as the one you changed to. It's up to you
to adapt the time zone of the instance(s) that were handed over, as needed.

6. Check /etc/hosts. As the blades get handed over, they have different IP addresses
assigned for different purposes. It's important to check the /etc/hosts file when
units are added into an existing tenant. The /etc/hosts file of the newly deployed
systems may not be maintained correctly with the IP addresses of systems
delivered earlier. Ensure that a newly deployed instance can resolve the names of
the units you deployed earlier in your tenant.
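
The following is a minimal sketch of step 4 above, for SLES 12 SP1 or RHEL 7.2 and
later. It creates the drop-in file named in the text and reloads the kernel settings; run
it as root:

Bash

# Create the drop-in file with the recommended network settings
cat <<'EOF' > /etc/sysctl.d/91-NetApp-HANA.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
EOF

# Reload settings from all sysctl configuration files
sysctl --system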

Operating system
The swap space of the delivered OS image is set to 2 GB according to the SAP support
note #1999997 - FAQ: SAP HANA memory . If you want a different setting, you must
set it yourself.
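
As a generic Linux sketch for checking and extending swap (this isn't an HLI-specific
procedure; the file path and size are illustrative only):

Bash

# Check the current swap configuration
swapon --show
free -g

# Example: add a 2-GB swap file (path and size are placeholders)
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile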
SUSE Linux Enterprise Server 12 SP1 for SAP applications is the distribution of Linux
that's installed for SAP HANA on Azure (Large Instances). This distribution provides SAP-
specific capabilities, including pre-set parameters for running SAP on SLES effectively.

For several useful resources related to deploying SAP HANA on SLES, see:

Resource library/white papers on the SUSE website.

SAP on SUSE on the SAP Community Network (SCN).

These resources include information on setting up high availability, security hardening
specific to SAP operations, and more.

Here are more resources for SAP on SUSE:

SAP HANA on SUSE Linux site


Best Practice for SAP: Enqueue replication – SAP NetWeaver on SUSE Linux
Enterprise 12
ClamSAP – SLES virus protection for SAP (including SLES 12 for SAP applications)

The following documents are SAP support notes applicable to implementing SAP HANA
on SLES 12:

SAP support note #1944799 – SAP HANA guidelines for SLES operating system
installation
SAP support note #2205917 – SAP HANA DB recommended OS settings for SLES
12 for SAP applications
SAP support note #1984787 – SUSE Linux Enterprise Server 12: installation notes
SAP support note #171356 – SAP software on Linux: General information
SAP support note #1391070 – Linux UUID solutions

Red Hat Enterprise Linux for SAP HANA is another offering for running SAP HANA on
HANA Large Instances. Releases of RHEL 7.2 and 7.3 are available and supported. For
more information on SAP on Red Hat, see SAP HANA on Red Hat Linux site .

The following documents are SAP support notes applicable to implementing SAP HANA
on Red Hat:

SAP support note #2009879 - SAP HANA guidelines for Red Hat Enterprise Linux
(RHEL) operating system
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL
7
SAP support note #1391070 – Linux UUID solutions
SAP support note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or
higher) on RHEL 6 or SLES 11
SAP support note #2397039 - FAQ: SAP on RHEL
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and
upgrade

Time synchronization
SAP applications built on the SAP NetWeaver architecture are sensitive to time
differences for the components of the SAP system. SAP ABAP short dumps with the
error title of ZDATE_LARGE_TIME_DIFF are probably familiar. That's because these short
dumps appear when the system time of different servers or virtual machines (VMs) is
drifting too far apart.

For SAP HANA on Azure (Large Instances), time synchronization in Azure doesn't apply
to the compute units in the Large Instance stamps. It also doesn't apply to running SAP
applications in native Azure VMs, because Azure ensures a system's time is properly
synchronized.

As a result, you need to set up a separate time server. This server will be used by SAP
application servers running on Azure VMs. It will also be used by the SAP HANA
database instances running on HANA Large Instances. The storage infrastructure in
Large Instance stamps is time-synchronized with Network Time Protocol (NTP) servers.
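
A minimal sketch of pointing a server at such a time source, assuming a chrony-based
setup; <your-ntp-server> is a placeholder for the time server you operate:

Bash

# Add your time server to /etc/chrony.conf, for example:
#   server <your-ntp-server> iburst

systemctl enable chronyd
systemctl restart chronyd

# Verify the synchronization status
chronyc tracking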

Networking
In designing your Azure virtual networks and connecting those virtual networks to the
HANA Large Instances, be sure to follow the recommendations described in:

SAP HANA (Large Instance) overview and architecture on Azure


SAP HANA (Large Instances) infrastructure and connectivity on Azure

Here are some details worth mentioning about the networking of the single units. Every
HANA Large Instance unit comes with two or three IP addresses assigned to two or
three network interface controller (NIC) ports. Three IP addresses are used in HANA
scale-out configurations and the HANA system replication scenario. One of the IP
addresses assigned to the NIC of the unit is out of the server IP pool that's described in
SAP HANA (Large Instances) overview and architecture on Azure.

For more information about Ethernet details for your architecture, see HLI supported
scenarios.

Storage
The storage layout for SAP HANA (Large Instances) is configured by SAP HANA on
Azure Service Management using SAP recommended guidelines.

The rough sizes of the different volumes with the different HANA Large Instances SKUs
are documented in SAP HANA (Large Instances) overview and architecture on Azure.

The naming conventions of the storage volumes are listed in the following table:

Storage usage   | Mount name                | Volume name
HANA data       | /hana/data/SID/mnt0000<m> | Storage IP:/hana_data_SID_mnt00001_tenant_vol
HANA log        | /hana/log/SID/mnt0000<m>  | Storage IP:/hana_log_SID_mnt00001_tenant_vol
HANA log backup | /hana/log/backups         | Storage IP:/hana_log_backups_SID_mnt00001_tenant_vol
HANA shared     | /hana/shared/SID          | Storage IP:/hana_shared_SID_mnt00001_tenant_vol/shared
usr/sap         | /usr/sap/SID              | Storage IP:/hana_shared_SID_mnt00001_tenant_vol/usr_sap

SID is the HANA instance System ID.

Tenant is an internal enumeration of operations when deploying a tenant.

HANA shared and usr/sap share the same volume. The nomenclature of the mount
points includes the system ID of the HANA instances and the mount number. In
scale-up deployments, there's only one mount, such as mnt00001. In scale-out
deployments, you'll see as many mounts as you have worker and primary nodes.

For scale-out environments, data, log, and log backup volumes are shared and attached
to each node in the scale-out configuration. For configurations with multiple SAP
instances, a different set of volumes is created and attached to the HANA Large
Instance. For storage layout details for your scenario, see HLI supported scenarios.

HANA Large Instances come with a generous disk volume for HANA/data and a
HANA/log/backup volume. The HANA/data volume is so large because the storage
snapshots use the same disk volume. The more storage snapshots you take, the more
space is consumed by snapshots in your assigned storage volumes.

The HANA/log/backup volume isn't supposed to be the volume for database backups.
It's sized to be used as the backup volume for the HANA transaction log backups. For
more information, see SAP HANA (Large Instances) high availability and disaster
recovery on Azure.
You can increase your storage by purchasing extra capacity in 1-TB increments. This
extra storage can be added as new volumes to a HANA Large Instance.

During onboarding with SAP HANA on Azure Service Management, you'll specify a user
ID (UID) and group ID (GID) for the sidadm user and sapsys group (for example:
1000,500). During installation of the SAP HANA system, you must use these same values.
If you deploy multiple HANA instances on a unit, you get multiple sets of
volumes (one set for each instance). So at deployment time, you need to define:

The SID of the different HANA instances (sidadm is derived from it).
The memory sizes of the different HANA instances. The memory size per instance
defines the size of the volumes in each individual volume set.

Based on storage provider recommendations, the following mount options are
configured for all mounted volumes (excluding the boot LUN):

nfs rw, vers=4, hard, timeo=600, rsize=1048576, wsize=1048576, intr, noatime, lock 0 0

These mount points are configured in /etc/fstab as shown in the following screenshots:
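
As an illustrative sketch only (the storage IP address 10.250.20.21, the SID H11, and
the tenant value t020 are placeholders patterned on the naming conventions above), a
single /etc/fstab entry could look like:

10.250.20.21:/hana_data_H11_mnt00001_t020_vol /hana/data/H11/mnt00001 nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock 0 0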

The output of the command df -h on a S72m HANA Large Instance looks like:

The storage controller and nodes in the Large Instance stamps are synchronized to NTP
servers. Synchronizing the SAP HANA on Azure (Large Instances) and Azure VMs against
an NTP server is important. It eliminates significant time drift between the infrastructure
and the compute units in Azure or Large Instance stamps.
To optimize SAP HANA to the storage used underneath, set the following SAP HANA
configuration parameters:

max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all

For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the
installation of the SAP HANA database, as described in SAP note #2267798 -
Configuration of the SAP HANA database .

You can also configure the parameters after the SAP HANA database installation by
using the hdbparam framework.

The storage used in HANA Large Instances has a file size limitation of 16 TB per file.
Unlike with file size limitations in EXT3 file systems, HANA isn't implicitly aware of
the storage limitation enforced by the HANA Large Instances storage. As a result,
HANA won't automatically create a new data file when the 16-TB file size limit is
reached. As HANA attempts to grow a file beyond 16 TB, it reports errors and the
index server eventually crashes.

) Important

To prevent HANA from trying to grow data files beyond the 16 TB file size limit of
HANA Large Instance storage, set the following parameters in the SAP HANA
global.ini configuration file:

datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285

With SAP HANA 2.0, the hdbparam framework has been deprecated. So the parameters
must be set by using SQL commands. For more information, see SAP note #2399079:
Elimination of hdbparam in HANA 2 .
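
For example, assuming these parameters live in the fileio section of global.ini (verify
this against the SAP notes above for your release), the SQL statements could be issued
through hdbsql like the following sketch; the instance number and credentials are
placeholders:

Bash

# Set one of the I/O parameters on SAP HANA 2.0 via SQL (hdbparam is deprecated)
hdbsql -i <instance-number> -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','max_parallel_io_requests') = '128' WITH RECONFIGURE"

Repeat the statement for the async_read_submit, async_write_submit_active, and
async_write_submit_blocks parameters.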

Refer to HLI supported scenarios to learn more about the storage layout for your
architecture.

Next steps
Go through the steps of installing SAP HANA on Azure (Large Instances).

Install HANA on SAP HANA on Azure (Large Instances)


Azure HANA Large Instances control
through Azure portal
Article • 02/10/2023

7 Note

For Rev 4.2, follow the instructions in the Manage BareMetal Instances through
the Azure portal topic.

This document covers how HANA Large Instances are presented in the Azure portal and
what activities you can conduct through the Azure portal with the HANA Large Instance
units deployed for you. Visibility of HANA Large Instances in the Azure portal is
provided through an Azure resource provider for HANA Large Instances, which currently
is in public preview.

Register HANA Large Instance Resource


Provider
Usually, the Azure subscription you used for HANA Large Instance deployments is
registered for the HANA Large Instance Resource Provider. However, if you can't see
your deployed HANA Large Instance units, register the Resource Provider in your Azure
subscription. There are two ways to register the HANA Large Instance Resource
Provider:

Register through CLI interface


Via the Azure CLI, sign in to the Azure subscription that you used for the HANA Large
Instance deployment. You can (re-)register the HANA Large Instance Resource Provider
with this command:

Azure CLI

az provider register --namespace Microsoft.HanaOnAzure
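
To verify the registration state afterward, you can query the provider; the command
returns Registered once the operation completes:

Azure CLI

az provider show --namespace Microsoft.HanaOnAzure --query registrationState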

For more information, see the article Azure resource providers and types.

Register through Azure portal


You can (re-)register the HANA Large Instance Resource Provider through the Azure
portal. List your subscriptions in the Azure portal and double-click the subscription
that was used to deploy your HANA Large Instance unit(s). Once you're on the overview
page of your subscription, select "Resource providers" as shown below and type "HANA"
into the search window.

In the screenshot shown, the resource provider was already registered. If the resource
provider isn't yet registered, select "re-register" or "register".

For more information, see the article Azure resource providers and types.

Display of HANA Large Instance units in the


Azure portal
When you submit a HANA Large Instance deployment request, you're asked to specify
the Azure subscription that you're connecting to the HANA Large Instances. We
recommend using the same subscription you use to deploy the SAP application layer
that works against the HANA Large Instance units. As your first HANA Large Instances
are deployed, a new Azure resource group is created in the Azure subscription you
submitted in the deployment request. The new resource group lists all the HANA Large
Instance units you've deployed in that subscription.

To find the new Azure resource group, list the resource groups in your subscription by
navigating through the left navigation pane of the Azure portal. In the list of resource
groups, you might need to filter on the subscription you used to deploy the HANA Large
Instances.

After filtering to the correct subscription, you may still have a long list of resource
groups. Look for one with a postfix of -Txxx, where "xxx" is three digits, such as -T050.

When you've found the resource group, list its details. The list could look like:

Each unit listed represents a single HANA Large Instance unit that has been deployed
in your subscription. In this case, you see eight different HANA Large Instance units
that were deployed in your subscription. If you deployed several HANA Large Instance
tenants under the same Azure subscription, you'll find multiple Azure resource groups.

Look at attributes of a single HLI unit


In the list of HANA Large Instance units, you can select a single unit to get to its
details.

In the overview screen, after selecting 'Show more', you get a presentation of the unit
that looks like:

The attributes shown look much like Azure VM attributes. The header on the left-hand
side shows the resource group, Azure region, subscription name, and subscription ID, as
well as any tags you added. By default, HANA Large Instance units have no tags
assigned. On the right-hand side of the header, the name of the unit is listed as it was
assigned during deployment. The operating system and the IP address are shown as well.
As with VMs, the HANA Large Instance unit type with the number of CPU threads and
memory is also shown. More details on the different HANA Large Instance units are
shown here:

Available SKUs for HLI


SAP HANA (Large Instances) storage architecture

Additional data on the lower right side is the revision of the HANA Large Instance
stamp. Possible values are:

Revision 3
Revision 4

Revision 4 is the latest architecture released for HANA Large Instances, with major
improvements in network latency between Azure VMs and HANA Large Instance units
deployed in Revision 4 stamps or rows. Another important piece of information is found
in the lower right corner of the overview: the name of the Azure proximity placement
group that's automatically created for each deployed HANA Large Instance unit. This
proximity placement group needs to be referenced when you deploy the Azure VMs that
host the SAP application layer. By using the Azure proximity placement group
associated with the HANA Large Instance unit, you make sure that the Azure VMs are
deployed in close proximity to the HANA Large Instance unit. How proximity placement
groups can be used to locate the SAP application layer in the same Azure datacenter as
Revision 4 hosted HANA Large Instance units is described in Azure Proximity Placement
Groups for optimal network latency with SAP applications.

An additional field in the right column of the header shows the power state of the
HANA Large Instance unit.

7 Note

The power state describes whether the hardware unit is powered on or off. It doesn't
give information about the operating system being up and running. When you restart a
HANA Large Instance unit, you'll experience a short period where the state of the
unit changes to Starting before moving into the state of Started. The state Started
means that the OS is starting up or has been started completely. As a result, after a
restart of the unit, you can't expect to log in to the unit immediately after the
state switches to Started.

If you select 'See more', additional information is shown, including the revision of
the HANA Large Instance stamp the unit was deployed in. See the article What is SAP
HANA on Azure (Large Instances) for the different revisions of HANA Large Instance
stamps.

Check activities of a single HANA Large


Instance unit
Beyond giving an overview of the HANA Large Instance units, you can check the
activities of a particular unit. An activity log could look like:

One of the main activities recorded is restarts of a unit. The data listed includes the
status of the activity, the time stamp when the activity was triggered, the
subscription ID from which the activity was triggered, and the Azure user who
triggered the activity.

Another activity that gets recorded is changes to the unit in the Azure metadata.
Besides restarts, you can see the activity of Write HANAInstances. This type of
activity performs no changes on the HANA Large Instance unit itself, but documents
changes to the metadata of the unit in Azure. In the case listed, we added and deleted
a tag (see the next section).

Add and delete an Azure tag to a HANA Large


Instance unit
You can also add a tag to a HANA Large Instance unit. Tags are assigned the same way
as they are for VMs. As with VMs, the tags exist in the Azure metadata and, for HANA
Large Instances, have the same restrictions as tags for VMs.

Deleting tags also works the same way as with VMs. Both activities, applying and
deleting a tag, are listed in the activity log of the particular HANA Large Instance
unit.
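
Because the tags live in the Azure metadata, you can also manage them outside the
portal. As a sketch using the generic Azure CLI resource command (the resource ID and
tag values are placeholders):

Azure CLI

az resource tag --tags Dept=IT CostCenter=SAP --ids /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HanaOnAzure/hanaInstances/<instance-name>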

Check properties of a HANA Large Instance


unit
The Properties section includes important information that you receive when the
instances are handed over to you. It's where you find all the information that you
might require in support cases, or that you need when setting up storage snapshot
configuration. As such, this section is a collection of data about your instance, the
connectivity of the instance to Azure, and the storage back end. The top of the
section looks like:

You already saw the first few data items in the overview screen. An important portion
of the data is the ExpressRoute circuit ID, which you received as the first deployed
units were handed over. In some support cases, you might be asked for that data. An
important data entry is shown at the bottom of the screenshot: the IP address of the
NFS storage head that isolates your storage to your tenant in the HANA Large Instance
stack. This IP address is also needed when you edit the Configure Azure Application
Consistent Snapshot tool.

As you scroll down in the property pane, you get additional data, like a unique
resource ID for your HANA Large Instance unit, or the subscription ID that was
assigned to the deployment.

Restart a HANA Large Instance unit through


Azure portal
When you initiate a restart of the Linux operating system, there are various
situations where the OS can't finish a restart successfully. Previously, to force a
restart, you needed to open a service request to have Microsoft operations perform a
power restart of the HANA Large Instance unit. The functionality of a power restart of
a HANA Large Instance unit is now integrated into the Azure portal. In the overview
part of the HANA Large Instance unit, you see the button for restart at the top of the
data section.

When you press the restart button, you're asked whether you really want to restart the
unit. After you confirm by selecting "Yes", the unit restarts.

7 Note

In the restart process, you'll experience a short period where the state of the unit
changes to Starting before moving into the state of Started. The state Started means
that the OS is starting up or has been started completely. As a result, after a
restart of the unit, you can't expect to log in to the unit immediately after the
state switches to Started.

) Important

Depending on the amount of memory in your HANA Large Instance unit, a restart and
reboot of the hardware and the operating system can take up to one hour.

Open a support request for HANA large


Instances
From the Azure portal display of HANA Large Instance units, you can also create
support requests specifically for a HANA Large Instance unit. Follow the link New
support request.

To get the service SAP HANA Large Instances listed on the next screen, you might need
to select 'All services' as shown below.
In the list of services, you can find the service SAP HANA Large Instance. When you
choose that service, you can select specific problem types as shown:

Under each of the different problem types, you're offered a selection of problem
subtypes that you need to select to characterize your problem further. After selecting
the subtype, you can name the subject. Once you're done with the selection process,
move to the next step of the creation. In the Solutions section, you're pointed to
documentation about HANA Large Instances, which might provide a solution to your
problem. If you can't find a solution for your problem in the suggested documentation,
go to the next step. There, you're asked whether the issue is with VMs or with HANA
Large Instance units. This information helps direct the support request to the correct
specialists.

After you've answered the questions and provided additional details, you can go to the
next step to review the support request and then submit it.

Next steps
How to monitor SAP HANA (large instances) on Azure
Monitoring and troubleshooting from HANA side
High availability setup in SUSE using the
fencing device
Article • 02/10/2023

In this article, we'll go through the steps to set up high availability (HA) in HANA Large
Instances on the SUSE operating system by using the fencing device.

7 Note

This guide is derived from successfully testing the setup in the Microsoft HANA
Large Instances environment. The Microsoft Service Management team for HANA
Large Instances doesn't support the operating system. For troubleshooting or
clarification on the operating system layer, contact SUSE.

The Microsoft Service Management team does set up and fully support the fencing
device. It can help troubleshoot fencing device problems.

Prerequisites
To set up high availability by using SUSE clustering, you need to:

Provision HANA Large Instances.


Install and register the operating system with the latest patches.
Connect HANA Large Instance servers to the SMT server to get patches and
packages.
Set up a Network Time Protocol (NTP) time server.
Read and understand the latest SUSE documentation on HA setup.

Setup details
This guide uses the following setup:

Operating system: SLES 12 SP1 for SAP


HANA Large Instances: 2xS192 (four sockets, 2 TB)
HANA version: HANA 2.0 SP1
Server names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
Fencing device: iSCSI based
NTP on one of the HANA Large Instance nodes
When you set up HANA Large Instances with HANA system replication, you can request
that the Microsoft Service Management team set up the fencing device. Do this at the
time of provisioning.

If you're an existing customer with HANA Large Instances already provisioned, you can
still get the fencing device set up. Provide the following information to the Microsoft
Service Management team in the service request form (SRF). You can get the SRF
through the Technical Account Manager or your Microsoft contact for HANA Large
Instance onboarding.

Server name and server IP address (for example, myhanaserver1 and 10.35.0.1)
Location (for example, US East)
Customer name (for example, Microsoft)
HANA system identifier (SID) (for example, H11)

After the fencing device is configured, the Microsoft Service Management team will
provide you with the SBD name and IP address of the iSCSI storage. You can use this
information to configure fencing setup.

Follow the steps in the following sections to set up HA by using the fencing device.

Identify the SBD device

7 Note

This section applies only to existing customers. If you're a new customer, the
Microsoft Service Management team will give you the SBD device name, so skip
this section.

1. Modify /etc/iscsi/initiatorname.iscsi to:

iqn.1996-04.de.suse:01:<Tenant><Location><SID><NodeNumber>

Microsoft Service Management provides this string. Modify the file on both nodes.
However, the node number is different on each node.
2. Modify /etc/iscsi/iscsid.conf by setting node.session.timeo.replacement_timeout=5
and node.startup = automatic . Modify the file on both nodes.

3. Run the following discovery command on both nodes.

iscsiadm -m discovery -t st -p <IP address provided by Service


Management>:3260

The results show four sessions.

4. Run the following command on both nodes to sign in to the iSCSI device.

iscsiadm -m node -l

The results show four sessions.


5. Use the following command to run the rescan-scsi-bus.sh rescan script. This script
shows the new disks created for you. Run it on both nodes.

rescan-scsi-bus.sh

The results should show a LUN number greater than zero (for example: 1, 2, and so
on).

6. To get the device name, run the following command on both nodes.

fdisk –l

In the results, choose the device with the size of 178 MiB.

Initialize the SBD device


1. Use the following command to initialize the SBD device on both nodes.

sbd -d <SBD Device Name> create


2. Use the following command on both nodes to check what has been written to the
device.

sbd -d <SBD Device Name> dump

Configure the SUSE HA cluster


1. Use the following command to check whether ha_sles and SAPHanaSR-doc
patterns are installed on both nodes. If they're not installed, install them.

zypper in -t pattern ha_sles


zypper in SAPHanaSR SAPHanaSR-doc

2. Set up the cluster by using either the ha-cluster-init command or the yast2
wizard. In this example, we're using the yast2 wizard. Do this step only on the
primary node.

a. Go to yast2 > High Availability > Cluster.


b. In the dialog that appears about the hawk package installation, select Cancel
because the hawk2 package is already installed.

c. In the dialog that appears about continuing, select Continue.

d. The expected value is the number of nodes deployed (in this case, 2). Select
Next.

e. Add node names, and then select Add suggested files.


f. Select Turn csync2 ON.

g. Select Generate Pre-Shared-Keys.

h. In the pop-up message that appears, select OK.

i. The authentication is performed using the IP addresses and preshared keys in


Csync2. The key file is generated with csync2 -k /etc/csync2/key_hagroup .

Manually copy the file key_hagroup to all members of the cluster after it's
created. Be sure to copy the file from node1 to node2. Then select Next.

j. The default option for Booting is Off. Change it to On so that the pacemaker
service starts on boot. You can make the choice based on your setup
requirements.
k. Select Next, and the cluster configuration is complete.

Set up the softdog watchdog


1. Add the following line to /etc/init.d/boot.local on both nodes.

modprobe softdog

2. Use the following command to update the file /etc/sysconfig/sbd on both nodes.

SBD_DEVICE="<SBD Device Name>"


3. Load the kernel module on both nodes by running the following command.

modprobe softdog

4. Use the following command to ensure that softdog is running on both nodes.

lsmod | grep dog

5. Use the following command to start the SBD device on both nodes.

/usr/share/sbd/sbd.sh start

6. Use the following command to test the SBD daemon on both nodes.

sbd -d <SBD Device Name> list

The results show two entries after configuration on both nodes.


7. Send the following test message to one of your nodes.

sbd -d <SBD Device Name> message <node2> <message>

8. On the second node (node2), use the following command to check the message
status.

sbd -d <SBD Device Name> list

9. To adopt the SBD configuration, update the file /etc/sysconfig/sbd as follows on


both nodes.

SBD_DEVICE="<SBD Device Name>"


SBD_WATCHDOG="yes"
SBD_PACEMAKER="yes"
SBD_STARTMODE="clean"
SBD_OPTS=""

10. Use the following command to start the pacemaker service on the primary node
(node1).

systemctl start pacemaker


If the pacemaker service fails, see the section Scenario 5: Pacemaker service fails
later in this article.

Join the node to the cluster


Run the following command on node2 to let that node join the cluster.

ha-cluster-join

If you receive an error during joining of the cluster, see the section Scenario 6: Node2
can't join the cluster later in this article.

Validate the cluster


1. Use the following commands to check and optionally start the cluster for the first
time on both nodes.

systemctl status pacemaker


systemctl start pacemaker
2. Run the following command to ensure that both nodes are online. You can run it
on any of the nodes of the cluster.

crm_mon

You can also sign in to hawk to check the cluster status: https://<node IP>:7630 .
The default user is hacluster, and the password is linux. If needed, you can change
the password by using the passwd command.

Configure cluster properties and resources


This section describes the steps to configure the cluster resources. In this example, you
set up the following resources. You can configure the rest (if needed) by referencing the
SUSE HA guide.

Cluster bootstrap
Fencing device
Virtual IP address

Do the configuration on the primary node only.

1. Create the cluster bootstrap file and configure it by adding the following text.
sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"

2. Use the following command to add the configuration to the cluster.

crm configure load update crm-bs.txt

3. Configure the fencing device by adding the resource, creating the file, and adding
text as follows.

# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15"

Use the following command to add the configuration to the cluster.

crm configure load update crm-sbd.txt

4. Add the virtual IP address for the resource by creating the file and adding the
following text.

# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"

Use the following command to add the configuration to the cluster.

crm configure load update crm-vip.txt

5. Use the crm_mon command to validate the resources.

The results show the two resources.

You can also check the status at https://<node IP address>:7630/cib/live/state.

Test the failover process


1. To test the failover process, use the following command to stop the pacemaker
service on node1.

service pacemaker stop

The resources fail over to node2.

2. Stop the pacemaker service on node2, and resources fail over to node1.
Here's the status before failover:

Here's the status after failover:

Troubleshooting
This section describes failure scenarios that you might encounter during setup.

Scenario 1: Cluster node not online


If any node doesn't show as online in Cluster Manager, try this procedure to bring it
online.

1. Use the following command to start the iSCSI service.

service iscsid start

2. Use the following command to sign in to that iSCSI node.


iscsiadm -m node -l

The expected output looks like:

sapprdhdb45:~ # iscsiadm -m node -l


Logging in to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260] (multiple)
Login to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260] successful.
Login to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260] successful.
Login to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260] successful.
Login to [iface: default, target: iqn.1992-
08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260] successful.

Scenario 2: Yast2 doesn't show graphical view


The yast2 graphical screen is used to set up the high-availability cluster in this article. If
yast2 doesn't open with the graphical window as shown, and it throws a Qt error, take
the following steps to install the required packages. If it opens with the graphical
window, you can skip the steps.

Here's an example of the Qt error:

Here's an example of the expected output:


1. Make sure that you're logged in as user "root" and have SMT set up to download
and install the packages.

2. Go to yast > Software > Software Management > Dependencies, and then select
Install recommended packages.

7 Note

Perform the steps on both nodes, so that you can access the yast2 graphical
view from both nodes.

The following screenshot shows the expected screen.

3. Under Dependencies, select Install Recommended Packages.

4. Review the changes and select OK.


The package installation proceeds.

5. Select Next.

6. When the Installation Successfully Finished screen appears, select Finish.

7. Use the following commands to install the libqt4 and libyui-qt packages.
zypper -n install libqt4

zypper -n install libyui-qt

Yast2 can now open the graphical view.


Scenario 3: Yast2 doesn't show the high-availability
option
For the high-availability option to be visible on the yast2 control center, you need to
install the other packages.

1. Go to Yast2 > Software > Software Management. Then select Software > Online
Update.

2. Select patterns for the following items. Then select Accept.

SAP HANA server base


C/C++ compiler and tools
High availability
SAP application server base
3. In the list of packages that have been changed to resolve dependencies, select
Continue.
4. On the Performing Installation status page, select Next.

5. When the installation is complete, an installation report appears. Select Finish.


Scenario 4: HANA installation fails with gcc assemblies
error
If the HANA installation fails, you might get the following error.
To fix the problem, install the libgcc_s1 and libstdc++6 libraries as shown in the following
screenshot.

Scenario 5: Pacemaker service fails


The following information appears if the pacemaker service can't start.

sapprdhdb95:/ # systemctl start pacemaker


A dependency job for pacemaker.service failed. See 'journalctl -xn' for
details.

sapprdhdb95:/ # journalctl -xn


-- Logs begin at Thu 2017-09-28 09:28:14 EDT, end at Thu 2017-09-28 21:48:27
EDT. --
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine
unloaded: corosync configuration map
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server
sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine
unloaded: corosync configuration ser
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server
sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine
unloaded: corosync cluster closed pr
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server
sockets
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine
unloaded: corosync cluster quorum se
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine
unloaded: corosync profile loading s
Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [MAIN ] Corosync Cluster
Engine exiting normally
Sep 28 21:48:27 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High
Availability Cluster Manager
-- Subject: Unit pacemaker.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pacemaker.service has failed.
--
-- The result is dependency.

sapprdhdb95:/ # tail -f /var/log/messages


2017-09-28T18:44:29.675814-04:00 sapprdhdb95 corosync[57600]: [QB ]
withdrawing server sockets
2017-09-28T18:44:29.676023-04:00 sapprdhdb95 corosync[57600]: [SERV ]
Service engine unloaded: corosync cluster closed process group service v1.01
2017-09-28T18:44:29.725885-04:00 sapprdhdb95 corosync[57600]: [QB ]
withdrawing server sockets
2017-09-28T18:44:29.726069-04:00 sapprdhdb95 corosync[57600]: [SERV ]
Service engine unloaded: corosync cluster quorum service v0.1
2017-09-28T18:44:29.726164-04:00 sapprdhdb95 corosync[57600]: [SERV ]
Service engine unloaded: corosync profile loading service
2017-09-28T18:44:29.776349-04:00 sapprdhdb95 corosync[57600]: [MAIN ]
Corosync Cluster Engine exiting normally
2017-09-28T18:44:29.778177-04:00 sapprdhdb95 systemd[1]: Dependency failed
for Pacemaker High Availability Cluster Manager.
2017-09-28T18:44:40.141030-04:00 sapprdhdb95 systemd[1]:
[/usr/lib/systemd/system/fstrim.timer:8] Unknown lvalue 'Persistent' in
section 'Timer'
2017-09-28T18:45:01.275038-04:00 sapprdhdb95 cron[57995]:
pam_unix(crond:session): session opened for user root by (uid=0)
2017-09-28T18:45:01.308066-04:00 sapprdhdb95 CRON[57995]:
pam_unix(crond:session): session closed for user root

To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer:

Persistent=true

Scenario 6: Node2 can't join the cluster
The following error appears if there's a problem with joining node2 to the existing
cluster through the ha-cluster-join command.

ERROR: Can’t retrieve SSH keys from <Primary Node>

To fix it:

1. Run the following commands on both nodes.

ssh-keygen -q -f /root/.ssh/id_rsa -C 'Cluster Internal' -N ''


cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

2. Confirm that node2 is added to the cluster.


Next steps
You can find more information on SUSE HA setup in the following articles:

SAP HANA SR Performance Optimized Scenario (SUSE website)


Fencing and fencing devices (SUSE website)
Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 1: Basics (SAP
blog)
Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both
Nodes (SAP blog)
OS backup and restore
Article • 02/10/2023

This article walks through the steps to do an operating system (OS) file-level backup and
restore. The procedure differs depending on parameters like Type I or Type II, Revision 3
or above, location, and so on. Check with Microsoft operations to get the values for
these parameters for your resources.

OS backup and restore for Type II SKUs of


Revision 3 stamps
Refer to this documentation: OS backup and restore for Type II SKUs of Revision 3 stamps.

OS backup and restore for all other SKUs


The information below describes the steps to do an operating system file-level backup
and restore for all SKUs of all Revisions except Type II SKUs of the HANA Large
Instances of Revision 3.

Take a manual backup


Get the latest Microsoft Snapshot Tools for SAP HANA as explained in a series of articles
starting with What is Azure Application Consistent Snapshot tool. Configure and test
them as described in these articles:

Configure Azure Application Consistent Snapshot tool


Test Azure Application Consistent Snapshot tool

This review will prepare you to run backups regularly via crontab, as described in Back
up using Azure Application Consistent Snapshot tool.
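
As an illustrative sketch of such a crontab entry (the installation path, snapshot
prefix, and retention are placeholders; check the linked azacsnap articles for the
options that apply to your setup):

Bash

# Take a storage snapshot of the data volume daily at 00:05
5 0 * * * /home/azacsnap/bin/azacsnap -c backup --volume data --prefix daily_hana --retention 3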

For more information, see these references:

Install Azure Application Consistent Snapshot tool


Configure Azure Application Consistent Snapshot tool
Test Azure Application Consistent Snapshot tool
Back up using Azure Application Consistent Snapshot tool
Obtain details using Azure Application Consistent Snapshot tool
Delete using Azure Application Consistent Snapshot tool
Restore using Azure Application Consistent Snapshot tool
Disaster recovery using Azure Application Consistent Snapshot tool
Troubleshoot Azure Application Consistent Snapshot tool
Tips and tricks for using Azure Application Consistent Snapshot tool

Restore a backup
The restore operation can't be done from the OS itself. You'll need to raise a support
ticket with Microsoft operations. The restore operation requires the HANA Large
Instance (HLI) to be in a powered-off state, so schedule accordingly.

Managed OS snapshots
Azure can automatically take OS backups for your HLI resources. These backups are
taken once daily, and Azure keeps up to the latest three such backups. These backups
are enabled by default for all customers in the following regions:

West US
Australia East
Australia Southeast
South Central US
East US 2

This facility is partially available in the following regions:

East US
North Europe
West Europe

The frequency or retention period of the backups taken by this facility can't be altered. If
a different OS backup strategy is needed for your HLI resources, you may opt out of this
facility by raising a support ticket with Microsoft operations. Then configure Microsoft
Snapshot Tools for SAP HANA to take OS backups by using the instructions provided
earlier in the section, Take a manual backup.

Next steps
Learn how to enable kdump for HANA Large Instances.

kdump for SAP HANA on Azure Large Instances


OS backup and restore for Type II SKUs
of Revision 3 stamps
Article • 02/10/2023

This document describes the steps to perform an operating system file level backup and
restore for the Type II SKUs of the HANA Large Instances of Revision 3.

) Important

This article doesn't apply to Type II SKU deployments in Revision 4 HANA Large
Instance stamps. Boot LUNs of Type II HANA Large Instance units that are deployed
in Revision 4 HANA Large Instance stamps can be backed up with storage snapshots,
as is already the case with Type I SKUs in Revision 3 stamps.

7 Note

The OS backup script uses the xfsdump utility.

This procedure supports a complete root file system backup only, not
incremental backups.
Ensure that while creating a backup, no files are being written to the same
system. Otherwise, files being written during the backup may not be included
in the backup.
ReaR backup is deprecated for Type II SKUs of the HANA Large Instances of
Revision 3.
We've tested this procedure in-house against multiple OS corruption scenarios.
However, since you, as the customer, are solely responsible for the OS, we
recommend you test thoroughly before relying on this documentation for
your scenario.
We've tested this process on SLES OS.
Major version upgrades, such as SLES 12.x to SLES 15.x, aren't supported.
To complete an OS restore with this process, you'll need Microsoft assistance
since the recovery requires console access. Please create a support ticket with
Microsoft to assist in recovery.

How to take a manual backup?


To perform a manual backup:

1. Install the backup tool.

zypper in xfsdump

2. Create a complete backup.

xfsdump -l 0 -f /data1/xfs_dump /

The following screenshot shows a sample manual backup:

3. Important: Also save a copy of the backup on NFS volumes, in case the data1
partition also gets corrupted.

cp /data1/xfs_dump /osbackup/

4. To exclude regular directories and files from the dump, tag them with chattr, and
then run xfsdump with the -e option (see the sketch after this list).

chattr -R +d directory
chattr +d file

Note: It isn't possible to exclude NFS file systems.
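
A short sketch of the exclusion workflow in step 4 (the directory path is a
placeholder):

Bash

# Tag a directory so that xfsdump skips it
chattr -R +d /data1/scratch

# Create a complete backup, honoring the exclusion tags
xfsdump -l 0 -e -f /data1/xfs_dump /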
How to restore a backup?

7 Note

This step requires engaging the Microsoft operations team.


To complete an OS restore with this process, Microsoft assistance is required
since the recovery requires console access. Please create a support ticket with
Microsoft to assist in recovery.
We will be restoring the complete filesystem:

1. Mount OS iso on the system.

2. Enter rescue mode.

3. Mount the data1 partition (or the NFS volume, wherever the dump is stored) in
read/write mode.

mount -o rw /dev/md126p4 /mnt1

4. Mount Root in read/write mode.

mount -o rw /dev/md126p2 /mnt2

5. Restore Filesystem.

xfsrestore -f /mnt1/xfs_dump /mnt2


6. Reboot the system.

reboot

If any post checks fail, please engage the OS vendor and Microsoft for console access.

Post Restore check


1. Ensure the system has complete attributes restored.

Network is up.
NFS volumes are mounted.

2. Ensure RAID is configured; replace /dev/md126 with your RAID device.

mdadm -D /dev/md126

3. Ensure that the RAID disks are synced and the configuration is in a clean state.

RAID disks take some time to sync; syncing may continue for a few minutes
before it's 100% complete.

4. Start HANA DB and verify HANA is operating as expected.

5. Ensure HANA comes up and there are no errors.

hdbinfo

6. If any post checks fail, please engage OS vendor and Microsoft for console access.
Azure Large Instances high availability
for SAP on RHEL
Article • 02/10/2023

7 Note

This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.

In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate
an SAP HANA database failover. You need to have a good understanding of Linux, SAP
HANA, and Pacemaker to complete the steps in this guide.

The following table includes the host names that are used throughout this article. The
code blocks in the article show the commands that need to be run, as well as the output
of those commands. Pay close attention to which node is referenced in each command.

Type Host name Node

Primary host sollabdsm35 node 1

Secondary host sollabdsm36 node 2

Configure your Pacemaker cluster


Before you can begin configuring the cluster, set up SSH key exchange to establish trust
between nodes.

1. Use the following commands to create identical /etc/hosts on both nodes.

root@sollabdsm35 ~]# cat /etc/hosts


127.0.0.1 localhost localhost.azlinux.com
10.60.0.35 sollabdsm35.azlinux.com sollabdsm35 node1
10.60.0.36 sollabdsm36.azlinux.com sollabdsm36 node2
10.20.251.150 sollabdsm36-st
10.20.251.151 sollabdsm35-st
10.20.252.151 sollabdsm36-back
10.20.252.150 sollabdsm35-back
10.20.253.151 sollabdsm36-node
10.20.253.150 sollabdsm35-node

2. Create and exchange the SSH keys.


a. Generate ssh keys.

[root@sollabdsm35 ~]# ssh-keygen -t rsa -b 1024


[root@sollabdsm36 ~]# ssh-keygen -t rsa -b 1024

b. Copy keys to the other hosts for passwordless ssh.

[root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub


sollabdsm35
[root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm36
[root@sollabdsm36 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm35
[root@sollabdsm36 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub
sollabdsm36

3. Disable selinux on both nodes.

[root@sollabdsm35 ~]# vi /etc/selinux/config

...

SELINUX=disabled

[root@sollabdsm36 ~]# vi /etc/selinux/config

...

SELINUX=disabled

4. Reboot the servers and then use the following command to verify the status of
selinux.

[root@sollabdsm35 ~]# sestatus


SELinux status: disabled

[root@sollabdsm36 ~]# sestatus

SELinux status: disabled

5. Configure NTP (Network Time Protocol). The time and time zones for both cluster
nodes must match. Use the following command to open chrony.conf and verify
the contents of the file.

a. The following contents should be added to the config file. Change the actual
values as per your environment.

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org iburst

b. Enable the chrony service.

systemctl enable chronyd

systemctl start chronyd

chronyc tracking

Reference ID : CC0BC90A (voipmonitor.wci.com)

Stratum : 3

Ref time (UTC) : Thu Jan 28 18:46:10 2021

chronyc sources

210 Number of sources = 8

MS Name/IP address Stratum Poll Reach LastRx Last sample

====================================================================
===========

^+ time.nullroutenetworks.c> 2 10 377 1007 -2241us[-2238us] +/-


33ms

^* voipmonitor.wci.com 2 10 377 47 +956us[ +958us] +/- 15ms

^- tick.srs1.ntfo.org 3 10 177 801 -3429us[-3427us] +/- 100ms

6. Update the system.

a. First, install the latest updates on the system before you start to install the SBD
device.

b. Make sure that you have at least version 4.1.1-12.el7_6.26 of the
resource-agents-sap-hana package installed, as documented in Support
Policies for RHEL High Availability Clusters - Management of SAP HANA in a
Cluster.

c. If you don't want a complete update of the system, even though it's
recommended, update the following packages at a minimum:
i. resource-agents-sap-hana
ii. selinux-policy
iii. iscsi-initiator-utils

node1:~ # yum update

7. Install the SAP HANA and RHEL-HA repositories.

subscription-manager repos --list

subscription-manager repos
--enable=rhel-sap-hana-for-rhel-7-server-rpms

subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms

8. Install the Pacemaker, SBD, OpenIPMI, ipmitool, and fence-agents-sbd packages on
all nodes.

yum install pcs sbd fence-agents-sbd.x86_64 OpenIPMI
ipmitool

Configure Watchdog
In this section, you learn how to configure Watchdog. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Make sure that the watchdog daemon is not running on any systems.

[root@sollabdsm35 ~]# systemctl disable watchdog


[root@sollabdsm36 ~]# systemctl disable watchdog
[root@sollabdsm35 ~]# systemctl stop watchdog
[root@sollabdsm36 ~]# systemctl stop watchdog
[root@sollabdsm35 ~]# systemctl status watchdog

● watchdog.service - watchdog daemon

Loaded: loaded (/usr/lib/systemd/system/watchdog.service; disabled;


vendor preset: disabled)

Active: inactive (dead)

Nov 28 23:02:40 sollabdsm35 systemd[1]: Collecting watchdog.service

2. The default Linux watchdog that's installed during installation is the iTCO
watchdog, which isn't supported by UCS and HPE SDFlex systems. Therefore, this
watchdog must be disabled.

a. The wrong watchdog is installed and loaded on the system:

sollabdsm35:~ # lsmod |grep iTCO

iTCO_wdt 13480 0

iTCO_vendor_support 13718 1 iTCO_wdt

b. Unload the wrong driver from the environment:


sollabdsm35:~ # modprobe -r iTCO_wdt iTCO_vendor_support

sollabdsm36:~ # modprobe -r iTCO_wdt iTCO_vendor_support

c. To make sure the driver is not loaded during the next system boot, the driver
must be blocklisted. To blocklist the iTCO modules, add the following to the end
of the 50-blacklist.conf file:

sollabdsm35:~ # vi /etc/modprobe.d/50-blacklist.conf

# unload the iTCO watchdog modules
blacklist iTCO_wdt
blacklist iTCO_vendor_support

d. Copy the file to the secondary host.

sollabdsm35:~ # scp /etc/modprobe.d/50-blacklist.conf sollabdsm36:/etc/modprobe.d/50-blacklist.conf

e. Test whether the ipmi service is started. It's important that the IPMI timer isn't
running. The timer management is done from the SBD pacemaker service.
sollabdsm35:~ # ipmitool mc watchdog get

Watchdog Timer Use: BIOS FRB2 (0x01)

Watchdog Timer Is: Stopped

Watchdog Timer Actions: No action (0x00)

Pre-timeout interval: 0 seconds

Timer Expiration Flags: 0x00

Initial Countdown: 0 sec

Present Countdown: 0 sec


3. By default, the required device /dev/watchdog isn't created.

sollabdsm35:~ # ls -l /dev/watchdog

ls: cannot access /dev/watchdog: No such file or directory

4. Configure the IPMI watchdog.

sollabdsm35:~ # mv /etc/sysconfig/ipmi /etc/sysconfig/ipmi.org

sollabdsm35:~ # vi /etc/sysconfig/ipmi

IPMI_SI=yes
DEV_IPMI=yes
IPMI_WATCHDOG=yes
IPMI_WATCHDOG_OPTIONS="timeout=20 action=reset nowayout=0
panic_wdt_timeout=15"
IPMI_POWEROFF=no
IPMI_POWERCYCLE=no
IPMI_IMB=no

5. Copy the watchdog config file to secondary.

sollabdsm35:~ # scp /etc/sysconfig/ipmi sollabdsm36:/etc/sysconfig/ipmi

6. Enable and start the ipmi service.

[root@sollabdsm35 ~]# systemctl enable ipmi

Created symlink from


/etc/systemd/system/multi-user.target.wants/ipmi.service to
/usr/lib/systemd/system/ipmi.service.

[root@sollabdsm35 ~]# systemctl start ipmi

[root@sollabdsm36 ~]# systemctl enable ipmi

Created symlink from


/etc/systemd/system/multi-user.target.wants/ipmi.service to
/usr/lib/systemd/system/ipmi.service.
[root@sollabdsm36 ~]# systemctl start ipmi

Now the IPMI service is started and the device /dev/watchdog is created, but the
timer is still stopped. Later, SBD manages the watchdog reset and enables the
IPMI timer.

7. Check that the /dev/watchdog exists but is not in use.

[root@sollabdsm35 ~]# ipmitool mc watchdog get


Watchdog Timer Use: SMS/OS (0x04)
Watchdog Timer Is: Stopped
Watchdog Timer Actions: No action (0x00)
Pre-timeout interval: 0 seconds
Timer Expiration Flags: 0x10
Initial Countdown: 20 sec
Present Countdown: 20 sec

[root@sollabdsm35 ~]# ls -l /dev/watchdog


crw------- 1 root root 10, 130 Nov 28 23:12 /dev/watchdog
[root@sollabdsm35 ~]# lsof /dev/watchdog

SBD configuration
In this section, you learn how to configure SBD. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Make sure the iSCSI or FC disk is visible on both nodes. This example uses an FC-
based SBD device. For more information about SBD fencing, see Design Guidance
for RHEL High Availability Clusters - SBD Considerations and Support Policies for
RHEL High Availability Clusters - sbd and fence_sbd

2. The LUN ID must be identical on all nodes.

3. Check multipath status for the sbd device.

multipath -ll
3600a098038304179392b4d6c6e2f4b62 dm-5 NETAPP ,LUN C-Mode
size=1.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 8:0:1:2 sdi 8:128 active ready running
| `- 10:0:1:2 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 8:0:3:2 sdj 8:144 active ready running
`- 10:0:3:2 sdl 8:176 active ready running

4. Create the SBD disc and set up the cluster primitive fencing. This step must be
executed on the first node.

sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 -4 20 -1 10 create

Initializing device /dev/mapper/3600a098038304179392b4d6c6e2f4b62


Creating version 2.1 header on device 4 (uuid:
ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce)

Initializing 255 slots on device 4

Device /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is initialized.

5. Copy the SBD config over to node2.

vi /etc/sysconfig/sbd

SBD_DEVICE="/dev/mapper/3600a09803830417934d6c6e2f4b62"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_DELAY_START=no
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=15
SBD_TIMEOUT_ACTION=flush,reboot
SBD_MOVE_TO_ROOT_CGROUP=auto
SBD_OPTS=

scp /etc/sysconfig/sbd node2:/etc/sysconfig/sbd
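To confirm the file is identical on both nodes, you can compare checksums (a quick sanity check; node2 as used above):

md5sum /etc/sysconfig/sbd
ssh node2 md5sum /etc/sysconfig/sbd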

6. Check that the SBD disk is visible from both nodes.

sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 dump

==Dumping header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62

Header version : 2.1

UUID : ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce

Number of slots : 255


Sector size : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 10

==Header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is


dumped

7. Add the SBD device in the SBD config file.

# SBD_DEVICE specifies the devices to use for exchanging sbd messages
# and to monitor. If specifying more than one path, use ";" as
# separator.
#
SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"

## Type: yesno
## Default: yes
# Whether to enable the pacemaker integration.
SBD_PACEMAKER=yes

Cluster initialization
In this section, you initialize the cluster. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

1. Set up the cluster user password (all nodes).

passwd hacluster

2. Enable the pcsd service on all systems.

systemctl enable pcsd

3. Stop and disable the firewall (all nodes).


systemctl disable firewalld

systemctl mask firewalld

systemctl stop firewalld

4. Start pcsd service.

systemctl start pcsd

5. Run the cluster authentication only from node1.

pcs cluster auth sollabdsm35 sollabdsm36

Username: hacluster

Password:
sollabdsm35.localdomain: Authorized
sollabdsm36.localdomain: Authorized

6. Create the cluster.

pcs cluster setup --start --name hana sollabdsm35 sollabdsm36

7. Check the cluster status.

pcs cluster status

Cluster name: hana

WARNINGS:

No stonith devices and `stonith-enabled` is not false

Stack: corosync

Current DC: sollabdsm35 (version 1.1.20-5.el7_7.2-3c4c782f70) -


partition with quorum
Last updated: Sat Nov 28 20:56:57 2020

Last change: Sat Nov 28 20:54:58 2020 by hacluster via crmd on


sollabdsm35

2 nodes configured

0 resources configured

Online: [ sollabdsm35 sollabdsm36 ]

No resources

Daemon Status:

corosync: active/disabled

pacemaker: active/disabled

pcsd: active/disabled

8. If one node doesn't join the cluster, check whether the firewall is still running.

9. Create and enable the SBD device.

pcs stonith create SBD fence_sbd devices=/dev/mapper/3600a098038303f4c467446447a
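You can verify the resulting stonith resource afterwards (a generic pcs check):

pcs stonith show SBD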

10. Stop the cluster (on all nodes).

pcs cluster stop --all

11. Restart the cluster services (on all nodes).

systemctl stop pcsd


systemctl stop pacemaker
systemctl stop corosync
systemctl enable sbd
systemctl start corosync
systemctl start pacemaker
systemctl start pcsd

12. Corosync must start the SBD service.


systemctl status sbd

● sbd.service - Shared-storage based fencing daemon

Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor


preset: disabled)

Active: active (running) since Wed 2021-01-20 01:43:41 EST; 9min ago

13. Restart the cluster (if not automatically started from pcsd).

pcs cluster start --all

sollabdsm35: Starting Cluster (corosync)...

sollabdsm36: Starting Cluster (corosync)...

sollabdsm35: Starting Cluster (pacemaker)...

sollabdsm36: Starting Cluster (pacemaker)...

14. Enable fencing device settings.

pcs stonith enable SBD --device=/dev/mapper/3600a098038304179392b4d6c6e2f4d65
pcs property set stonith-watchdog-timeout=20
pcs property set stonith-action=reboot

15. Check the new cluster status, now with one resource.

pcs status

Cluster name: hana

Stack: corosync

Current DC: sollabdsm35 (version 1.1.16-12.el7-94ff4df) - partition


with quorum

Last updated: Tue Oct 16 01:50:45 2018


Last change: Tue Oct 16 01:48:19 2018 by root via cibadmin on
sollabdsm35

2 nodes configured

1 resource configured

Online: [ sollabdsm35 sollabdsm36 ]

Full list of resources:

SBD (stonith:fence_sbd): Started sollabdsm35

Daemon Status:

corosync: active/disabled

pacemaker: active/disabled

pcsd: active/enabled

sbd: active/enabled

[root@node1 ~]#

16. Now the IPMI timer must run and the /dev/watchdog device must be opened by
sbd.

ipmitool mc watchdog get

Watchdog Timer Use: SMS/OS (0x44)

Watchdog Timer Is: Started/Running

Watchdog Timer Actions: Hard Reset (0x01)

Pre-timeout interval: 0 seconds

Timer Expiration Flags: 0x10

Initial Countdown: 20 sec

Present Countdown: 19 sec

[root@sollabdsm35 ~] lsof /dev/watchdog

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME

sbd 117569 root 5w CHR 10,130 0t0 323812 /dev/watchdog


17. Check the SBD status.

sbd -d /dev/mapper/3600a098038304445693f4c467446447a list

0 sollabdsm35 clear

1 sollabdsm36 clear

18. Test the SBD fencing by crashing the kernel.

Trigger the Kernel Crash.

echo c > /proc/sysrq-trigger

The system must reboot after 5 minutes (BMC timeout) or after the value
set as panic_wdt_timeout in the /etc/sysconfig/ipmi config file.

The second test to run is to fence a node by using PCS commands.

pcs stonith fence sollabdsm36

19. For the rest of the SAP HANA clustering you can disable fencing by setting:

pcs property set stonith-enabled=false


It is sometimes easier to keep fencing deactivated during setup of the cluster,
because you avoid unexpected reboots of the system.
This parameter must be set to true for productive usage. If this parameter is not
set to true, the cluster won't be supported.

pcs property set stonith-enabled=true

HANA integration into the cluster


In this section, you integrate HANA into the cluster. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.

The default and supported way is to create a performance-optimized scenario where the
database can be switched over directly. Only this scenario is described here in this
document. In this case, we recommend installing one cluster for the QAS system and a
separate cluster for the PRD system. Only in this case is it possible to test all
components before they go into production.

This process builds on the RHEL description at https://access.redhat.com/articles/3004101 .

Steps to follow to configure HSR

Synchronous in-memory (default): Synchronous in memory (mode=syncmem) means the log write is considered successful when the log entry has been written to the log volume of the primary and sending the log has been acknowledged by the secondary instance after copying to memory. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. Data loss can occur when primary and secondary fail at the same time while the secondary system is connected, or when a takeover is executed while the secondary system is disconnected. This option provides better performance because it is not necessary to wait for disk I/O on the secondary instance, but it is more vulnerable to data loss.

Synchronous: Synchronous (mode=sync) means the log write is considered successful when the log entry has been written to the log volume of the primary and the secondary instance. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. No data loss occurs in this scenario as long as the secondary system is connected. Data loss can occur when a takeover is executed while the secondary system is disconnected. Additionally, this replication mode can run with a full sync option. This means that log write is successful when the log buffer has been written to the log file of the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure), the primary system suspends transaction processing until the connection to the secondary system is reestablished. No data loss occurs in this scenario. You can set the full sync option for system replication only with the parameter [system_replication]/enable_full_sync. For more information on how to enable the full sync option, see Enable Full Sync Option for System Replication.

Asynchronous: Asynchronous (mode=async) means the primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it is more vulnerable to data loss. Data changes may be lost on takeover.

1. These are the actions to execute on node1 (primary).

a. Make sure that the database log mode is set to normal.

su - hr2adm

hdbsql -u system -p $YourPass -i 00 "select value from "SYS"."M_INIFILE_CONTENTS" where key='log_mode'"

VALUE

"normal"

b. SAP HANA system replication will only work after initial backup has been
performed. The following command creates an initial backup in the /tmp/
directory. Select a proper backup filesystem for the database.

hdbsql -i 00 -u system -p $YourPass "BACKUP DATA USING FILE ('/tmp/backup')"

Backup files were created:

ls -l /tmp

total 2031784
-rw-r----- 1 hr2adm sapsys 155648 Oct 26 23:31 backup_databackup_0_1
-rw-r----- 1 hr2adm sapsys 83894272 Oct 26 23:31 backup_databackup_2_1
-rw-r----- 1 hr2adm sapsys 1996496896 Oct 26 23:31 backup_databackup_3_1

c. Back up all database containers of this database.

hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/sydb')"

hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA FOR HR2 USING FILE ('/tmp/rh2')"

d. Enable the HSR process on the source system.

hdbnsutil -sr_enable --name=DC1

nameserver is active, proceeding ...

successfully enabled system as system replication source site

done.

e. Check the status of the primary system.

hdbnsutil -sr_state

System Replication State

online: true

mode: primary

operation mode: primary

site id: 1
site name: DC1

is source system: true

is secondary/consumer system: false

has secondaries/consumers attached: false

is a takeover active: false

Host Mappings:

~~~~~~~~~~~~~~

Site Mappings:

~~~~~~~~~~~~~~

DC1 (primary/)

Tier of DC1: 1

Replication mode of DC1: primary

Operation mode of DC1:

done.

2. These are the actions to execute on node2 (secondary).


a. Stop the database.

su - hr2adm

sapcontrol -nr 00 -function StopSystem

b. For SAP HANA2.0 only, copy the SAP HANA system PKI SSFS_HR2.KEY and
SSFS_HR2.DAT files from primary node to secondary node.

scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT

c. Enable secondary as the replication site.

su - hr2adm

hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=00 --replicationMode=syncmem --name=DC2

adding site ...

--operationMode not set; using default from global.ini/[system_replication]/operation_mode: logreplay

nameserver node2:30001 not responding.

collecting information ...

updating local ini files ...

done.

d. Start the database.

sapcontrol -nr 00 -function StartSystem

e. Check the database state.

hdbnsutil -sr_state

~~~~~~~~~
System Replication State

online: true

mode: syncmem

operation mode: logreplay

site id: 2

site name: DC2


is source system: false

is secondary/consumer system: true

has secondaries/consumers attached: false

is a takeover active: false

active primary site: 1

primary primarys: node1

Host Mappings:

node2 -> [DC2] node2

node2 -> [DC1] node1

Site Mappings:

DC1 (primary/primary)

|---DC2 (syncmem/logreplay)

Tier of DC1: 1

Tier of DC2: 2

Replication mode of DC1: primary

Replication mode of DC2: syncmem

Operation mode of DC1: primary

Operation mode of DC2: logreplay

Mapping: DC1 -> DC2

done.
~~~~~~~~~~~~~~
3. It is also possible to get more information on the replication status:

hr2adm@node1:/usr/sap/HR2/HDB00> python /usr/sap/HR2/HDB00/exe/python_support/systemReplicationStatus.py

| Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
| SYSTEMDB | node1 | 30001 | nameserver | 1 | 1 | DC1 | node2 | 30001 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |
| HR2 | node1 | 30007 | xsengine | 2 | 1 | DC1 | node2 | 30007 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |
| HR2 | node1 | 30003 | indexserver | 3 | 1 | DC1 | node2 | 30003 | 2 | DC2 | YES | SYNCMEM | ACTIVE | |

status system replication site "2": ACTIVE

overall system replication status: ACTIVE

Local System Replication State

mode: PRIMARY

site id: 1

site name: DC1

Log Replication Mode Description

For more information about log replication mode, see the official SAP documentation .

Network Setup for HANA System Replication

To ensure that the replication traffic uses the right VLAN, configure it properly in
global.ini. If you skip this step, HANA uses the Access VLAN for replication, which
might be undesired.
The following examples show the host name resolution configuration for system
replication to a secondary site. Three distinct networks can be identified:

Public network with addresses in the range of 10.0.1.*

Network for internal SAP HANA communication between hosts at each site:
192.168.1.*

Dedicated network for system replication: 10.5.1.*

In the first example, the [system_replication_communication]listeninterface parameter has been set to .global and only the hosts of the neighboring replicating site are specified.

In the following example, the [system_replication_communication]listeninterface parameter has been set to .internal and all hosts of both sites are specified.

For more information, see Network Configuration for SAP HANA System Replication .

For system replication, it isn't necessary to edit the /etc/hosts file. Instead, internal
('virtual') host names must be mapped to IP addresses in the global.ini file to create a
dedicated network for system replication. The syntax for this is as follows:

global.ini

[system_replication_hostname_resolution]

<ip-address_site>=<internal-host-name_site>
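As an illustration, a minimal global.ini fragment on the primary site might look like the following sketch. The .global setting and the mapping syntax come from the text above; the address 10.5.1.2 (from the example replication network) and the host name node2 are illustrative:

[system_replication_communication]
listeninterface = .global

[system_replication_hostname_resolution]
10.5.1.2 = node2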

Configure SAP HANA in a Pacemaker cluster


In this section, you learn how to configure SAP HANA in a Pacemaker cluster. This
section uses the same two hosts, sollabdsm35 and sollabdsm36 , referenced at the
beginning of this article.

Ensure you have met the following prerequisites:

Pacemaker cluster is configured according to documentation and has proper and


working fencing

SAP HANA startup on boot is disabled on all cluster nodes as the start and stop
will be managed by the cluster

SAP HANA system replication and takeover using tools from SAP are working
properly between cluster nodes
SAP HANA contains a monitoring account that can be used by the cluster from both
cluster nodes (a sketch of creating one follows after this list)

Both nodes are subscribed to the 'High-availability' and 'RHEL for SAP HANA' (RHEL
6, RHEL 7) channels

In general, execute all pcs commands only from one node, because the CIB
is automatically updated from the pcs shell.

More info on quorum policy
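As an illustration of the monitoring account prerequisite above, a sketch of creating such an account follows. The user name hamonitor, the password placeholder, and the exact grants are illustrative; follow the documentation of your resource agents for the required privileges:

# create a HANA user the cluster can use for monitoring (names are illustrative)
su - hr2adm
hdbsql -i 00 -u system -p $YourPass "CREATE USER hamonitor PASSWORD <Password> NO FORCE_FIRST_PASSWORD_CHANGE"
hdbsql -i 00 -u system -p $YourPass "GRANT MONITORING TO hamonitor"
hdbsql -i 00 -u system -p $YourPass "ALTER USER hamonitor DISABLE PASSWORD LIFETIME"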

Steps to configure
1. Configure pcs.

[root@node1 ~]# pcs property unset no-quorum-policy (optional - only if it was set before)
[root@node1 ~]# pcs resource defaults resource-stickiness=1000
[root@node1 ~]# pcs resource defaults migration-threshold=5000

2. Configure corosync. For more information, see How can I configure my RHEL 7
High Availability Cluster with pacemaker and corosync .

cat /etc/corosync/corosync.conf

totem {
    version: 2
    secauth: off
    cluster_name: hana
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1.localdomain
        nodeid: 1
    }

    node {
        ring0_addr: node2.localdomain
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

3. Create the cloned SAPHanaTopology resource. The SAPHanaTopology resource
gathers the status and configuration of SAP HANA System Replication on each node.
SAPHanaTopology requires the following attributes to be configured.

pcs resource create SAPHanaTopology_HR2_00 SAPHanaTopology SID=HR2 InstanceNumber=00 \
op start timeout=600 \
op stop timeout=300 \
op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true
SID: SAP System Identifier (SID) of the SAP HANA installation. Must be the same
for all nodes.

InstanceNumber: 2-digit SAP Instance Identifier.

Resource status

pcs resource show SAPHanaTopology_HR2_00

Clone: SAPHanaTopology_HR2_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HR2_00 (class=ocf provider=heartbeat
type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HR2
Operations: monitor interval=60 timeout=60
(SAPHanaTopology_HR2_00-monitor-interval-60)
start interval=0s timeout=180
(SAPHanaTopology_HR2_00-start-interval-0s)
stop interval=0s timeout=60
(SAPHanaTopology_HR2_00-stop-interval-0s)

4. Create the Primary/Secondary SAPHana resource.

The SAPHana resource is responsible for starting, stopping, and relocating the
SAP HANA database. This resource must be run as a Primary/Secondary
cluster resource. The resource has the following attributes.

SID (required, no default): SAP System Identifier (SID) of the SAP HANA installation. Must be the same for all nodes.

InstanceNumber (required, no default): 2-digit SAP Instance identifier.

PREFER_SITE_TAKEOVER (optional, default yes): Should the cluster prefer to switch over to the secondary instance instead of restarting the primary locally? ("no": prefer restarting locally; "yes": prefer takeover to the remote site)

AUTOMATED_REGISTER (optional, default FALSE): Should the former SAP HANA primary be registered as secondary after takeover and DUPLICATE_PRIMARY_TIMEOUT? ("false": no, manual intervention is needed; "true": yes, the former primary is registered by the resource agent as secondary)

DUPLICATE_PRIMARY_TIMEOUT (optional, default 7200): Time difference (in seconds) needed between primary time stamps if a dual-primary situation occurs. If the time difference is less than the time gap, the cluster holds one or both instances in a "WAITING" status. This gives an admin a chance to react to a failover. A failed former primary is registered after the time difference has passed. After this registration to the new primary, all data is overwritten by the system replication.

5. Create the HANA resource.

pcs resource create SAPHana_HR2_00 SAPHana SID=HR2 InstanceNumber=00 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 \
AUTOMATED_REGISTER=true op start timeout=3600 \
op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 \
op demote timeout=3600 \
master meta notify=true clone-max=2 clone-node-max=1 interleave=true

pcs resource show SAPHana_HR2_00-primary

Primary: SAPHana_HR2_00-primary
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Resource: SAPHana_HR2_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200
InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HR2
Operations: demote interval=0s timeout=320 (SAPHana_HR2_00-demote-
interval-0s)
monitor interval=120 timeout=60 (SAPHana_HR2_00-monitor-
interval-120)
monitor interval=121 role=Secondary timeout=60
(SAPHana_HR2_00-monitor-
interval-121)
monitor interval=119 role=Primary timeout=60
(SAPHana_HR2_00-monitor-
interval-119)
promote interval=0s timeout=320 (SAPHana_HR2_00-promote-
interval-0s)
start interval=0s timeout=180 (SAPHana_HR2_00-start-
interval-0s)
stop interval=0s timeout=240 (SAPHana_HR2_00-stop-
interval-0s)

crm_mon -A1

....

2 nodes configured

5 resources configured

Online: [ node1.localdomain node2.localdomain ]

Active resources:

.....

Node Attributes:

* Node node1.localdomain:

+ hana_hr2_clone_state : PROMOTED

+ hana_hr2_remoteHost : node2

+ hana_hr2_roles : 4:P:primary1:primary:worker:primary

+ hana_hr2_site : DC1

+ hana_hr2_srmode : syncmem

+ hana_hr2_sync_state : PRIM

+ hana_hr2_version : 2.00.033.00.1535711040

+ hana_hr2_vhost : node1

+ lpa_hr2_lpt : 1540866498

+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:

+ hana_hr2_clone_state : DEMOTED

+ hana_hr2_op_mode : logreplay

+ hana_hr2_remoteHost : node1

+ hana_hr2_roles : 4:S:primary1:primary:worker:primary

+ hana_hr2_site : DC2

+ hana_hr2_srmode : syncmem

+ hana_hr2_sync_state : SOK

+ hana_hr2_version : 2.00.033.00.1535711040

+ hana_hr2_vhost : node2

+ lpa_hr2_lpt : 30

+ primary-SAPHana_HR2_00 : 100

6. Create the virtual IP address resource. The cluster will contain a virtual IP address to
reach the primary instance of SAP HANA. Below is an example command to create an
IPaddr2 resource with IP 10.7.0.84/24.

pcs resource create vip_HR2_00 IPaddr2 ip="10.7.0.84"


pcs resource show vip_HR2_00

Resource: vip_HR2_00 (class=ocf provider=heartbeat type=IPaddr2)

Attributes: ip=10.7.0.84

Operations: monitor interval=10s timeout=20s


(vip_HR2_00-monitor-interval-10s)

start interval=0s timeout=20s (vip_HR2_00-start-interval-0s)

stop interval=0s timeout=20s (vip_HR2_00-stop-interval-0s)

7. Create constraints.

For correct operation, we need to ensure that SAPHanaTopology resources are
started before starting the SAPHana resources, and also that the virtual IP
address is present on the node where the primary resource of SAPHana is
running. To achieve this, the following two constraints need to be created.

pcs constraint order SAPHanaTopology_HR2_00-clone then SAPHana_HR2_00-primary symmetrical=false
pcs constraint colocation add vip_HR2_00 with primary SAPHana_HR2_00-primary 2000
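You can list the constraints afterwards to verify them (a generic pcs check):

pcs constraint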

Testing the manual move of the SAPHana resource to another node
(SAP HANA takeover by cluster)

To test the move of the SAPHana resource from one node to another, use the
command below. Note that the option --primary should not be used when running the
following command because of how the SAPHana resource works internally.

pcs resource move SAPHana_HR2_00-primary

After each pcs resource move command invocation, the cluster creates location
constraints to achieve the move of the resource. These constraints must be removed to
allow automatic failover in the future. To remove them, you can use the following
command.

pcs resource clear SAPHana_HR2_00-primary


crm_mon -A1
Node Attributes:
* Node node1.localdomain:
+ hana_hr2_clone_state : DEMOTED
+ hana_hr2_remoteHost : node2
+ hana_hr2_roles : 2:P:primary1::worker:
+ hana_hr2_site : DC1
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : PRIM
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node1
+ lpa_hr2_lpt : 1540867236
+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:
+ hana_hr2_clone_state : PROMOTED
+ hana_hr2_op_mode : logreplay
+ hana_hr2_remoteHost : node1
+ hana_hr2_roles : 4:S:primary1:primary:worker:primary
+ hana_hr2_site : DC2
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : SOK
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node2
+ lpa_hr2_lpt : 1540867311
+ primary-SAPHana_HR2_00 : 100

Log in to HANA as verification.

Demoted host:

hdbsql -i 00 -u system -p $YourPass -n 10.7.0.82

Result:

* -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused (10.7.0.82:30015))

Promoted host:

hdbsql -i 00 -u system -p $YourPass -n 10.7.0.84

Welcome to the SAP HANA Database interactive terminal.

Type: \h for help with commands

\q to quit

hdbsql HR2=>

DB is online

With the option AUTOMATED_REGISTER=false, you cannot switch back and forth.

If this option is set to false, you must re-register the node:

hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 --replicationMode=syncmem --name=DC1

Now node2, which was the primary, acts as the secondary host.

Consider setting this option to true to automate the registration of the demoted host.

pcs resource update SAPHana_HR2_00-primary AUTOMATED_REGISTER=true


pcs cluster node clear node1

Whether you prefer automatic registering depends on the customer scenario.
Automatically reregistering the node after a takeover is easier for the operations
team. However, you may want to register the node manually in order to first run
additional tests to make sure everything works as you expect.

References
1. Automated SAP HANA System Replication in Scale-Up in pacemaker cluster
2. Support Policies for RHEL High Availability Clusters - Management of SAP HANA in
a Cluster
3. Setting up Pacemaker on RHEL in Azure - Azure Virtual Machines
4. Azure HANA Large Instances control through Azure portal - Azure Virtual
Machines
kdump for SAP HANA on Azure Large
Instances
Article • 02/10/2023

In this article, we'll walk through enabling the kdump service on Azure HANA Large
Instances (HLI) Type I and Type II.

Configuring and enabling kdump is needed to troubleshoot system crashes that don't
have a clear cause. Sometimes a system crash cannot be explained by a hardware or
infrastructure problem. In such cases, an operating system or application may have
caused the problem. kdump will allow SUSE to determine the reason for the system
crash.

Supported SKUs
Hana Large Instance type OS vendor OS package version SKU

Type I SuSE SLES 12 SP3 S224m

Type I SuSE SLES 12 SP4 S224m

Type I SuSE SLES 12 SP2 S72

Type I SuSE SLES 12 SP2 S72m

Type I SuSE SLES 12 SP3 S72m

Type I SuSE SLES 12 SP2 S96

Type I SuSE SLES 12 SP3 S96

Type I SuSE SLES 12 SP2 S192

Type I SuSE SLES 12 SP3 S192

Type I SuSE SLES 12 SP4 S192

Type I SuSE SLES 12 SP2 S192m

Type I SuSE SLES 12 SP3 S192m

Type I SuSE SLES 12 SP4 S192m

Type I SuSE SLES 12 SP2 S144

Type I SuSE SLES 12 SP3 S144



Type I SuSE SLES 12 SP2 S144m

Type I SuSE SLES 12 SP3 S144m

Type II SuSE SLES 12 SP2 S384

Type II SuSE SLES 12 SP3 S384

Type II SuSE SLES 12 SP4 S384

Type II SuSE SLES 12 SP2 S384xm

Type II SuSE SLES 12 SP3 S384xm

Type II SuSE SLES 12 SP4 S384xm

Type II SuSE SLES 12 SP2 S576m

Type II SuSE SLES 12 SP3 S576m

Type II SuSE SLES 12 SP4 S576m

Prerequisites
The kdump service uses the /var/crash directory to write dumps. Make sure the
partition corresponding to this directory has sufficient space to accommodate
dumps.
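For example, a quick check of the available space (a generic command, not specific to HLI):

df -h /var/crash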

Setup details
The script to enable kdump can be found in the Azure sap-hana-tools on GitHub

7 Note

This script is made based on our lab setup. You will need to contact your OS vendor
for any further tuning. A separate logical unit number (LUN) will be provisioned for
new and existing servers for saving the dumps. A script will take care of configuring
the file system out of the LUN. Microsoft won't be responsible for analyzing the
dump. You will need to open a ticket with your OS vendor to have it analyzed.

Run this script on your HANA Large Instance by using the following command:
7 Note

Sudo privileges are needed to run this command.

Bash

sudo bash enable-kdump.sh

If the command's output shows kdump is successfully enabled, reboot the system
to apply the changes.

If the command's output shows that an operation failed, then the kdump service isn't
enabled. Refer to the Support issues section below.
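Independently of the script output, you can confirm the service state after the reboot (a generic systemd check; the service name may vary by distribution):

systemctl status kdump.service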

Test kdump

7 Note

The following operation will trigger a kernel crash and system reboot.

Trigger a kernel crash

Bash

echo c > /proc/sysrq-trigger

After the system reboots successfully, check the /var/crash directory for kernel
crash logs.

If /var/crash contains a directory with the current date, kdump is successfully
enabled.

Support issues
If the script fails with an error, or kdump isn't enabled, raise a service request with the
Microsoft support team. Include the following details:

HLI subscription ID

Server name
OS vendor

OS version

Kernel version

For more information, see configuring the kdump .

Next steps
Learn about operating system upgrades on HANA Large Instances.

Operating system upgrades


Operating System Upgrade
Article • 02/10/2023

This article describes the details of operating system (OS) upgrades on HANA Large
Instances (HLI), otherwise known as BareMetal Infrastructure.

7 Note

This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.

7 Note

Upgrading the OS is your responsibility. Microsoft operations support can guide


you in key areas of the upgrade, but consult your operating system vendor as well
when planning an upgrade.

During HLI provisioning, the Microsoft operations team installs the operating system.
You're required to maintain the operating system. For example, you need to do the
patching, tuning, upgrading, and so on, on the HLI. Before you make major changes to
the operating system, for example, upgrade SP1 to SP2, contact the Microsoft
Operations team by opening a support ticket. They will consult with you. We
recommend opening this ticket at least one week before the upgrade.

Include in your ticket:

Your HLI subscription ID.


Your server name.
The patch level you're planning to apply.
The date you're planning this change.

For the support matrix of the different SAP HANA versions with the different Linux
versions, see SAP Note #2235581 .

Known issues
There are a couple of known issues with the upgrade:
On Type II class SKUs, the software foundation software (SFS) is removed
during the OS upgrade. You'll need to reinstall the compatible SFS after the OS
upgrade is complete.
Ethernet card drivers (eNIC and fNIC) are rolled back to an older version. You'll
need to reinstall the compatible version of the drivers after the upgrade.

SAP HANA Large Instance (Type I)


recommended configuration
The OS configuration can drift from the recommended settings over time. This drift can
occur because of patching, system upgrades, and other changes you may make.
Microsoft identifies updates needed to ensure HANA Large Instances are optimally
configured for the best performance and resiliency. The following instructions outline
recommendations that address network performance, system stability, and optimal
HANA performance.

Compatible eNIC/fNIC driver versions


To have proper network performance and system stability, ensure the appropriate OS-
specific version of eNIC and fNIC drivers are installed per the following compatibility
table (This table has the latest compatible driver version). Servers are delivered to
customers with compatible versions. However, drivers can get rolled back to default
versions during OS/kernel patching. Ensure the appropriate driver version is running
post OS/kernel patching operations.

OS Vendor OS Package Version Firmware Version eNIC Driver fNIC Driver

SuSE SLES 12 SP2 3.2.3i 2.3.0.45 1.6.0.37

SuSE SLES 12 SP3 3.2.3i 2.3.0.43 1.6.0.36

SuSE SLES 12 SP4 3.2.3i 4.0.0.14 2.0.0.63

SuSE SLES 12 SP5 3.2.3i 4.0.0.14 2.0.0.63

Red Hat RHEL 7.6 3.2.3i 3.1.137.5 2.0.0.50

SuSE SLES 12 SP4 4.1.1b 4.0.0.6 2.0.0.60

SuSE SLES 12 SP5 4.1.1b 4.0.0.6 2.0.0.59

SuSE SLES 15 SP1 4.1.1b 4.0.0.8 2.0.0.60

SuSE SLES 15 SP2 4.1.1b 4.0.0.8 2.0.0.60



Red Hat RHEL 7.6 4.1.1b 4.0.0.8 2.0.0.60

Red Hat RHEL 8.2 4.1.1b 4.0.0.8 2.0.0.60

SuSE SLES 12 SP4 4.1.3d 4.0.0.13 2.0.0.69

SuSE SLES 12 SP5 4.1.3d 4.0.0.13 2.0.0.69

SuSE SLES 15 SP1 4.1.3d 4.0.0.13 2.0.0.69

Red Hat RHEL 8.2 4.1.3d 4.0.0.13 2.0.0.69

Commands for driver upgrade and to clean old rpm packages

Command to check existing installed drivers

rpm -qa | grep -E "enic|fnic"

Delete existing eNIC/fNIC rpm

rpm -e <old-rpm-package>

Install recommended eNIC/fNIC driver packages

rpm -ivh <enic/fnic.rpm>

Commands to confirm installation

modinfo enic
modinfo fnic
Steps for eNIC/fNIC driver installation during OS upgrade:

1. Upgrade the OS version.
2. Remove the old rpm packages.
3. Install the compatible eNIC/fNIC drivers per the installed OS version.
4. Reboot the system.
5. After the reboot, check the eNIC/fNIC version.
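For the last step, the running driver versions can be read from the module information; the version: field is standard modinfo output:

modinfo enic | grep -i '^version'
modinfo fnic | grep -i '^version'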

SuSE HLIs GRUB update failure


SAP on Azure HANA Large Instances (Type I) can be in a non-bootable state after an
upgrade. The following procedure fixes this issue.

Execution Steps

Execute the multipath -ll command.
Get the logical unit number (LUN) ID, or use the command: fdisk -l | grep mapper
Update the /etc/default/grub_installdevice file with the line /dev/mapper/<LUN ID> .
Example: /dev/mapper/3600a09803830372f483f495242534a56

7 Note

The LUN ID varies from server to server.

Disable Error Detection And Correction


Error Detection And Correction (EDAC) modules help detect and correct memory errors.
However, the underlying HLI Type I hardware already detects and corrects memory
errors. Enabling the same feature at the hardware and OS levels can cause conflicts and
lead to unplanned shutdowns of the server. We recommend disabling the EDAC
modules from the OS.

Execution Steps

Check whether the EDAC modules are enabled. If an output is returned from the
following command, the modules are enabled.

lsmod | grep -i edac


Disable the modules by appending the following lines to the file
/etc/modprobe.d/blacklist.conf

blacklist sb_edac
blacklist edac_core

A reboot is required for the changes to take place. After reboot, execute the lsmod
command again and verify the modules aren't enabled.

Kernel parameters
Make sure the correct settings for transparent_hugepage , numa_balancing ,
processor.max_cstate , ignore_ce , and intel_idle.max_cstate are applied.

intel_idle.max_cstate=1
processor.max_cstate=1
transparent_hugepage=never
numa_balancing=disable
mce=ignore_ce

Execution Steps

Add these parameters to the GRUB_CMDLINE_LINUX line in the file /etc/default/grub :

intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce

Create a new grub file.

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot your system.
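After the reboot, you can verify that the parameters are active by inspecting the kernel command line (a generic check):

cat /proc/cmdline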

Next steps
Learn to set up an SMT server for SUSE Linux.
Set up SMT server for SUSE Linux
Set up SMT server for SUSE Linux
Article • 02/10/2023

In this article, we'll walk through the steps of setting up an SMT server for SAP HANA on
Azure Large Instances, otherwise known as BareMetal Infrastructure.

Large Instances of SAP HANA don't have direct connectivity to the internet. As a result,
it isn't straightforward to register such a unit with the operating system provider and to
download and apply updates. A solution for SUSE Linux is to set up an SMT server in an
Azure virtual machine (VM). You'll host the virtual machine in an Azure virtual network
connected to the HANA Large Instance (HLI). With the SMT server in place, the HANA
Large Instance can register and download updates.

For more information on SUSE, see their Subscription Management Tool for SLES 12
SP2 .

Prerequisites
To install an SMT server for HANA Large Instances, you'll first need:

An Azure virtual network connected to the HANA Large Instance ExpressRoute


circuit.
A SUSE account associated with an organization. The organization should have a
valid SUSE subscription.

Install SMT server on an Azure virtual machine


1. Sign in to the SUSE Customer Center . Go to Organization > Organization
Credentials. In that section, you should find the credentials necessary to set up the
SMT server.

2. Install a SUSE Linux VM in the Azure virtual network. To deploy the virtual machine,
take an SLES 12 SP2 gallery image of Azure (select BYOS SUSE image). In the
deployment process, don't define a DNS name, and don't use static IP addresses.
The deployed virtual machine has the internal IP address in the Azure virtual
network of 10.34.1.4. The name of the virtual machine is smtserver. After the
installation, check connectivity to the HANA Large Instances. Depending on how
you organized name resolution, you might need to configure resolution of the
HANA Large Instances in etc/hosts of the Azure virtual machine.

3. Add a disk to the virtual machine. You'll use this disk to hold the updates; the boot
disk itself could be too small. Here, the disk is mounted to /srv/www/htdocs, as
shown in the following screenshot. A 100-GB disk should suffice.
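A sketch of preparing that disk follows. The device name /dev/sdc is an assumption and depends on how the disk is attached; verify it before formatting:

# create a file system on the new data disk and mount it at /srv/www/htdocs
mkfs.xfs /dev/sdc
mkdir -p /srv/www/htdocs
echo "/dev/sdc /srv/www/htdocs xfs defaults 0 2" >> /etc/fstab
mount /srv/www/htdocs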

4. Sign in to the HANA Large Instances; maintain /etc/hosts. Check whether you can
reach the Azure virtual machine that will run the SMT server over the network.

5. Sign in to the Azure virtual machine that will run the SMT server. If you're using
putty to sign in to the virtual machine, run this sequence of commands in your
bash window:

cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc

6. Restart your bash to activate the settings. Then start YAST.

7. Connect your VM (smtserver) to the SUSE site.


smtserver:~ # SUSEConnect -r <registration code> -e <email address> --url https://scc.suse.com
Registered SLES_SAP 12.2 x86_64
To server: https://scc.suse.com
Using E-Mail: email address
Successfully registered system.

8. After the virtual machine is connected to the SUSE site, install the SMT packages.
Use the following putty command to install the SMT packages.

smtserver:~ # zypper in smt


Refreshing service
'SUSE_Linux_Enterprise_Server_for_SAP_Applications_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

You can also use the YAST tool to install the SMT packages. In YAST, go to
Software Maintenance, and search for smt. Select smt, which switches
automatically to yast2-smt.

Accept the selection for installation on the smtserver.

9. After the installation completes, go to the SMT server configuration. Enter the
organizational credentials from the SUSE Customer Center you retrieved earlier.
Also enter your Azure virtual machine hostname as the SMT Server URL. In this
example, it's https://smtserver.

10. Now test whether the connection to the SUSE Customer Center works. As you see
in the following screenshot, in this example, it did work.

11. After the SMT setup starts, provide a database password. Because it's a new
installation, you should define that password as shown in the following screenshot.

12. Create a certificate.

At the end of the configuration, it might take a few minutes to run the
synchronization check. After the installation and configuration of the SMT server,
you should find the directory repo under the mount point /srv/www/htdocs/. There
are also some subdirectories under the repo.

13. Restart the SMT server and its related services with these commands.

rcsmt restart
systemctl restart smt.service
systemctl restart apache2

Download packages onto the SMT server


1. After all the services are restarted, select the appropriate packages in SMT
Management by using YAST. The package selection depends on the operating
system image of the HANA Large Instance server. The package selection doesn't
depend on the SLES release or version of the virtual machine running the SMT
server. The following screenshot shows an example of the selection screen.

2. Start the initial copy of the selected packages to the SMT server you set up. This copy
is triggered in the shell by using the command smt-mirror.
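For reference, the trigger is simply the following command, run as root on the SMT server:

smtserver:~ # smt-mirror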

The packages should be copied into the directories created under the mount point
/srv/www/htdocs. This process can take an hour or more, depending on how many
packages you select. As this process finishes, move to the SMT client setup.

Set up the SMT client on HANA Large Instances


The client or clients in this case are the HANA Large Instances. The SMT server setup
copied the script clientSetup4SMT.sh into the Azure virtual machine.
Copy that script over to the HANA Large Instance you want to connect to your SMT
server. Start the script with the -h option, and give the name of your SMT server as a
parameter. In this example, the name is smtserver.
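The invocation then looks along these lines (run from wherever you copied the script on the HLI; smtserver is the example name from above):

./clientSetup4SMT.sh -h smtserver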

It's possible that the client successfully loads the certificate from the server while the
registration still fails, as shown in the following screenshot.


If the registration fails, see SUSE support document , and run the steps described
there.

) Important

For the server name, provide the name of the virtual machine (in this case,
smtserver), without the fully qualified domain name.

After running these steps, run the following command on the HANA Large Instance:

SUSEConnect --cleanup

7 Note

Wait a few minutes after that step. If you run clientSetup4SMT.sh immediately, you
might get an error.

If you find a problem you need to fix based on the steps of the SUSE article, restart
clientSetup4SMT.sh on the HANA Large Instance. Now it should finish successfully.


You configured the SMT client of the HLI to connect to the SMT server installed on the
Azure VM. Now use "zypper up" or "zypper in" to install OS updates or other packages
on the HANA Large Instances. You can only get updates that you previously
downloaded onto the SMT server.

Next steps
Learn about migrating SAP HANA on Azure Large Instance to Azure Virtual Machines.

SAP HANA on Azure Large Instance migration to Azure Virtual Machines


SAP HANA on Azure Large Instance
migration to Azure Virtual Machines
Article • 02/10/2023

This article describes possible Azure Large Instance deployment scenarios and offers a
planning and migration approach with minimized transition downtime.

Overview
Azure Large Instances for SAP HANA (HLI) were first announced in September 2016.
Since then, many have adopted this hardware as a service for their in-memory compute
platform. Yet in recent years, the Azure virtual machine (VM) size extension and support
of HANA scale-out deployment has exceeded most enterprise customers’ ERP database
capacity demand. Many are expressing an interest in migrating their SAP HANA
workload from physical servers to Azure VMs.

This article isn't a step-by-step configuration document. It describes the common


deployment models and offers planning and migration advice. Our intent is to call out
necessary considerations for preparation to minimize transition downtime.

Assumptions
This article makes the following assumptions:

We'll only consider a homogenous HANA database compute service migration


from Hana Large Instance (HLI) to Azure VM without significant software upgrade
or patching. These minor updates include the use of a more recent operating
system (OS) version or HANA version explicitly stated as supported by relevant SAP
notes.
You'll do all updates/upgrades activities before or after the migration. For example,
SAP HANA MCOS converting to MDC deployment.
The migration approach offering the least downtime is SAP HANA System
Replication. Other migration methods aren't part of the scope of this document.
This guidance is applicable for both Rev3 and Rev4 SKUs of HLI.
HANA deployment architecture remains primarily unchanged during the migration.
That is, a system with single instance disaster recovery (DR) will stay the same at
the destination.
You've reviewed and understood the Service Level Agreement (SLA) of the target
(to-be) architecture.
Commercial terms between HLIs and VMs are different. Monitor the usage of your
VMs for cost management.
You understand that HLI is a dedicated compute platform while VMs run on shared
yet isolated infrastructure.
You've validated that target VMs support your intended architecture. For a list of
supported VM SKUs certified for SAP HANA deployment, see the SAP HANA
hardware directory .
You've validated the design and migration plan.
Plan for disaster recovery VM along with the primary site. You can't use the HLI as
the DR node for the primary site running on VMs after the migration.
You copied the required backup files to target VMs, based on business
recoverability and compliance requirements. With VM accessible backups, it allows
for point-in-time recovery during the transition period.
For SAP HANA system replication (HSR) high availability (HA), you need to set up
and configure the fencing device per SAP HANA HA guides for SLES and RHEL. It’s
not preconfigured like the HLI case.
This migration approach doesn't cover the HLI SKUs with Optane configuration.

Deployment scenarios
You can migrate to Azure VMs for all HLI scenarios. Common deployment models for
HLI are summarized in the following table. To benefit from complementary Azure
services, you may have to make minor architectural changes.

Scenario ID | HLI Scenario | Migrate to VM verbatim? | Remark

1 | Single node with one SID | Yes | -
2 | Single node with Multiple Components in One System (MCOS) | Yes | -
3 | Single node with DR using storage replication | No | Storage replication isn't available with the Azure virtual platform; change the current DR solution to either HSR or backup/restore.
4 | Single node with DR (multipurpose) using storage replication | No | Storage replication isn't available with the Azure virtual platform; change the current DR solution to either HSR or backup/restore.
5 | HSR with fencing for high availability | Yes | No preconfigured SBD for target VMs. Select and deploy a fencing solution. Possible options: Azure Fencing Agent (supported for both RHEL and SLES) and SBD.
6 | HA with HSR, DR with storage replication | No | Replace storage replication for DR needs with either HSR or backup/restore.
7 | Host auto failover (1+1) | Yes | Use Azure NetApp Files (ANF) for shared storage with Azure VMs.
8 | Scale-out with standby | Yes | BW/4HANA with M128s, M416s, M416ms VMs using ANF for storage only.
9 | Scale-out without standby | Yes | BW/4HANA with M128s, M416s, M416ms VMs (with or without using ANF for storage).
10 | Scale-out with DR using storage replication | No | Replace storage replication for DR needs with either HSR or backup/restore.
11 | Single node with DR using HSR | Yes | -
12 | Single node HSR to DR (cost optimized) | Yes | -
13 | HA and DR with HSR | Yes | -
14 | HA and DR with HSR (cost optimized) | Yes | -
15 | Scale-out with DR using HSR | Yes | BW/4HANA with M128s, M416s, M416ms VMs (with or without using ANF for storage).

Source (HLI) planning


When onboarding your HLI server, you and Microsoft Service Management went
through the planning of the compute, network, storage, and OS-specific settings for
running the SAP HANA database. Similar planning needs to take place for the migration
to Azure VM.
SAP HANA housekeeping
It's a good operational practice to tidy up the database content so unwanted, outdated
data and stale logs aren't migrated to the new database. Housekeeping generally
involves deleting or archiving old, expired, or inactive data. This 'data hygiene' should be
tested in non-production systems to validate the data trimming before applying it in
production.

Allow network connectivity for new VMs and virtual


network
In your HLI deployment, the network was set up based on the information described in
the article SAP HANA (Large Instances) network architecture. Also, network traffic
routing is done in the manner outlined in the section Routing in Azure.

Is the new VM migration target placed in the existing virtual network with IP
address ranges already permitted to connect to the HLI? Then no further
connectivity update is required.
Is the new Azure VM placed in a new Microsoft Azure Virtual Network, perhaps in
another region, and peered with the existing virtual network? Then you can use the
ExpressRoute service key and Resource ID from the original HLI provisioning to
allow access for this new virtual network IP range. Coordinate with Microsoft
Service Management to enable the virtual network to HLI connectivity.

7 Note

To minimize network latency between the application and database layers,


both the application and database layers must be on the same virtual
network.

Existing app layer availability set, availability zones, and


proximity placement group (PPG)
We've designed the current deployment model to satisfy certain service level goals. In
this move, ensure the target infrastructure will meet or exceed your set goals.
More likely than not, your SAP application servers are placed in an availability set. If the
current deployment service level is satisfactory, and if the target VM assumes the
hostname of the HLI logical name, updating the domain name service (DNS) address
resolution pointing to the VM's IP will work without updating any SAP profiles.
If you’re not using PPG, be sure to place all the application and DB servers in the
same zone to minimize network latency.
If you’re using PPG, refer to a later section of this article, Availability sets,
availability zones, and proximity placement groups.

Storage replication discontinuance process (if used)


If you used storage replication as your DR solution, terminate it after the SAP application
is shut down. Before you do, be sure the last SAP HANA catalog, log file, and data
backups are replicated onto the remote DR HLI storage volumes. This replication is
important in case a disaster happens during transition from the physical server to the
Azure VM.

Data backups preservation consideration


After transitioning to SAP HANA on your Azure VM, the snapshot-based data and log
backups on the HLI won't be easily accessible or restorable to a VM. We recommend
taking file-level backups and snapshots on the HLI even weeks before cut-over. Have
these backups copied to an Azure Storage account accessible by the new SAP HANA
VM. In the early transition period as well, before the Azure-based backup builds enough
history to satisfy Point-in-Time recovery requirements, take file-level backups.

Backing up the HLI content is critical. It's also prudent to have full backups of the SAP
landscape readily accessible in case a rollback is needed.

Adjusting system monitoring


You may use many different tools to monitor and send alert notifications for systems
within your SAP landscape. Remember to take appropriate action to incorporate
changes for monitoring and update the alert notification recipients if needed.

Microsoft Operations team involvement


Open a ticket from the Azure portal based on the existing HLI instance. After the
support ticket is created, a support engineer will contact you via email.

Engage Microsoft account team


Plan migration close to the anniversary renewal time of the HLI contract to minimize
unnecessary expense for the compute resource. To decommission the HLI, coordinate
contract termination and shut-down of the unit.

Destination planning
Careful planning is essential in deploying a new infrastructure to take the place of an
existing one. Ensure the new addition will fulfill your needs in the larger scheme of
things. Here are some key points to consider.

Resource availability in the target region


The current SAP application servers' deployment region is typically close to the
associated HLIs. However, HLIs are offered in fewer locations than available Azure
regions. When migrating the physical HLI to an Azure VM, it's also a good time to fine-
tune the proximity distance of all related services for performance optimization. While
doing so, ensure the chosen region has all the required resources. For instance, you may
want to check on the availability of a certain VM family or the Azure Zones offering high
availability setup.

Virtual network
Do you want to run the new HANA database in an existing virtual network or create a
new one? The primary deciding factor is the current networking layout for the SAP
landscape. Also, when the infrastructure goes from one-zone to two-zones deployment
and uses PPG, it imposes architectural change. For more information, see the article
Azure PPG for optimal network latency with SAP application.

Security
Whether the new SAP HANA VM runs on a new or existing vnet/subnet, it's a new
service critical to your business. It deserves safeguarding. Ensure access control
compliant with your company's security policy.

VM sizing recommendation
This migration is also an opportunity to right size your HANA compute engine. You can
use HANA system views with HANA Studio to understand the system resource
consumption, which allows for right sizing to drive spending efficiency.

Storage
Storage performance is one of the factors that will affect your SAP application user
experience. There are minimum storage layouts published for given VM SKUs. For more
information, see SAP HANA Azure virtual machine storage configurations. We
recommend reviewing these specs and comparing against your existing HLI system
statistics to ensure adequate IO capacity and performance for your new HANA VM.

Will you configure PPG for the new HANA VM and its associated servers? Then submit a
support ticket to inspect and ensure the co-location of the storage and the VM. Since
your backup solution may need to change, also revisit the storage cost to avoid
operational spending surprises.

Storage replication for disaster recovery


With HLI, storage replication was the default option for disaster recovery. This feature
isn't the default option for SAP HANA on Azure VM. Consider HSR, backup/restore, or
other supported solutions that satisfy your business needs.

Availability sets, availability zones, and proximity


placement groups
You can shorten the distance between the application layer and SAP HANA to keep
network latency at a minimum. Place the new database VM and the current SAP
application servers in a PPG. For more information on how Azure availability set and
availability zones work with PPG for SAP deployments, see Proximity Placement Group.

If members of your HANA system are deployed in more than one Azure availability
zone, be aware of the latency profile of the chosen zones. Place SAP system
components to minimize the distance between the SAP application and the database.
The publicly available Availability zone latency test tool helps make the
measurement easier.
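
As a minimal sketch, the following Azure CLI commands create a PPG and deploy the
HANA VM into it; the resource names, region, size, and image URN are placeholders
to adapt to your landscape. Existing application server VMs can then be moved into,
or redeployed in, the same PPG.

Azure CLI

az ppg create --name hana-ppg --resource-group sap-rg \
  --location westeurope --type Standard

az vm create --resource-group sap-rg --name hanavm01 \
  --image SUSE:sles-sap-15-sp4:gen2:latest --size Standard_M64s \
  --ppg hana-ppg --admin-username azureuser --generate-ssh-keys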

Backup strategy
Many of our customers already use third-party backup solutions for SAP HANA on HLI.
If you do, only the newly protected VM and HANA database need to be configured.
Ongoing HLI backup jobs can be unscheduled if the machine is decommissioned after
the migration.

Azure Backup for SAP HANA on VMs is generally available. For more information on
SAP HANA backup in Azure VMs, see Backup, Restore, and Manage.
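
As a hedged sketch of the Azure Backup workflow, the following commands register the
HANA VM with a Recovery Services vault and enable protection for a database. All
names are placeholders, the pre-registration script must already have run on the
VM, and you should verify the exact parameters against the current az backup
reference.

Azure CLI

# Register the VM that hosts SAP HANA with the vault.
az backup container register --resource-group sap-rg --vault-name sap-rsv \
  --workload-type SAPHANA --backup-management-type AzureWorkload \
  --resource-id $(az vm show --resource-group sap-rg --name hanavm01 --query id --output tsv)

# Enable protection for a discovered HANA database.
az backup protection enable-for-azurewl --resource-group sap-rg \
  --vault-name sap-rsv --policy-name hana-daily \
  --protectable-item-type SAPHanaDatabase \
  --protectable-item-name "saphanadatabase;hdb;systemdb" \
  --server-name hanavm01 --workload-type SAPHANA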

DR strategy
If your service-level goals accommodate a longer recovery time, a backup-based
strategy can suffice. A backup to blob storage, with a restore in place or a restore
to a new VM, is the simplest and least expensive DR strategy.

On the large instance platform, HANA DR is typically done with HSR. On an Azure VM,
HSR is also the most natural and native SAP HANA DR solution. Whether the source
deployment is single-instance or clustered, a replica of the source infrastructure is
required in the DR region. This DR replica will be configured after the primary HLI to VM
migration is complete. The DR HANA DB will register to the primary SAP HANA on VM
instance as a secondary replication site.

SAP application server connectivity destination change


The HSR migration results in a new HANA database host, and thus a new database
hostname, for the application layer. Modify SAP profiles to reflect the new
hostname. If the switch is done through name resolution that preserves the hostname,
no profile change is required.
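
A quick way to find profiles that still point at the old host is to search the
profile directory. The SID, profile path, and hostname below are placeholders.

Bash

# List every profile that still references the old HLI hostname.
grep -il "old-hli-host" /sapmnt/HDB/profile/*

# Check the database host parameter in the default profile.
grep "SAPDBHOST" /sapmnt/HDB/profile/DEFAULT.PFL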

Operating system (OS)


The OS images for HLI and VM, even at the same release level (for example, SLES 12
SP4), aren't identical. Validate the required packages, hotfixes, patches, kernel,
and security fixes on the HLI, then install the same packages on the target. You can
use HSR to replicate from an older OS onto a VM with a newer OS version. Verify the
supported versions by reviewing SAP note 2763388 .
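
One simple way to compare the installed package sets is to export the RPM list on
both machines and diff them, for example:

Bash

# On the HLI source:
rpm -qa | sort > /tmp/hli_packages.txt

# On the target VM, after copying hli_packages.txt over:
rpm -qa | sort > /tmp/vm_packages.txt

# Show packages present on the HLI but missing on the VM.
comm -23 /tmp/hli_packages.txt /tmp/vm_packages.txt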

New SAP license request


Remember to request a new SAP license for the new HANA system now that it has been
migrated to VMs.

Service level agreement (SLA) differences


Note the difference in availability SLA between HLI and Azure VMs. For example,
clustered HLI HA pairs offer 99.99% availability. To achieve the same SLA, you need
to deploy VMs in availability zones. SLA for Virtual Machines describes availability
for various VM configurations so you can plan your target infrastructure.

Migration strategy
In this document, we cover only the HANA system replication approach for the
migration from HLI to Azure VMs. Depending on the target storage solution deployed,
the process differs slightly. The high-level steps are described below.

VM with premium/ultra-disks for data


For VMs deployed with premium or ultra disks, the standard SAP HANA system
replication configuration applies when setting up HSR. For an overview of the steps,
see the SAP help article . The article also covers taking over a secondary system,
failing back to the primary, and disabling system replication. For migration, only
the setup, takeover, and disable-replication steps are needed.
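
At the command level, the flow looks roughly like the following hdbnsutil sequence,
run as the <sid>adm user; the site names and instance number are placeholders, and
the full procedure in the SAP help article remains the authoritative reference.

Bash

# On the HLI (current primary): enable system replication.
hdbnsutil -sr_enable --name=HLI_SITE

# On the target Azure VM (secondary), after installing the same HANA version:
hdbnsutil -sr_register --remoteHost=<hli-host> --remoteInstance=00 \
  --replicationMode=async --operationMode=logreplay --name=AZURE_SITE

# At cutover, on the Azure VM: take over as primary.
hdbnsutil -sr_takeover

# After cutover, on the former primary: disable replication.
hdbnsutil -sr_disable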

VM with ANF for data and log volumes


At a high level, the latest HLI storage snapshots of the full data and log volumes need to
be copied to Azure storage. From there they're accessible and recoverable by the target
HANA VM. The copy process can be done with any native Linux copy tools.

) Important

Copying and data transfer can take hours depending on the HANA database size
and network bandwidth. The bulk of the copy process should be done in advance
of the primary HANA database downtime.
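
As an illustration, a plain rsync over NFS can move the snapshot contents; the
snapshot name and mount paths below are placeholders for your environment.

Bash

# Copy the latest data-volume snapshot to the ANF volume mounted on the target.
rsync -avh --progress /hana/data/<SID>/.snapshot/<snapshot-name>/ /mnt/anf/hanadata/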

MCOS to MDC Conversion


The Multiple Components in One System (MCOS) deployment model was used by some of
our HLI customers to circumvent the multitenant database container (MDC) storage
snapshot limitation of earlier SAP HANA versions. In the MCOS model, several
independent SAP HANA instances are stacked in one HANA Large Instance. Using HSR for
the migration works fine, but it results in multiple HANA VMs with one tenant
database each, which makes for a busier landscape than you might prefer. Because the
default deployment for SAP HANA 2.0 is MDC, an alternative is to perform a HANA
tenant move after the HSR migration, combining these independent HANA databases into
tenants of a single HANA container.
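
As a purely illustrative sketch of a tenant move, a statement like the one below is
run against SYSTEMDB on the target system. The database name, hosts, port, and exact
syntax vary by HANA revision, so treat this as hypothetical and confirm the
procedure in the SAP HANA administration guide.

Bash

hdbsql -n <target-host> -i 00 -d SYSTEMDB -u SYSTEM -p "<password>" \
  "CREATE DATABASE TEN2 AS REPLICA OF TEN2 AT '<source-host>:<port>'"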

Application layer consideration


The database server is viewed as the center of an SAP system. All application
servers should be located near the SAP HANA database. In some cases, when you want
to use a new PPG, you may have to move existing application servers onto the PPG
where the HANA VM is located. Building new application servers instead may be easier
if you already have deployment templates ready.

Locate existing application servers and the new HANA VM optimally. Then you won't
need to build new application servers, unless you want greater capacity.

When you build new infrastructure to enhance service availability, your existing
application servers may become unnecessary; they can be shut down and deleted. If
the target VM hostname changed and differs from the HLI hostname, adjust the SAP
application server profiles to point to the new host. If only the HANA database IP
address has changed, update the DNS record to direct incoming connections to the new
HANA VM.
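
If the name is hosted in an Azure DNS zone, the A record swap can be scripted; the
zone, record, and IP addresses below are placeholders.

Azure CLI

# Remove the old HLI address and add the new HANA VM address.
az network dns record-set a remove-record --resource-group dns-rg \
  --zone-name contoso.com --record-set-name hanadb --ipv4-address 10.10.0.4

az network dns record-set a add-record --resource-group dns-rg \
  --zone-name contoso.com --record-set-name hanadb --ipv4-address 10.20.0.4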

Acceptance test
Unlike a heterogeneous migration, migration from HLI to VM makes no material change
to the database content. Still, we recommend checking the performance of the new
setup.

Cutover plan
Although this migration is straightforward, it involves the decommissioning of an
existing database. Careful planning to preserve the source system, with its content
and backup images, is critical in case fallback is necessary. Good planning enables
a speedier reversal.

Post migration
The migration job isn't done until we've safely decoupled any HLI-dependent services
and connectivity to ensure data integrity. Also, we recommend shutting down
unnecessary services. This section calls out a few of the more important items.

Decommissioning the HLI


After successfully migrating the HANA database to an Azure VM, ensure that no
business transactions run on the HLI database. Keeping the HLI running for the
length of its local backup retention window ensures speedier recovery if needed.
Only when the local backup retention window has passed should you decommission the
HANA Large Instance. Then conclude your contractual HLI commitments with Microsoft
by contacting your Microsoft representatives.

Remove any proxy (for example, IPTables, BIG-IP) configured for HLI


If a proxy service like IPTables is used to route on-premises traffic to and from
the HLI, you don't need it after the successful migration to the VM. Nonetheless,
keep this connectivity service for as long as the HLI is standing by. Only shut down
the service once the HLI is fully decommissioned.

Remove Global Reach for HLI


Global Reach is used to connect your ExpressRoute gateway with the HLI ExpressRoute
gateway. It allows your on-premises traffic to reach the HLI tenant directly without
the use of a proxy service. This connection is no longer needed once the HLI unit is
gone after migration. Still, like the IPTables proxy service, Global Reach should
also be kept until the HLI is fully decommissioned.

Operating system subscription – move/reuse


As the VM servers are deployed and the HLIs are decommissioned, the OS subscriptions
can be moved or reused. There's no need to pay double for OS licenses.

Next steps
Plan your SAP deployment.

SAP workloads on Azure: planning and deployment checklist
