What SAP on Azure offerings are available?
Article • 10/27/2023
There are multiple Microsoft Azure offerings for running and managing your SAP
systems. These offerings range from traditional Azure virtual machine (VM) offerings, to
top-level Azure services, to tools that integrate with other Azure services or external
products.
For more information, see the SAP Integration with Microsoft Services documentation.
For more information, see the SAP HANA on Azure (Large Instances) documentation.
Note
This offering is no longer accepting new customers. For alternatives, please check
the offers of HANA-certified Azure VMs in the HANA Hardware Directory.
Azure Center for SAP solutions
Azure Center for SAP solutions is a service that makes SAP a top-level workload in
Azure. This end-to-end solution allows you to create and run SAP systems as a unified
workload on Azure. You can use this service through the Azure portal, a REST API, and
the Azure CLI.
For more information, see the Azure Center for SAP solutions documentation.
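As the paragraph above notes, Azure Center for SAP solutions is reachable through the Azure portal, a REST API, and the Azure CLI. The following is a minimal sketch of how a client might address an SAP virtual instance resource via the REST API; the resource type `Microsoft.Workloads/sapVirtualInstances` is the one the service uses, but the helper name and the `api-version` value are assumptions to check against the current REST reference.

```python
# Sketch: building the ARM REST URL for an SAP virtual instance managed by
# Azure Center for SAP solutions. The api-version is an assumption -- verify
# it against the current Azure REST API reference before use.

ARM_ENDPOINT = "https://management.azure.com"

def sap_virtual_instance_url(subscription_id: str, resource_group: str,
                             instance_name: str,
                             api_version: str = "2023-04-01") -> str:
    """Return the GET URL for a single sapVirtualInstances resource."""
    return (
        f"{ARM_ENDPOINT}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Workloads/sapVirtualInstances/{instance_name}"
        f"?api-version={api_version}"
    )

print(sap_virtual_instance_url(
    "00000000-0000-0000-0000-000000000000", "sap-rg", "S4P"))
```

A GET against such a URL (with a bearer token from Microsoft Entra ID) returns the instance's properties; the Azure CLI and portal wrap the same resource model.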
For more information, see the SAP on Azure deployment automation framework
documentation.
For more information, see the Azure Monitor for SAP solutions documentation.
Next steps
SAP solutions on Azure
Get started with SAP and Azure integration scenarios
Use Azure to host and run SAP workload scenarios
Article • 04/01/2024
When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads
and scenarios on a scalable, compliant, and enterprise-proven platform. You get the
scalability, flexibility, and cost savings of Azure. With the expanded partnership between
Microsoft and SAP, you can run SAP applications across development, test, and
production scenarios in Azure with full support. From SAP NetWeaver to SAP S/4HANA,
SAP BI on Linux to Windows, and SAP HANA to SQL Server, Oracle, and Db2, we've got
you covered.
Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on
Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership
with SAP resulted in various integration scenarios with the overall Microsoft ecosystem.
Check out the dedicated Integration section to learn more.
We recently announced that our new services, Azure Center for SAP solutions and Azure
Monitor for SAP solutions 2.0, have entered public preview. These services let you
deploy SAP workloads on Azure in a highly automated manner, in an optimal
architecture and configuration, and monitor your Azure infrastructure, OS, DBMS, and
ABAP stack deployments in a single pane of glass.
For customers and partners who are focused on deploying and operating their assets in
the public cloud through Terraform and Ansible, use our SAP on Azure Deployment
Automation Framework to jump-start your SAP deployments into Azure using our public
Terraform and Ansible modules on GitHub.
Hosting SAP workload scenarios in Azure can also create requirements for identity
integration and single sign-on. This situation can occur when you use Microsoft Entra ID
to connect different SAP components and SAP software-as-a-service (SaaS) or
platform-as-a-service (PaaS) offers. Such integration and single sign-on scenarios with
Microsoft Entra ID and SAP entities are described and documented in the section
"Microsoft Entra SAP identity integration and single sign-on."
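As one concrete piece of such Microsoft Entra ID integration, OpenID Connect-based single sign-on starts from the tenant's well-known discovery document on the Microsoft identity platform v2.0 endpoint. A minimal sketch (the helper name is ours):

```python
# Sketch: SAP components or SaaS offers that federate to Microsoft Entra ID
# via OpenID Connect obtain the authorization and token endpoints from the
# tenant's well-known discovery document.

def entra_oidc_metadata_url(tenant_id: str) -> str:
    """Well-known OpenID Connect discovery URL for a Microsoft Entra tenant."""
    return (f"https://login.microsoftonline.com/{tenant_id}"
            "/v2.0/.well-known/openid-configuration")

print(entra_oidc_metadata_url("contoso.onmicrosoft.com"))
```

Fetching this document yields, among other fields, the `authorization_endpoint` and `token_endpoint` that an SAP application configured for SSO would use.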
Is Azure accepting new customers for HANA Large Instances? The HANA Large
Instance service is in sunset mode and no longer accepts new customers.
Providing units for existing HANA Large Instance customers is still possible. For
alternatives, check the offers of HANA-certified Azure VMs in the HANA Hardware
Directory.
Can Microsoft Entra accounts be used to run the SAP ABAP stack in a Windows
guest OS? No. Due to shortcomings in the feature set of Microsoft Entra ID, it can't
be used for running the ABAP stack within the Windows guest OS.
Which Azure services, Azure VM types, and Azure storage services are available in
the different Azure regions? Check the site Products available by region.
Are third-party HA frameworks, besides Windows and Pacemaker, supported?
Check the bottom part of SAP support note #1928533.
What Azure storage is best for my scenario? Read Azure Storage types for SAP
workload
Is the Red Hat kernel in Oracle Enterprise Linux supported by SAP? Read SAP
support note #1565179.
Why are the Azure Da(s)v4/Ea(s) VM families not certified for SAP HANA? The
Azure Das/Eas VM families are based on AMD processor-driven hardware. SAP
HANA doesn't support AMD processors, not even in virtualized scenarios.
Why am I still getting the message 'The cpu flags for the RDTSCP instruction or the
cpu flags for constant_tsc or nonstop_tsc aren't set or current_clocksource and
available_clocksource aren't correctly configured' with SAP HANA, although I'm
running the most recent Linux kernels? For the answer, check SAP support note
#2791572.
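A minimal diagnostic sketch of the settings that message refers to, assuming the standard Linux sysfs/procfs locations (this is an illustration for inspecting your VM, not SAP's own check):

```python
# Sketch: inspect the kernel's current clocksource and the TSC-related CPU
# flags that the SAP HANA message above complains about. Paths are the
# standard Linux locations; helper names are ours.
from pathlib import Path
from typing import Optional

CLOCKSOURCE = Path(
    "/sys/devices/system/clocksource/clocksource0/current_clocksource")

def read_clocksource(path: Path = CLOCKSOURCE) -> Optional[str]:
    """Return the kernel's current clocksource (e.g. 'tsc'), or None if unreadable."""
    try:
        return path.read_text().strip()
    except OSError:
        return None

def tsc_flags_present(cpuinfo_text: str) -> bool:
    """True if a /proc/cpuinfo flags line advertises constant_tsc and nonstop_tsc."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "constant_tsc" in flags and "nonstop_tsc" in flags
    return False
```

On a correctly configured VM you would expect `read_clocksource()` to report `tsc` and `tsc_flags_present(Path("/proc/cpuinfo").read_text())` to be true; the SAP note above explains the cases where recent kernels still trigger the message.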
Where can I find architectures for deploying SAP Fiori on Azure? Check out the
blog SAP on Azure: Application Gateway Web Application Firewall (WAF) v2 Setup
for Internet facing SAP Fiori Apps
Documentation space
In the SAP workload documentation space, you can find the following areas:
Change Log
May 21, 2024: Updated timeouts and added start delay for pacemaker scheduled
events in Set up Pacemaker on RHEL in Azure and Set up Pacemaker on SUSE Linux
Enterprise Server (SLES) in Azure.
April 1, 2024: Reference the considerations section for sizing HANA shared file
system in NFS v4.1 volumes on Azure NetApp Files for SAP HANA, SAP HANA
Azure virtual machine Premium SSD storage configurations, SAP HANA Azure
virtual machine Premium SSD v2 storage configurations, and Azure Files NFS for
SAP
March 18, 2024: Added considerations for sizing the HANA shared file system in
SAP HANA Azure virtual machine storage configurations
February 07, 2024: Clarified disk allocation when using PPGs to bind availability set
in specific Availability Zone in Configuration options for optimal network latency
with SAP applications
February 01, 2024: Added guidance for SAP front-end printing to Universal Print
January 24, 2024: Split SAP RISE integration documentation into multiple segments
for improved legibility, additional overview information added.
January 22, 2024: Changes in all high availability documentation to include
guidelines for setting the “probeThreshold” property to 2 in the load balancer’s
health probe configuration.
January 21, 2024: Change recommendations around LARGEPAGES in Azure Virtual
Machines Oracle DBMS deployment for SAP workload
December 15, 2023: Change recommendations around DIRECTIO and LVM in
Azure Virtual Machines Oracle DBMS deployment for SAP workload
December 11, 2023: Add RHEL requirements to HANA third site for multi-target
replication and integrating into a Pacemaker cluster.
November 20, 2023: Add storage configuration for Mv3 medium memory VMs into
the documents SAP HANA Azure virtual machine Premium SSD storage
configurations, SAP HANA Azure virtual machine Premium SSD v2 storage
configurations, and SAP HANA Azure virtual machine Ultra Disk storage
configurations
November 20, 2023: Add supported storage matrix into the document Azure
Virtual Machines Oracle DBMS deployment for SAP workload
November 09, 2023: Change in SAP HANA infrastructure configurations and
operations on Azure to align multiple vNIC instructions with planning guide and
add /hana/shared on NFS on Azure Files
September 26, 2023: Change in SAP HANA scale-out HSR with Pacemaker on
Azure VMs on RHEL to add instructions for deploying /hana/shared (only) on NFS
on Azure Files
September 12, 2023: Adding support to handle Azure scheduled events for
Pacemaker clusters running on RHEL.
August 24, 2023: Support of priority-fencing-delay cluster property on two-node
pacemaker cluster to address split-brain situation in RHEL is updated on Setting up
Pacemaker on RHEL in Azure, High availability of SAP HANA on Azure VMs on
RHEL, High availability of SAP HANA Scale-up with ANF on RHEL, Azure VMs high
availability for SAP NW on RHEL with NFS on Azure Files, and Azure VMs high
availability for SAP NW on RHEL with Azure NetApp Files documents.
August 03, 2023: Change of recommendation to use a /25 IP range for delegated
subnet for ANF for SAP workload NFS v4.1 volumes on Azure NetApp Files for SAP
HANA
August 03, 2023: Change in support of block storage and NFS on ANF storage for
SAP HANA documented in SAP HANA Azure virtual machine storage
configurations
July 25, 2023: Adding reference to SAP Note #3074643 to Azure Virtual Machines
Oracle DBMS deployment for SAP workload
July 21, 2023: Support of priority-fencing-delay cluster property on two-node
pacemaker cluster to address split-brain situation in SLES is updated on High
availability for SAP HANA on Azure VMs on SLES, High availability of SAP HANA
Scale-up with ANF on SLES, Azure VMs high availability for SAP NetWeaver on
SLES for SAP Applications with simple mount and NFS, Azure VMs high availability
for SAP NW on SLES with NFS on Azure Files, Azure VMs high availability for SAP
NW on SLES with Azure NetApp Files document.
July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and
ANF in table in Azure Storage types for SAP workload
July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2
don't show any performance difference in SAP HANA Azure virtual machine Ultra
Disk storage configurations
July 13, 2023: Replaced links in ANF section of Azure Virtual Machines Oracle
DBMS deployment for SAP workload to new ANF related documentation
July 11, 2023: Add a note about Azure NetApp Files application volume group for
SAP HANA in HA for HANA Scale-up with ANF on SLES, HANA scale-out with
standby node with ANF on SLES, HA for HANA Scale-out HA on SLES, HA for
HANA scale-up with ANF on RHEL, HANA scale-out with standby node on Azure
VMs with ANF on RHEL and HA for HANA scale-out on RHEL
June 29, 2023: Update important considerations and sizing information in HA for
HANA scale-up with ANF on RHEL, HANA scale-out with standby node on Azure
VMs with ANF on RHEL
June 26, 2023: Update important considerations and sizing information in HA for
HANA Scale-up with ANF on SLES and HANA scale-out with standby node with
ANF on SLES.
June 23, 2023: Updated Azure scheduled events for SLES in Pacemaker set up
guide.
June 22, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do
not show any performance difference in SAP HANA Azure virtual machine
Premium SSD v2 storage configurations
June 1, 2023: Included virtual machine scale set with flexible orchestration
guidelines in SAP workload planning guide.
June 1, 2023: Updated high availability guidelines in HA architecture and scenarios,
and added additional deployment option in configuring optimal network latency
with SAP applications.
June 1, 2023: Release of virtual machine scale set with flexible orchestration
support for SAP workload.
April 25, 2023: Adjust mount options in HA for HANA Scale-up with ANF on SLES,
HANA scale-out with standby node with ANF on SLES, HA for HANA Scale-out HA
on SLES, HA for HANA scale-up with ANF on RHEL, HANA scale-out with standby
node on Azure VMs with ANF on RHEL, HA for HANA scale-out on RHEL, HA for
SAP NW on SLES with ANF, HA for SAP NW on RHEL with ANF and HA for SAP
NW on SLES with simple mount and NFS
April 6, 2023: Updates for RHEL 9 in Setting up Pacemaker on RHEL in Azure
March 26, 2023: Adding recommended sector size in SAP HANA Azure virtual
machine Premium SSD v2 storage configurations
March 1, 2023: Change in HA for SAP HANA on Azure VMs on RHEL to add
configuration for cluster default properties
February 21, 2023: Correct link to HANA hardware directory in SAP HANA
infrastructure configurations and operations on Azure and fixed a bug in SAP
HANA Azure virtual machine Premium SSD v2 storage configurations
February 17, 2023: Add support and Sentinel sections, few other minor updates in
RISE with SAP integration
February 02, 2023: Add new HA provider susChkSrv for SAP HANA Scale-out HA on
SUSE and change from SAPHanaSR to SAPHanaSrMultiTarget provider, enabling
HANA multi-target replication
January 27, 2023: Mark Microsoft Entra Domain Services as supported AD solution
in SAP workload on Azure virtual machine supported scenarios after successful
testing
December 28, 2022: Update documents Azure Storage types for SAP workload and
NFS v4.1 volumes on Azure NetApp Files for SAP HANA to provide more details on
ANF deployment processes to achieve proximity and low latency. Introduction of
zonal deployment process of NFS shares on ANF
December 28, 2022: Updated the guide SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver across all topics. Also added VM configuration
examples for different sizes of databases
December 27, 2022: Introducing new configuration for SAP ASE on E96(d)s_v5 in
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
December 23, 2022: Updating Considerations for Azure Virtual Machines DBMS
deployment for SAP workload by cutting references to Azure standard HDD and
SSD. Introducing premium storage v2 and updating a few other sections to more
recent functionalities
December 20, 2022: Update article SAP workload on Azure virtual machine
supported scenarios with table around AD and Microsoft Entra ID support.
Deleting a few references to HANA Large Instances.
December 19, 2022: Update article SAP workload configurations with Azure
Availability Zones related to new functionalities like zonal replication of Azure
Premium Files
December 18, 2022: Add short description and link to intent option of PPG
creation in Azure proximity placement groups for optimal network latency with
SAP applications
December 14, 2022: Fixes in recommendations of capacity for a few VM types in
SAP HANA Azure virtual machine Premium SSD v2 storage configurations
November 30, 2022: Added storage recommendations for Premium SSD v2 into
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
November 22, 2022: Release of Disaster Recovery guidelines for SAP workload on
Azure - Disaster Recovery overview and infrastructure guidelines for SAP workload
and Disaster Recovery recommendation for SAP workload.
November 22, 2022: Update of SAP workloads on Azure: planning and deployment
checklist to add latest recommendations
November 18, 2022: Add a recommendation to use Pacemaker simple mount
configuration for new implementations on SLES 15 in Azure VMs HA for SAP NW
on SLES with simple mount and NFS, Azure VMs HA for SAP NW on SLES with NFS
on Azure File, Azure VMs HA for SAP NW on SLES with Azure NetApp Files and
Azure VMs HA for SAP NW on SLES
November 15, 2022: Change in HA for SAP HANA Scale-up with ANF on SLES, SAP
HANA scale-out with standby node on Azure VMs with ANF on SLES, HA for SAP
HANA scale-up with ANF on RHEL and SAP HANA scale-out with standby node on
Azure VMs with ANF on RHEL to add recommendation to use mount option
nconnect for workloads with higher throughput requirements
September 29, 2022: Announcing HANA Large Instances being in sunset mode in
SAP workload on Azure virtual machine supported scenarios and What is SAP
HANA on Azure (Large Instances)?. Adding some statements around Azure
VMware and Microsoft Entra ID support status in SAP workload on Azure virtual
machine supported scenarios
September 27, 2022: Minor changes in HA for SAP ASCS/ERS with NFS simple
mount on SLES 15 for SAP Applications to adjust mount instructions
September 14, 2022 Release of updated SAP on Oracle guide with new and
updated content Azure Virtual Machines Oracle DBMS deployment for SAP
workload
September 8, 2022: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on SLES to add instructions for deploying /hana/shared (only) on NFS on
Azure Files
September 6, 2022: Add managed identity for pacemaker fence agent Set up
Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure on SLES and Setting up
Pacemaker on RHEL in Azure RHEL
August 22, 2022: Release of cost optimization scenario Deploy PAS and AAS with
SAP NetWeaver HA cluster on RHEL
August 09, 2022: Release of scenario HA for SAP ASCS/ERS with NFS simple mount
on SLES 15 for SAP Applications
July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in
Azure Virtual Machines Oracle DBMS deployment for SAP workload
June 29, 2022: Add recommendation and links to Pacemaker usage for Db2
versions 11.5.6 and higher in the documents IBM Db2 Azure Virtual Machines
DBMS deployment for SAP workload, High availability of IBM Db2 LUW on Azure
VMs on SUSE Linux Enterprise Server with Pacemaker, and High availability of IBM
Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
June 08, 2022: Change in HA for SAP NW on Azure VMs on SLES with ANF and HA
for SAP NW on Azure VMs on RHEL with ANF to adjust timeouts when using
NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker
configuration
June 02, 2022: Change in the SAP Deployment Guide to add a link to RHEL in-place
upgrade documentation
June 02, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows with
Azure NetApp Files(SMB), HA for SAP NW on Azure VMs on SLES with ANF and HA
for SAP NW on Azure VMs on RHEL with ANF to add sizing considerations
May 11, 2022: Change in Cluster an SAP ASCS/SCS instance on a Windows failover
cluster by using a cluster shared disk in Azure, Prepare the Azure infrastructure for
SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS and
SAP ASCS/SCS instance multi-SID high availability with Windows server failover
clustering and Azure shared disk to update instruction about the usage of Azure
shared disk for SAP deployment with PPG.
May 10, 2022: Change in HA for SAP HANA scale-up with ANF on RHEL, SAP HANA
scale-out HSR with Pacemaker on Azure VMs on RHEL, HA for SAP HANA Scale-up
with Azure NetApp Files on SLES, SAP HANA scale-out with standby node on Azure
VMs with ANF on SLES, SAP HANA scale-out HSR with Pacemaker on Azure VMs
on SLES and SAP HANA scale-out with standby node on Azure VMs with ANF on
RHEL to adjust parameters per SAP note 3024346
April 26, 2022: Changes in Setting up Pacemaker on SUSE Linux Enterprise Server in
Azure to add Azure Identity Python module to installation instructions for Azure
Fence Agent
March 30, 2022: Adding information that Red Hat Gluster Storage is being phased
out GlusterFS on Azure VMs on RHEL
March 30, 2022: Correcting DNN support for older releases of SQL Server in SQL
Server Azure Virtual Machines DBMS deployment for SAP NetWeaver
March 28, 2022: Formatting changes and reorganizing ILB configuration
instructions in: HA for SAP HANA on Azure VMs on SLES, HA for SAP HANA Scale-
up with Azure NetApp Files on SLES, HA for SAP HANA on Azure VMs on RHEL, HA
for SAP HANA scale-up with ANF on RHEL, HA for SAP NW on SLES with NFS on
Azure Files, HA for SAP NW on Azure VMs on SLES with ANF, HA for SAP NW on
Azure VMs on SLES for SAP applications, HA for NFS on Azure VMs on SLES, HA for
SAP NW on Azure VMs on SLES multi-SID guide, HA for SAP NW on RHEL with
NFS on Azure Files, HA for SAP NW on Azure VMs on RHEL with ANF, HA for SAP
NW on Azure VMs on RHEL for SAP applications and HA for SAP NW on Azure
VMs on RHEL multi-SID guide
March 15, 2022: Corrected rsize and wsize mount option settings for ANF in IBM
Db2 Azure Virtual Machines DBMS deployment for SAP workload
March 1, 2022: Corrected note about database snapshots with multiple database
containers in SAP HANA Large Instances high availability and disaster recovery on
Azure
February 28, 2022: Added E(d)sv5 VM storage configurations to SAP HANA Azure
virtual machine storage configurations
February 13, 2022: Corrected broken links to HANA hardware directory in the
following documents: SAP Business One on Azure Virtual Machines, Available SKUs
for HANA Large Instances, Certification of SAP HANA on Azure (Large Instances),
Installation of SAP HANA on Azure virtual machines, SAP workload planning and
deployment checklist, SAP HANA infrastructure configurations and operations on
Azure, SAP HANA on Azure Large Instance migration to Azure Virtual Machines,
Install and configure SAP HANA (Large Instances) on Azure, High availability of
SAP HANA scale-out system on Red Hat Enterprise Linux, High availability for SAP
HANA scale-out system with HSR on SUSE Linux Enterprise Server, High availability
of SAP HANA on Azure VMs on SUSE Linux Enterprise Server, Deploy a SAP HANA
scale-out system with standby node on Azure VMs by using Azure NetApp Files on
SUSE Linux Enterprise Server, SAP workload on Azure virtual machine supported
scenarios, What SAP software is supported for Azure deployments
February 13, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows
with Azure NetApp Files(SMB) to add instructions about adding the SAP
installation user as Administrators Privilege user to avoid SWPM permission
errors
February 09, 2022: Add more information around 4K sectors usage of Db2 11.5 in
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
February 08, 2022: Style changes in SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver
February 07, 2022: Adding new functionality ANF application volume groups for
HANA in documents NFS v4.1 volumes on Azure NetApp Files for SAP HANA and
Azure proximity placement groups for optimal network latency with SAP
applications
January 30, 2022: Adding context about SQL Server proportional fill and
expectations that SQL Server data files should be the same size and should have
the same free space in SQL Server Azure Virtual Machines DBMS deployment for
SAP NetWeaver
January 24, 2022: Change in HA for SAP NW on SLES with NFS on Azure Files, HA
for SAP NW on Azure VMs on SLES with ANF, HA for SAP NW on Azure VMs on
SLES for SAP applications, HA for NFS on Azure VMs on SLES, HA for SAP NW on
Azure VMs on SLES multi-SID guide, HA for SAP NW on RHEL with NFS on Azure
Files, HA for SAP NW on Azure VMs on RHEL for SAP applications and HA for SAP
NW on Azure VMs on RHEL with ANF and HA for SAP NW on Azure VMs on RHEL
multi-SID guide to remove cidr_netmask from Pacemaker configuration to allow
the resource agent to determine the value automatically.
January 12, 2022: Change in HA for SAP NetWeaver on Azure VMs on Windows
with Azure NetApp Files(SMB) to remove obsolete information for the SAP kernel
that supports the scenario.
December 08, 2021: Change in SQL Server Azure Virtual Machines DBMS
deployment for SAP NetWeaver to clarify Azure Load Balancer settings.
December 08, 2021: Release of scenario HA of SAP HANA Scale-up with Azure
NetApp Files on SLES
December 07, 2021: Change in Setting up Pacemaker on RHEL in Azure to clarify
that the instructions are applicable for both RHEL 7 and RHEL 8
December 07, 2021: Change in HA for SAP NW on SLES with NFS on Azure Files,
HA for SAP NW on Azure VMs on SLES with ANF and HA for SAP NW on Azure
VMs on SLES for SAP applications to adjust the instructions for configuring SWAP
file.
December 02, 2021: Introduction of new fencing method in Setting up Pacemaker
on SUSE Linux Enterprise Server in Azure using Azure shared disk SBD device
December 01, 2021: Change in SAP ASCS/SCS instance with WSFC and file share,
HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)
and HA for SAP NetWeaver on Azure VMs on Windows with Azure Files(SMB) to
update the SAP kernel version, required to support clustering SAP on Windows
with file share
November 30, 2021: Added Using Windows DFS-N to support flexible SAPMNT
share creation for SMB-based file share
November 22, 2021: Change in HA for SAP NW on SLES with NFS on Azure Files
and HA for SAP NW on RHEL with NFS on Azure Files to clarify the guidelines for
J2EE SAP systems and share consolidations per storage account.
November 16, 2021: Release of high availability guides for SAP ASCS/ERS with NFS
on Azure files HA for SAP NW on SLES with NFS on Azure Files and HA for SAP NW
on RHEL with NFS on Azure Files
November 15, 2021: Introduction of new proximity placement architecture for
zonal deployments in Azure proximity placement groups for optimal network
latency with SAP applications
November 02, 2021: Changed Azure Storage types for SAP workload and SAP ASE
Azure Virtual Machines DBMS deployment for SAP workload to declare SAP ASE
support for NFS on Azure NetApp Files.
November 02, 2021: Changed SAP workload configurations with Azure Availability
Zones to move Singapore SouthEast to regions for active/active configurations
November 02, 2021: Change in High availability of SAP HANA on Azure VMs on
Red Hat Enterprise Linux to update instructions for HANA scale-up Active/Active
(Read Enabled) configuration.
October 26, 2021: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on RHEL to update resource names in HANA scale-out Active/Active (Read
Enabled) configuration
October 19, 2021: Change in SAP HANA scale-out HSR with Pacemaker on Azure
VMs on RHEL to add instructions for HANA scale-out Active/Active (Read Enabled)
configuration
October 11, 2021: Change in Cluster an SAP ASCS/SCS instance on a Windows
failover cluster by using a cluster shared disk in Azure, Prepare the Azure
infrastructure for SAP HA by using a Windows failover cluster and shared disk for
SAP ASCS/SCS and SAP ASCS/SCS instance multi-SID high availability with
Windows server failover clustering and Azure shared disk to add instructions about
zone redundant storage (ZRS) for Azure shared disk support
SAP certifications and configurations running on Microsoft Azure
Article • 02/10/2023
SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP
to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline supported Azure configurations and the growing list of SAP certifications. This overview might deviate
here and there from the official SAP lists. How to get to the detailed data is documented in the article What
SAP software is supported for Azure deployments.
SAP HANA certified IaaS platforms, covering SAP HANA support for native Azure VMs and HANA Large
Instances:

SAP product | Supported operating systems | Reference
Business One on HANA | SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms
SAP S/4HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms
Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms
HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms
SAP BW/4HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA Certified IaaS Platforms
1928533 - SAP Applications on Azure: Supported Products and Azure VM types, covering all SAP
NetWeaver-based applications, including SAP TREX, SAP LiveCache, and SAP Content Server, and all
databases excluding SAP HANA:

SAP product | Supported operating systems | Supported databases | Reference
SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types
SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types
SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | 1928533 - SAP Applications on Azure: Supported Products and Azure VM types
According to SAP, over 87% of total global commerce is generated by SAP customers
and more SAP systems are running in the cloud each year. The SAP platform provides a
foundation for innovation for many companies and can handle various workloads
natively. Explore our integration section further to learn how you can combine the
Microsoft Azure ecosystem with your SAP workload to accelerate your business
outcomes. Among the scenarios are extensions with Power Platform ("keep the ABAP
core clean"), secured APIs with Azure API Management, automated business processes
with Logic Apps, enriched experiences with SAP Business Technology Platform, native
Microsoft integrations using ABAP Cloud, uniform data blending dashboards with the
Azure Data Platform and more.
For the latest news from the SAP and Azure world, follow the SAP on Microsoft
TechCommunity section and the relevant Azure tags on the SAP Community.
To learn more about the opportunities of extending SAP applications with Azure
services, see this Azure Friday episode:
https://www.youtube-nocookie.com/embed/72kbjv0GJAY
SAP and Microsoft have more than thirty years of partnership, a foundation for
supporting common long-term goals, including a joint commitment by SAP and
Microsoft to simplify and streamline customers' journeys to the cloud. For more
information, see:
Integration resources
Select an area for resources about how to integrate SAP and Azure in that space.
Area | Description
Azure OpenAI service | Learn how to integrate your SAP workloads with Azure OpenAI service.
Microsoft Copilot | Learn how to integrate your SAP workloads with Microsoft Copilots.
SAP RISE managed workloads | Learn how to integrate your SAP RISE managed workloads with Azure services.
Microsoft Office | Learn about Office Add-ins in Excel, doing SAP Principal Propagation with Office 365, SAP Analytics Cloud and Data Warehouse Cloud integration, and more.
Microsoft Power Platform | Learn about the available out-of-the-box SAP applications enabling your business users to achieve more with less.
SAP Fiori | Increase performance and security of your SAP Fiori applications by integrating them with Azure services.
Microsoft Entra ID (formerly Azure Active Directory) | Ensure end-to-end SAP user authentication and authorization with Microsoft Entra ID. Single sign-on (SSO) and multifactor authentication (MFA) are the foundation for a secure and seamless user experience.
Azure Integration Services | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services, and more.
App Development in any language including ABAP and DevOps | Apply best-in-class developer tooling to your SAP app developments and DevOps processes.
Azure Data Services | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, and Analytics Cloud; which connector to choose; how to tune performance; how to efficiently troubleshoot; and more.
Threat Monitoring and Response Automation with Microsoft Security Services for SAP | Learn how to best secure your SAP workload with Microsoft Defender for Cloud, the SAP certified Microsoft Sentinel solution, and immutable vault for Azure Backup. Prevent incidents from happening, detect, and respond to threats in real time.
SAP Business Technology Platform (BTP) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads.
Azure OpenAI service
For more information about integration with Azure OpenAI service, see the following
Azure documentation:
empower SAP RISE enterprise users with Azure OpenAI in multicloud environment
Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core
SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News
Microsoft Copilot
For more information about integration with Microsoft 365 Copilot, see the following
Microsoft resources:
Microsoft Office
For more information about integration with Microsoft Office, see the following Azure
documentation:
Microsoft Teams
For more information about integration with Microsoft Teams, see Native SAP apps on
the Teams marketplace . Also see the following SAP resources.
SAP Fiori
For more information about integration with SAP Fiori, see the following resources:
Web Application Firewall Setup for Internet facing SAP Fiori Apps
Azure Application Gateway Setup for Public and Internal SAP URLs
For how to configure single sign-on, see the following Microsoft Entra documentation
and tutorials:
SAPGUI using Kerberos and Microsoft Entra Domain Services
New SAP events on Azure Event Grid with SAP Event Mesh
Expose SAP Process Orchestration on Azure securely
Connect to SAP from workflows in Azure Logic Apps
Import SAP OData metadata as an API into Azure API Management
Apply SAP Principal Propagation to your Azure hosted APIs
Using Logic Apps (Standard) to connect with SAP BAPIs and RFC
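To make the OData-based integration path above concrete, here's a small sketch of how a consuming Azure app might construct a query against an SAP Gateway OData v2 service. The host, service, and entity-set names are placeholders invented for this example; only the /sap/opu/odata/sap/ path prefix and the $select/$top/$format system query options follow common SAP Gateway and OData v2 conventions.

```python
from urllib.parse import quote, urlencode

def build_odata_url(host, service, entity_set, select=None, top=None):
    """Build an SAP OData v2 query URL as a consuming Azure app might.

    host/service/entity_set are placeholders; $select, $top, and $format
    are standard OData v2 system query options.
    """
    url = f"https://{host}/sap/opu/odata/sap/{quote(service)}/{quote(entity_set)}"
    params = {"$format": "json"}
    if select:
        params["$select"] = ",".join(select)
    if top is not None:
        params["$top"] = str(top)
    # urlencode percent-encodes "$" and "," in the query string
    return url + "?" + urlencode(params)

print(build_odata_url("mygateway.example.com", "ZSALESORDER_SRV",
                      "SalesOrderSet", select=["OrderID", "Status"], top=5))
```

In a real integration, the same URL shape would typically be produced by the SAP connector in Logic Apps or by an API imported into Azure API Management rather than hand-built.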
Also see the following SAP resources:
SAP BTP ABAP Environment (also known as Steampunk) integration with Microsoft
services
SAP S/4HANA Cloud, private edition – ABAP Environment (also known as
Embedded Steampunk) integration with Microsoft services
dotNET speaks OData too, how to implement Azure App Service with SAP Gateway
Apply cloud native deployment practice blue-green to SAP BTP apps with Azure
DevOps
Integrate with Azure OpenAI Service from SAP ABAP via the Microsoft SDK for AI .
For more information about integration with Azure Data Services, see the following
Microsoft and Azure resources:
Integrate SAP Data Warehouse Cloud with Power BI and Azure Synapse Analytics
Extend SAP Integrated Business Planning forecasting algorithms with Azure
Machine Learning
Use Microsoft Defender for Cloud to secure your cloud-infrastructure surrounding the
SAP system including automated responses.
Complementing that, use the SAP certified solution Microsoft Sentinel to protect your
SAP system and SAP Business Technology Platform (BTP) instance from within using
signals from the SAP Audit Log among others.
Learn more about identity focused integration capabilities that power the analysis on
Defender and Sentinel via the Microsoft Entra ID section.
Leverage the immutable vault for Azure Backup to protect your SAP data from
ransomware attacks.
See the Microsoft Security Copilot working with an SAP Incident in action here .
See the following video to experience the SAP security orchestration, automation, and
response workflow with Sentinel in action:
https://www.youtube-nocookie.com/embed/b-AZnR-nQpg
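As a minimal illustration of the kind of rule the Sentinel solution for SAP evaluates against SAP Audit Log signals, the sketch below flags users with repeated failed logons. The event shape and the LOGON_FAILED label are invented for this example; the real SAP Security Audit Log schema and the solution's built-in analytics rules (written in KQL) differ.

```python
from collections import Counter

def flag_brute_force(events, threshold=3):
    """Return users whose failed-logon count meets the threshold.

    Illustrative stand-in for a Sentinel analytics rule; `events` is a
    simplified, made-up audit-log shape, not the real SAP schema.
    """
    failures = Counter(e["user"] for e in events if e["type"] == "LOGON_FAILED")
    return sorted(u for u, n in failures.items() if n >= threshold)

sample = [
    {"user": "JDOE", "type": "LOGON_FAILED"},
    {"user": "JDOE", "type": "LOGON_FAILED"},
    {"user": "JDOE", "type": "LOGON_FAILED"},
    {"user": "ASMITH", "type": "LOGON_OK"},
]
print(flag_brute_force(sample))  # → ['JDOE']
```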
The Defender product family consists of multiple products tailored to provide cloud
security posture management (CSPM) and cloud workload protection (CWPP) for the
various workload types. The following excerpt serves as an entry point to start securing
your SAP system.
See SAP's recommendation to use antivirus software for SAP hosts and systems on both
Linux and Windows-based platforms here . Be aware that the threat landscape has
evolved from file-based attacks to file-less attacks. Therefore, the protection approach
has to evolve beyond pure antivirus capabilities too.
For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft
Defender for Server for SAP applications regarding Next-generation protection
(AntiVirus) and Endpoint Detection and Response (EDR) see the following Microsoft
resources:
SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity
SAP Applications and Microsoft Defender for Windows Server | Microsoft
TechCommunity
Enable the Microsoft Defender for Endpoint integration
Common mistakes to avoid when defining exclusions
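Antivirus exclusions for SAP hosts are typically derived from the SAP directory layout per SID. The sketch below only illustrates building such a candidate list: the paths follow common SAP and HANA filesystem conventions (/usr/sap, /sapmnt/<SID>, /hana/*), but any real exclusion set must follow SAP's and Microsoft's current guidance, not this example.

```python
def sap_av_exclusions(sids, hana=True):
    """Build a candidate antivirus exclusion list for a Linux SAP host.

    Paths reflect common SAP conventions; validate against vendor
    guidance before applying. This is illustrative only.
    """
    paths = ["/usr/sap"]
    for sid in sids:
        paths.append(f"/sapmnt/{sid.upper()}")  # one sapmnt share per SID
    if hana:
        paths += ["/hana/data", "/hana/log", "/hana/shared"]
    return paths

print(sap_av_exclusions(["prd"]))
# → ['/usr/sap', '/sapmnt/PRD', '/hana/data', '/hana/log', '/hana/shared']
```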
7 Note
Certification for the SAP Virus Scan Interface (NW-VSI) doesn't apply to MDE,
because it operates outside of the SAP system. It complements Microsoft Sentinel
for SAP, which interacts with the SAP system directly. See more details and the SAP
certification note for Sentinel below.
Tip
MDE was formerly called Microsoft Defender Advanced Threat Protection (ATP).
Older articles or SAP notes still refer to that name.
Tip
Microsoft Defender for Server includes Endpoint detection and response (EDR)
features that are provided by Microsoft Defender for Endpoint Plan 2.
SAP BTP
For more information about Azure integration with SAP Business Technology Platform
(BTP), see the following SAP resources:
Customer resources
These resources include Customer Engagement Initiatives (CEI), public BETAs, and
Customer Influence programs:
SAP S/4HANA Cloud - MS Teams Integration - Jul 2024 | SAP Customer Influence
SAP Event Mesh integration with Microsoft Azure Event Grid - Aug 2022 | SAP
Customer Influence
SAP Private Link Service GA announcement after public Beta - Jun 2022 | SAP Blogs
SAP Private Link service CEI - Jul 2022 | SAP Customer Influence
Next steps
Discover native SAP applications available on the Microsoft Teams marketplace
Browse the out-of-the-box SAP applications available on Microsoft Power Platform
Understand SAP data integration with Azure - Cloud Adoption Framework
Identify your SAP data sources - Cloud Adoption Framework
Explore joint reference architectures on the SAP Discovery Center
Secure your SAP NetWeaver email needs with Exchange Online
Migrate your legacy SAP middleware to Azure
SAP workload on Azure virtual machine
supported scenarios
Article • 02/10/2023
7 Note
HANA Large Instance service is in sunset mode and doesn't accept new customers
anymore. Providing units for existing HANA Large Instance customers is still
possible. For alternatives, check the offers of HANA certified Azure VMs in the
HANA Hardware Directory . For scenarios that were and still are supported for
existing HANA Large Instance customers with HANA Large Instances, check the
article Supported scenarios for HANA Large Instances.
Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS
service with Azure Active Directory Domain Services (traditional AD managed by
Microsoft), and the native Azure Active Directory. SAP components hosted on the
Windows OS often rely on Windows Active Directory: either the traditional Active
Directory hosted on-premises by you, or Azure Active Directory Domain Services (still in
testing). These SAP components can't function with the native Azure Active Directory,
because there are still larger gaps in functionality between Active Directory in its
on-premises form or its SaaS form (Azure Active Directory Domain Services) and the
native Azure Active Directory. This dependency is the reason why Azure Active Directory
accounts aren't supported for applications based on SAP NetWeaver and S/4HANA on the
Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
The above doesn't affect the usage of Azure Active Directory accounts for single
sign-on (SSO) scenarios with SAP applications.
2-Tier configuration
An SAP 2-Tier configuration consists of a combined layer, where the SAP DBMS and
application layer run on the same server or VM unit, and a second tier, the user
interface layer. In a 2-Tier configuration, the DBMS and SAP application layer share the
resources of the Azure VM. As a result, you need to configure the different components
so that they don't compete for resources, and be careful not to oversubscribe the
resources of the VM. Such a configuration doesn't provide any high availability beyond
the Azure service level agreements of the different Azure components involved.
3-Tier configuration
In such configurations, you separate the SAP application layer and the DBMS layer into
different VMs. You usually do that for larger systems, and to be more flexible with the
resources of the SAP application layer. In the simplest setup, there's no high
availability beyond the Azure service level agreements of the different Azure
components involved.
7 Note
When running multiple database instances on one host, you need to make sure that the
different instances aren't competing for resources and thereby exceeding the physical
resource limits of the VM. This is especially true for memory, where you need to cap the
memory any one of the instances sharing the VM can allocate. That also might be true
for the CPU resources the different database instances can consume. All the database
systems mentioned have configurations that allow limiting memory allocation and CPU
resources on an instance level. For such a configuration to be supported on Azure VMs,
the disks or volumes used for the data and log/redo log files of the databases managed
by the different instances must be separate. In other words, data or log/redo log files of
databases managed by different DBMS instances aren't supposed to share the same disks
or volumes.
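The memory-capping arithmetic described above can be sketched as follows. The fixed OS reservation and the even split between instances are illustrative assumptions, not SAP guidance; for SAP HANA, the resulting value would map to the global_allocation_limit parameter (global.ini, [memorymanager] section, value in MB), while other DBMS products have their own equivalents.

```python
def per_instance_cap_mb(vm_memory_gb, instances, os_reserve_gb=8):
    """Compute a per-instance memory cap (MB) for DBMS instances sharing a VM.

    Assumptions (illustrative only): a fixed OS reservation and an even
    split between instances. For HANA, the result could feed
    global_allocation_limit; other DBMS have equivalent settings.
    """
    usable_mb = (vm_memory_gb - os_reserve_gb) * 1024
    if usable_mb <= 0 or instances < 1:
        raise ValueError("VM too small or invalid instance count")
    return usable_mb // instances

# Two database instances sharing a 256 GiB VM:
print(per_instance_cap_mb(256, 2))  # → 126976
```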
A 3-Tier configuration where multiple SAP dialog instances run within Azure VMs can
look like this:
For simplification, we didn't distinguish between SAP Central Services and SAP dialog
instances in the SAP application layer. In this simple 3-Tier configuration, there would be
no high availability protection for SAP Central Services. For production systems, it's not
recommended to leave SAP Central Services unprotected. For specifics on so-called
multi-SID configurations around SAP Central Instances and the high availability of such
multi-SID configurations, see later sections of this document.
For Azure VMs, the following high availability configurations are supported on DBMS
level:
SAP HANA System Replication based on Linux Pacemaker on SUSE and Red Hat.
See the detailed articles:
High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server
High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
SAP HANA scale-out n+m configurations using Azure NetApp Files on SUSE and
Red Hat. Details are listed in these articles:
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on SUSE Linux Enterprise Server}
Deploy a SAP HANA scale-out system with standby node on Azure VMs by
using Azure NetApp Files on Red Hat Enterprise Linux
SQL Server failover cluster based on Windows Scale-Out File Services. However, the
recommendation for production systems is to use SQL Server Always On instead of
clustering, because SQL Server Always On provides better availability using separate
storage. Details are described in this article:
Configure a SQL Server failover cluster instance on Azure virtual machines
SQL Server Always On is supported with the Windows operating system for SQL
Server on Azure. This configuration is the default recommendation for production
SQL Server instances on Azure. Details are described in these articles:
Introducing SQL Server Always On availability groups on Azure virtual machines.
Configure an Always On availability group on Azure virtual machines in different
regions.
Configure a load balancer for an Always On availability group in Azure.
Oracle Data Guard for Windows and Oracle Linux. Details for Oracle Linux can be
found in this article:
Implement Oracle Data Guard on an Azure Linux virtual machine
IBM Db2 HADR on SUSE and RHEL. Detailed documentation for SUSE and RHEL
using Pacemaker is provided here:
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
with Pacemaker
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux
Server
SAP ASE and SAP maxDB configuration as detailed in these documents:
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
SAP MaxDB, liveCache, and Content Server deployment on Azure VMs
HANA Large Instances high availability scenarios are detailed in:
Supported scenarios for HANA Large Instances- HSR with fencing for high
availability
Supported scenarios for HANA Large Instances - Host auto failover (1+1)
) Important
For none of the scenarios described above do we support configurations of multiple
DBMS instances in one VM. That means in each case, only one database instance
can be deployed per VM and protected with the described high availability
methods. Protecting multiple DBMS instances under the same Windows or
Pacemaker failover cluster is NOT supported at this point in time. Also, Oracle Data
Guard is supported for single-instance-per-VM deployment cases only.
Various database systems allow hosting multiple databases under one DBMS instance.
With SAP HANA, for example, multiple databases can be hosted in multiple database
containers (MDC). Such multi-database configurations are supported as long as they
work within one failover cluster resource. Configurations that aren't supported are
cases where multiple cluster resources would be required, as in configurations where
you define multiple SQL Server availability groups under one SQL Server instance.
Depending on the DBMS and/or operating system, components like Azure Load Balancer
might or might not be required as part of the solution architecture.
Specifically for maxDB, the storage configuration needs to be different. With maxDB, the
data and log files need to be located on shared storage for high availability
configurations. maxDB is the only DBMS for which shared storage is supported for high
availability. For all other DBMS, separate storage stacks per node are the only supported
disk configurations.
Other high availability frameworks are known to exist and are known to run on
Microsoft Azure as well. However, Microsoft didn't test those frameworks. If you want to
build your high availability configuration with those frameworks, you need to work with
the provider of that software to:
) Important
Microsoft Azure Marketplace offers a variety of soft appliances that provide storage
solutions on top of Azure native storage. These soft appliances can be used to
create NFS shares as well that theoretically could be used in the SAP HANA scale-
out deployments where a standby node is required. Due to various reasons, none
of these storage soft appliances is supported for any of the DBMS deployments by
Microsoft and SAP on Azure. Deployments of DBMS on SMB shares aren't supported
at all at this point in time. Deployments of DBMS on NFS shares are limited to NFS
4.1 shares on Azure NetApp Files .
Windows Failover Cluster Server using Windows Scale-out File Services for sapmnt
and global transport directory. Details are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file
share in Azure
Prepare Azure infrastructure for SAP high availability by using a Windows
failover cluster and file share for SAP ASCS/SCS instances
Windows Failover Cluster Server using SMB share based on Azure NetApp Files
for sapmnt and global transport directory. Details are listed in the article:
High availability for SAP NetWeaver on Azure VMs on Windows with Azure
NetApp Files(SMB) for SAP applications
Windows Failover Cluster Server based on SIOS Datakeeper . Though documented
by Microsoft, you need a support relationship with SIOS so that you can engage
with SIOS support when using this solution. Details are described in the article:
Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a
cluster shared disk in Azure
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for SAP ASCS/SCS
Pacemaker on the SUSE operating system, creating a highly available NFS share
using two SUSE VMs and DRBD for file replication. Details are documented in the
article:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server for SAP applications
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
Pacemaker on the SUSE operating system using NFS shares provided by Azure
NetApp Files . Details are documented in:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP applications
Pacemaker on the Red Hat operating system with an NFS share hosted on a GlusterFS
cluster. Details can be found in the articles:
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
Pacemaker on the Red Hat operating system with an NFS share hosted on Azure NetApp
Files . Details are described in the article:
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
Of the listed solutions, you need a support relationship with SIOS to support the
Datakeeper product and to engage with SIOS directly if problems are encountered.
Depending on the way you licensed the Windows, Red Hat, and/or SUSE OS, you could
also be required to have a support contract with your OS provider to get full support
for the listed high availability configurations.
The list shown doesn't mention the Oracle Linux operating system. Oracle Linux doesn't
support Pacemaker as a cluster framework. If you want to deploy your SAP system on
Oracle Linux and you need a high availability framework for Oracle Linux, you need to
work with third-party suppliers. One of these suppliers is SIOS, with their Protection
Suite for Linux, which is supported by SAP on Azure. For more information, read SAP
note #1662610 - Support details for SIOS Protection Suite for Linux.
Windows Failover Cluster Server with Windows Scale-out File Server can be
deployed on all native Azure storage types, except Azure NetApp Files. However,
the recommendation is to use Premium Storage because of the superior service
level agreements in throughput and IOPS.
Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on
Azure NetApp Files. SMB shares hosted on Azure Premium Files are supported for
this scenario as well. Azure Standard Files isn't supported.
Windows Failover Cluster Server with a Windows shared disk based on SIOS
Datakeeper can be deployed on all native Azure storage types, except Azure
NetApp Files. However, the recommendation is to use Premium Storage because of
the superior service level agreements in throughput and IOPS.
SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported.
SUSE or Red Hat Pacemaker using NFS shares on Azure Premium Files with LRS or
ZRS is supported. Azure Standard Files isn't supported.
SUSE Pacemaker using a DRBD configuration between two VMs is supported using
native Azure storage types, except Azure NetApp Files. However, we recommend
using one of the first-party services, Azure Premium Files or Azure NetApp
Files.
Red Hat Pacemaker using GlusterFS for providing the NFS share is supported using
native Azure storage types, except Azure NetApp Files. However, we recommend
using one of the first-party services, Azure Premium Files or Azure NetApp
Files.
) Important
Microsoft Azure Marketplace offers a variety of soft appliances that provide storage
solutions on top of Azure native storage. These storage soft appliances can be used
to create NFS or SMB shares as well that theoretically could be used in the failover
clustered SAP Central Services as well. These solutions aren't directly supported for
SAP workload by Microsoft. If you decide to use such a solution to create your NFS
or SMB share, support for the SAP Central Service configuration needs to be
provided by the third-party owning the software in the storage soft appliance.
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover
Clustering and shared disk on Azure
SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover
Clustering and file share on Azure
For SUSE, a multi-SID cluster based on Pacemaker is supported as well. So far the
configuration is supported for:
The configuration is documented in High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications multi-SID guide
Azure Premium Storage v1, including Azure Write accelerator for the /hana/log
volume
Azure Premium Storage v2
Ultra disk
Azure NetApp Files
SAP HANA scale-out configurations for OLAP or S/4HANA with standby node(s) are
exclusively supported with NFS shares hosted on Azure NetApp Files.
For further information on exact storage configurations with or without standby node,
check the articles:
DBMS layer
For the DBMS layer, configurations using the DBMS native replication mechanisms, like
Always On, Oracle Data Guard, Db2 HADR, SAP ASE Always-On, or HANA System
Replication are supported. It's mandatory that the replication stream in such cases is
asynchronous, instead of synchronous as in typical high availability scenarios that are
deployed within a single Azure region. A typical example of such a supported DBMS
disaster recovery configuration is described in the article SAP HANA availability across
Azure regions. The second graphic in that section describes a scenario with HANA as an
example. The main databases supported for SAP applications are all able to be deployed
in such a scenario.
It's supported to use a smaller VM as the target instance in the disaster recovery region,
since that VM doesn't experience the full workload traffic. Doing so, you need to keep
the following considerations in mind:
Smaller VM types don't allow as many attached disks as larger VMs
Smaller VMs have less network and storage throughput
Resizing across VM families can be a problem when the different VMs are
collected in one Azure availability set, or when the resizing should happen
between the M-Series family and Mv2 family of VMs
The database instance needs enough CPU and memory resources to receive the
stream of changes and to apply these changes to the data, both with minimal
delay
More details on limitations of different VM sizes can be found on the VM sizes page
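The sizing considerations above lend themselves to a mechanical pre-check. The sketch below is a hedged illustration: the VM size names and limit numbers in the table are placeholders, not authoritative Azure limits; take real values from the VM sizes page.

```python
# Illustrative VM limit table; numbers are placeholders, not Azure facts.
SIZES = {
    "LargeVM":  {"max_disks": 32, "storage_mbps": 1700},
    "SmallVM":  {"max_disks": 16, "storage_mbps": 850},
}

def dr_target_ok(size, disks_needed, replication_mbps):
    """Check whether a candidate DR-target VM size can attach the
    production disk set and sustain the replication stream."""
    limits = SIZES[size]
    return (disks_needed <= limits["max_disks"]
            and replication_mbps <= limits["storage_mbps"])

# A production system with 20 data disks replicating at 300 MB/s:
print(dr_target_ok("SmallVM", 20, 300))  # → False: too few data disks
print(dr_target_ok("LargeVM", 20, 300))  # → True
```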
7 Note
Usage of Azure Site Recovery has not been tested for DBMS deployments under
SAP workload. As a result it's not supported for the DBMS layer of SAP systems at
this point in time. Other methods of replications by Microsoft and SAP that aren't
listed aren't supported. Using third party software for replicating the DBMS layer of
SAP systems between different Azure Regions, needs to be supported by the
vendor of the software and will not be supported through Microsoft and SAP
support channels.
Non-DBMS layer
For the SAP application layer and eventual shares or storage locations that are needed,
the two major scenarios are used by customers:
The disaster recovery targets in the second Azure region aren't being used for any
production or non-production purposes. In this scenario, the VMs that function as
disaster recovery targets are ideally not deployed, and the image and changes to
the images of the production SAP application layer are replicated to the disaster
recovery region. A functionality that can perform such a task is Azure Site
Recovery, which supports an Azure-to-Azure replication scenario like this.
The disaster recovery targets are VMs that are actually in use by non-production
systems. The whole SAP landscape is spread across two different Azure regions,
with production systems usually in one region and non-production systems in
another region. In many customer deployments, the customer has a non-production
system that is equivalent to a production system, with production application
instances pre-installed on the application layer of the non-production systems. In
a failover event, the non-production instances would be shut down, the virtual
names of the production VMs moved to the non-production VMs (after assigning
new IP addresses in DNS), and the pre-installed production instances started.
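The failover flow in the second scenario can be sketched as an ordered plan. Everything here (host names, step wording) is illustrative; a real runbook would drive SAP start/stop tooling and DNS updates rather than return strings.

```python
def failover_plan(prod_sid, nonprod_sid, new_ips):
    """Return the ordered manual-failover steps described above:
    stop non-production, repoint DNS, start pre-installed production.

    SIDs and the new_ips mapping (virtual host -> new IP) are
    placeholders for illustration.
    """
    steps = [f"stop SAP instances of {nonprod_sid}"]
    steps += [f"update DNS: {host} -> {ip}" for host, ip in new_ips.items()]
    steps.append(
        f"start pre-installed {prod_sid} instances on former {nonprod_sid} VMs")
    return steps

for step in failover_plan("PRD", "QAS", {"sapprdapp1": "10.1.0.4"}):
    print(step)
```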
Non-supported scenarios
There's a list of scenarios that aren't supported for SAP workload on Azure
architectures. Not supported means SAP and Microsoft aren't able to deliver support
for these configurations and need to defer to any involved third party that
provided software to establish such architectures. Two of the categories are:
Storage soft appliances: There are various storage soft appliances in the market.
Some of the vendors offer own documentation on how to use their storage soft
appliances on Azure related to SAP software. Support of configurations or
deployments involving such storage soft appliances needs to be provided by the
vendor of the storage soft appliance. This fact is also manifested in SAP support
note #2015553.
High Availability frameworks: Only Pacemaker and Windows Server Failover Cluster
are supported high availability frameworks for SAP workload on Azure. As
mentioned earlier, the solution of SIOS Datakeeper is described and documented
by Microsoft. Nevertheless, the components of SIOS Datakeeper need to be
supported through SIOS as the vendor providing those components. SAP also
listed other certified high availability frameworks in various SAP notes. Some of
them were certified by the third-party vendor for Azure as well. Nevertheless,
support for configurations using those products need to be provided by the
product vendor. Different vendors have different integration into the SAP support
processes. You should clarify what support process works best for the particular
vendor before deciding to use the product with SAP configurations deployed on
Azure.
Shared disk clusters where database files reside on the shared disks aren't
supported, except for maxDB. For all other databases, the supported solution is to
have separate storage locations, instead of an SMB or NFS share or shared disk, to
configure high-availability scenarios
Deployment scenarios that introduce a larger network latency between the SAP
application tier and the SAP DBMS tier, as in NetWeaver, S/4HANA, and, for example, Hybris.
This includes:
Deploying one of the tiers on-premises whereas the other tier is deployed in
Azure
Deploying the SAP application tier of a system in a different Azure region than
the DBMS tier
Deploying one tier in datacenters that are co-located to Azure and the other tier
in Azure, except where such an architecture pattern is provided by an Azure
native service
Deploying network virtual appliances between the SAP application tier and the
DBMS layer
Using storage that is hosted in datacenters co-located to Azure datacenter for
the SAP DBMS tier or SAP global transport directory
Deploying the two layers with two different cloud vendors. For example,
deploying the DBMS tier in Oracle Cloud Infrastructure and the application tier
in Azure
Multi-Instance HANA Pacemaker cluster configurations
Windows Cluster configurations with shared disks through SOFS or SMB on ANF
for SAP databases supported on Windows. Instead, we recommend using the
native high availability replication of the particular database together with
separate storage stacks
Deployment of SAP databases supported on Linux with database files that are
located in NFS shares on top of ANF except for SAP HANA, Oracle on Oracle Linux,
and Db2 on SUSE and Red Hat
Deployment of Oracle DBMS on any other guest OS than Windows and Oracle
Linux. See also SAP support note #2039619
Scenarios that we didn't test, and therefore have no experience with, include:
Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend using
the database native asynchronous replication functionality for a potential disaster
recovery configuration
Next Steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP
NetWeaver
What SAP software is supported for
Azure deployments
Article • 02/10/2023
This article describes how you can find out what SAP software is supported for Azure
deployments and what the necessary operating system releases or DBMS releases are.
To evaluate whether your current SAP software is supported, and what OS and DBMS
releases are supported with your SAP software in Azure, you need access
to:
The first section lists the minimum requirements for operating system releases that are
supported with SAP software in Azure VMs in general. If you don't meet those
minimum requirements and run older releases of these operating systems, you need to
upgrade your OS release to such a minimum release, or even more recent releases. It is
correct that Azure in general supports older releases of some of those operating
systems. But the restrictions or minimum releases as listed are based on tests and
qualifications executed, and aren't going to be extended further back.
7 Note
There are some specific VM types, HANA Large Instances, or SAP workloads that
require more recent OS releases. Such cases are mentioned throughout the
document and are clearly documented either in SAP notes or other SAP
publications.
The following section lists the general SAP platforms and releases that are supported
and, more importantly, the SAP kernels that are supported. It lists the
NetWeaver/ABAP or Java stacks that are supported and the minimum kernel releases
they need. More recent ABAP stacks are supported on Azure without minimum
kernel releases, since changes for Azure were implemented from the start of the
development of the more recent stacks.
Whether the SAP applications you are running are covered by the minimum
releases stated. If not, you need to define a new target release and check in the SAP
Product Availability Matrix which operating system builds and DBMS combinations
are supported with the new target release, so that you can choose the right
operating system release and DBMS release
Whether you need to update your SAP kernels in a move to Azure
Whether you need to update SAP Support Packages, especially Basis Support
Packages, which can be required for cases where you need to move to a more
recent DBMS release
The next section goes into more details on other SAP products and DBMS releases that
are supported by SAP on Azure for Windows and Linux.
7 Note
The minimum releases of the different DBMS are carefully chosen and might not
always reflect the whole spectrum of DBMS releases the different DBMS vendors
support on Azure in general. Many SAP workload related considerations were taken
into account to define those minimum releases. There is no effort to test and
qualify older DBMS releases.
7 Note
The minimum releases listed represent older versions of operating systems
and database releases. We highly encourage you to use the most recent operating
system and database releases. In a lot of cases, more recent operating system and
database releases took the use case of running in public cloud into consideration
and adapted code to optimize for running in public cloud, or more specifically
Azure.
The minimum Oracle release supported on Azure VMs that are certified for NetWeaver is Oracle 11g Release 2 Patchset 3 (11.2.0.4)
Only Windows and Oracle Linux qualify as guest operating systems. Exact releases of the OS and related minimum DBMS releases are listed in the note
The support of Oracle Linux extends to the Oracle DBMS client as well. This means that all SAP components, like dialog instances of the ABAP or Java stack, need to run on Oracle Linux as well. Only SAP components within such an SAP system that don't connect to the Oracle DBMS are allowed to run a different Linux operating system
Oracle RAC isn't supported
Oracle ASM is supported in some cases. Details are listed in the note
Non-Unicode SAP systems are only supported with application servers running on a Windows guest OS. The guest operating system of the DBMS can be Oracle Linux or Windows. The reason for this restriction becomes apparent when checking the SAP Product Availability Matrix (PAM): SAP never released non-Unicode SAP kernels for Oracle Linux
Knowing the DBMS releases supported on the targeted Azure infrastructure, you need to check the SAP Product Availability Matrix to verify that the required OS and DBMS releases are supported with the SAP product releases you intend to run.
Oracle Linux
The most prominent question around Oracle Linux is whether SAP supports the Red Hat compatible kernel that's an integral part of Oracle Linux. For details, read SAP support note #1565179 .
For running SAP HANA, SAP has more and stricter conditions that the infrastructure needs to meet than for running NetWeaver or other SAP applications and DBMSs. As a result, a smaller number of Azure VMs qualify for running the SAP HANA DBMS. The Azure infrastructure supported for SAP HANA is listed in the so-called SAP HANA hardware directory .
Note
The units starting with the letter 'S' are HANA Large Instances units.
Note
SAP has no specific certification dependent on the SAP HANA major releases. Contrary to common opinion, the Certification scenario column in the HANA certified IaaS platforms list makes no statement about the HANA major or minor release certified. You need to assume that all the units listed can be used for HANA 1.0 and HANA 2.0, as long as the certified operating system releases for the specific units are supported by HANA 1.0 releases as well.
For the usage of SAP HANA, different minimum OS releases may apply than for the general NetWeaver cases. You need to check the supported operating systems for each unit individually, since they might vary. To do so, select a unit; among the details that appear are the operating systems supported for that specific unit.
Note
Azure HANA Large Instance units are more restrictive with supported operating systems compared to Azure VMs. On the other hand, Azure VMs may enforce more recent operating system releases as minimum releases. This is especially true for some of the larger VM units that required changes to Linux kernels.
Knowing the supported OS for the Azure infrastructure, you need to check SAP support
note #2235581 for the exact SAP HANA releases and patch levels that are supported
with the Azure units you are targeting.
Important
The step of checking the exact SAP HANA releases and patch levels supported is
very important. In a lot of cases, support of a certain OS release is dependent on a
specific patch level of the SAP HANA executables.
Once you know the specific HANA releases you can run on the targeted Azure infrastructure, check the SAP Product Availability Matrix to find out whether there are restrictions with the SAP product releases that support the HANA releases you filtered out.
For Azure VMs, these SAPS throughput numbers are documented in SAP support note
#1928533 . For Azure HANA Large Instance units, the SAPS throughput numbers are
documented in SAP support note #2316233 .
Looking into SAP support note #1928533 , the following remarks apply:
For M-Series Azure VMs and Mv2-Series Azure VMs, different minimum OS
releases apply than for other Azure VM types. The requirement for more recent
OS releases is based on changes the different operating system vendors had to
provide in their operating system releases to either enable their operating systems
running on the specific Azure VM types or optimize performance and throughput
of SAP workload on those VM types
There are two tables that specify different VM types. The second table specifies SAPS throughput for Azure VM types that support Azure standard storage only. DBMS deployment on the units specified in the second table of the note isn't supported.
For the Business Objects BI platform, SAP support note #2145537 gives a list of SAP Business Objects products supported on Azure. If there are questions around components or combinations of software releases and OS releases that don't seem to be listed or supported, and that are more recent than the minimum releases listed, you need to open an SAP support request against the component you require support for.
For Business Objects Data Services, SAP support note #2288344 explains the minimum support of SAP Data Services running on Azure.
Note
As indicated in the SAP support note, you need to check the SAP PAM to identify the correct support package level to be supported on Azure.
Support for SAP BPC 10.1 SP08 is described in SAP support note #2451795 .
Support for SAP Hybris Commerce Platform on Azure is detailed in the Hybris Documentation . As for supported DBMSs for SAP Hybris Commerce Platform, it lists:
SQL Server and Oracle on the Windows operating system platform. The same minimum releases apply as for SAP NetWeaver. See SAP support note #1928533 for details
SAP HANA on Red Hat and SUSE Linux. SAP HANA certified VM types are required, as documented earlier in this document. SAP (Hybris) Commerce Platform is considered OLTP workload
Azure SQL Database, as of SAP (Hybris) Commerce Platform version 1811
Next steps
Read next steps in the Azure Virtual Machines planning and implementation for SAP
NetWeaver
SAP workloads on Azure: planning and
deployment checklist
Article • 06/14/2023
This checklist is designed for customers moving SAP applications to Azure infrastructure as a service. SAP applications in this document represent SAP products running the SAP kernel, including SAP NetWeaver, S/4HANA, BW, BW/4, and others. Throughout the duration of the project, the customer and/or SAP partner should review the checklist. It's important to note that many of the checks are completed at the beginning of the project and during the planning phase. After the deployment is done, seemingly straightforward changes to deployed Azure infrastructure or SAP software releases can become complex.
Review the checklist at key milestones during your project. Doing so will enable you to detect small problems before they become large problems. You'll also have enough time to re-engineer and test any necessary changes. Don't consider this checklist complete. Depending on your situation, you might need to perform additional checks.
The checklist doesn't include tasks that are independent of Azure. For example, SAP application interfaces might change during a move to the Azure platform or to a hosting provider. SAP documentation and support notes also contain further tasks that aren't Azure-specific but need to be part of your overall planning checklist.
This checklist can also be used for systems that are already deployed. New features or
changed recommendations might apply to your environment. It's useful to review the
checklist periodically to ensure you're aware of new features in the Azure platform.
The checklist spans the planning phase, production preparation, and the dress rehearsal / user acceptance testing / mock cut-over / go-live phases.
Planning phase
A block diagram for the solution showing the SAP and non-SAP applications
and services
An SAP Quicksizer project based on business document volumes. The output of the Quicksizer is then mapped to compute, storage, and networking components in Azure. As an alternative to SAP Quicksizer, perform diligent sizing based on the current workload of the source SAP systems, taking into account the available information, such as DBMS workload reports, SAP EarlyWatch reports, and compute and storage performance indicators.
Business continuity and disaster recovery architecture.
Detailed information about OS, DB, kernel, and SAP support pack versions. It's
not necessarily true that every OS release supported by SAP NetWeaver or
S/4HANA is supported on Azure VMs. The same is true for DBMS releases.
Check the following sources to align and if necessary, upgrade SAP releases,
DBMS releases, and OS releases to ensure SAP and Azure support. You need
to have release combinations supported by SAP and Azure to get full support
from SAP and Microsoft. If necessary, you need to plan for upgrading some
software components. More details on supported SAP, OS, and DBMS
software are documented here:
What SAP software is supported for Azure deployments
SAP note 1928533 - SAP Applications on Microsoft Azure: Supported Products and Azure VM types . This note defines the minimum OS and DBMS releases supported on Azure VMs. The note also provides the SAP sizing for SAP-supported Azure VMs.
SAP note 2015553 - SAP on Microsoft Azure: Support prerequisites . This note defines prerequisites around Azure storage, networking, monitoring, and the support relationship needed with Microsoft.
SAP note 2039619 . This note defines the Oracle support matrix for Azure. Oracle supports only Windows and Oracle Linux as guest operating systems on Azure for SAP workloads. This support statement also applies to the SAP application layer that runs SAP instances, as long as they contain the Oracle client.
SAP HANA-supported Azure VMs are listed on the SAP website . The details for each entry contain specifics and requirements, including the supported OS version. This might not match the latest OS version as per SAP note 2235581 .
SAP Product Availability Matrix .
High-level storage architecture decisions based on Azure storage types for SAP workload
Managed Disks attached to each VM
Filesystem layouts and sizing
SMB and/or NFS volume layout and sizes, mount points where applicable
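As a sketch of how such a layout translates into Azure resources, the following Azure CLI commands create a Premium SSD managed disk and attach it to a VM. All names, the size, and the caching setting are hypothetical examples; choose them according to your sizing and the DBMS storage guidance.

```shell
# Hypothetical names and sizes; adjust to your storage design.
az disk create \
  --resource-group rg-sap-prod \
  --name hanadb01-data01 \
  --size-gb 1024 \
  --sku Premium_LRS

# Attach the disk to the database VM; the caching choice depends on the
# volume type (for example, no caching for HANA log volumes).
az vm disk attach \
  --resource-group rg-sap-prod \
  --vm-name hanadb01 \
  --name hanadb01-data01 \
  --caching ReadOnly
```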
High availability, backup and disaster recovery architecture
Based on RTO and RPO, define what the high availability and disaster
recovery architecture needs to look like.
Understand the use of different deployment types for optimal protection.
Considerations for Azure Virtual Machines DBMS deployment for SAP workloads and related documents. In Azure, using a shared disk configuration for the DBMS layer, as described for SQL Server for example, isn't supported. Instead, use solutions like:
SQL Server Always On
HANA System Replication
Oracle Data Guard
IBM Db2 HADR
For disaster recovery across Azure regions, review the solutions offered by
different DBMS vendors. Most of them support asynchronous replication or
log shipping.
For the SAP application layer, determine whether you'll run your business
regression test systems, which ideally are replicas of your production
deployments, in the same Azure region or in your DR region. In the second
case, you can target that business regression system as the DR target for
your production deployments.
Look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see Set up disaster recovery for a multi-tier SAP NetWeaver app deployment.
For projects required to remain in a single region for compliance reasons,
consider a combined HADR configuration by using Azure Availability Zones.
An inventory of all SAP interfaces and the connected systems (SAP and non-
SAP).
Design of foundation services. This design should include the following items,
many of which are covered by the landing zone accelerator for SAP:
Network topology within Azure and assignment of different SAP environments.
Active Directory and DNS design.
Identity management solution for both end users and administration.
Azure role-based access control (Azure RBAC) structure for teams that manage infrastructure and SAP applications in Azure.
Azure resource naming strategy.
Security operations for Azure resources and the workloads running within them.
Security concept for protecting your SAP workload. This concept should include all aspects: networking and perimeter monitoring, application and database security, operating system security, and any infrastructure measures required, such as encryption. Identify the requirements with your compliance and security teams.
Microsoft recommends a Professional Direct, Premier, or Unified Support contract. Identify your escalation paths and contacts for support with Microsoft. For SAP support requirements, see SAP note 2015553 .
The number of Azure subscriptions and core quota for the subscriptions. Open
support requests to increase quotas of Azure subscriptions as needed.
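Current consumption against a subscription's vCPU quotas can be checked with the Azure CLI before opening a quota request; the region below is only an example.

```shell
# List compute quota and current usage for a candidate region.
az vm list-usage --location westeurope --output table

# Narrow the table down to the per-VM-family vCPU quota rows.
az vm list-usage --location westeurope --output table | grep -i "Family"
```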
A data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit large volumes of data. See this SAP guide about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general.
An automated deployment approach. Many customers start with scripts, using a combination of PowerShell, the Azure CLI, Ansible, and Terraform. Microsoft-developed solutions for SAP deployment automation are:
Azure Center for SAP solutions, an Azure service to deploy and operate an SAP system's infrastructure
SAP on Azure Deployment Automation, an open-source orchestration tool for deploying and maintaining SAP environments
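A scripted deployment can start as small as the sketch below before being moved into Terraform, Ansible, or one of the Microsoft frameworks. Resource names, the image URN, and the VM size are hypothetical examples; verify the image with `az vm image list` and size availability in your region before use.

```shell
# Hypothetical names throughout; verify the image URN and the size's
# availability in your target region before running.
az group create --name rg-sap-dev --location westeurope

az vm create \
  --resource-group rg-sap-dev \
  --name sapapp01 \
  --image "SUSE:sles-sap-15-sp5:gen2:latest" \
  --size Standard_E16ds_v5 \
  --admin-username azureuser \
  --generate-ssh-keys
```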
Note
Define a regular design and deployment review cadence between you as the
customer, the system integrator, Microsoft, and other involved parties.
Tip
The same quality checks and additional insights are executed regularly as part of the service when SAP systems are deployed to or registered with Azure Center for SAP solutions.
Further tools allow easier deployment checks, help document findings, plan the next remediation steps, and generally optimize your SAP on Azure landscape.
Next steps
See these articles:
In Azure, organizations can get the cloud resources and services they need without
completing a lengthy procurement cycle. But running your SAP workload in Azure
requires knowledge about the available options and careful planning to choose the
Azure components and architecture to power your solution.
Azure offers a comprehensive platform for running your SAP applications. Azure
infrastructure as a service (IaaS) and platform as a service (PaaS) offerings combine to
give you optimal choices for a successful deployment of your entire SAP enterprise
landscape.
This article complements SAP documentation and SAP Notes, the primary sources for
information about how to install and deploy SAP software on Azure and other platforms.
Definitions
Throughout this article, we use the following terms:
SAP component: An individual SAP application like SAP S/4HANA, SAP ECC, SAP
BW, or SAP Solution Manager. An SAP component can be based on traditional
Advanced Business Application Programming (ABAP) or Java technologies, or it
can be an application that's not based on SAP NetWeaver, like SAP
BusinessObjects.
SAP environment: Multiple SAP components that are logically grouped to perform
a business function, such as development, quality assurance, training, disaster
recovery, or production.
SAP landscape: The entire set of SAP assets in an organization's IT landscape. The
SAP landscape includes all production and nonproduction environments.
SAP system: The combination of a database management system (DBMS) layer
and an application layer. Two examples are an SAP ERP development system and
an SAP BW test system. In an Azure deployment, these two layers can't be
distributed between on-premises and Azure. An SAP system must be either
deployed on-premises or deployed in Azure. However, you can operate different
systems within an SAP landscape in either Azure or on-premises.
Resources
The entry point for documentation that describes how to host and run an SAP workload
on Azure is Get started with SAP on an Azure virtual machine. In the article, you find
links to other articles that cover:
Important
For prerequisites, the installation process, and details about specific SAP
functionality, it's important to read the SAP documentation and guides carefully.
This article covers only specific tasks for SAP software that's installed and operated
on an Azure virtual machine (VM).
The following SAP Notes form the base of the Azure guidance for SAP deployments:
2233094 DB6: SAP Applications on Azure Using IBM Db2 for Linux, UNIX, and Windows
For general default and maximum limitations of Azure subscriptions and resources, see
Azure subscription and service limits, quotas, and constraints.
Scenarios
SAP services often are considered among the most mission-critical applications in an
enterprise. The applications' architecture and operations are complex, and it's important
to ensure that all requirements for availability and performance are met. An enterprise
typically thinks carefully about which cloud provider to choose to run such business-critical processes.
Azure is the ideal public cloud platform for business-critical SAP applications and
business processes. Most current SAP software, including SAP NetWeaver and SAP
S/4HANA systems, can be hosted in the Azure infrastructure today. Azure offers more
than 800 CPU types and VMs that have many terabytes of memory.
For descriptions of supported scenarios and some scenarios that aren't supported, see
SAP on Azure VMs supported scenarios. Check these scenarios and the conditions that
are indicated as not supported as you plan the architecture that you want to deploy to
Azure.
To successfully deploy SAP systems to Azure IaaS or to IaaS in general, it's important to
understand the significant differences between the offerings of traditional private clouds
and IaaS offerings. A traditional host or outsourcer adapts infrastructure (network,
storage, and server type) to the workload that a customer wants to host. In an IaaS
deployment, it's the customer's or partner's responsibility to evaluate their potential
workload and choose the correct Azure components of VMs, storage, and network.
To gather data for planning your deployment to Azure, it's important to:
Details about supported SAP components on Azure, Azure infrastructure units, and
related operating system releases and DBMS releases are explained in SAP software that
is supported for Azure deployments. The knowledge that you gain from evaluating
support and dependencies between SAP releases, operating system releases, and DBMS
releases has a substantial impact on your efforts to move your SAP systems to Azure.
You learn whether significant preparation efforts are involved, for example, whether you
need to upgrade your SAP release or switch to a different operating system.
The first steps to plan a deployment are to work with compliance and security teams in
your organization to determine what the boundary conditions are for deploying which
type of SAP workload or business process in a public cloud. The process can be time-
consuming, but it's critical groundwork to complete.
If your organization has already deployed software in Azure, the process might be easy.
If your company is more at the beginning of the journey, larger discussions might be
necessary to figure out the boundary conditions, security conditions, and enterprise
architecture that allows certain SAP data and SAP business processes to be hosted in a
public cloud.
A naming convention that you use for each Azure resource, such as for VMs and
resource groups.
A subscription and management group design for your SAP workload, such as
whether multiple subscriptions should be created per workload, per deployment
tier, or for each business unit.
Enterprise-wide usage of Azure Policy for subscriptions and management groups.
To help you make the right decisions, many details of enterprise architecture are
described in the Azure Cloud Adoption Framework.
Don't underestimate the initial phase of the project in your planning. Only when you
have agreements and rules in place for compliance, security, and Azure resource
organization should you advance your deployment planning.
The next steps are planning geographical placement and the network architecture that
you deploy in Azure.
For a list of Azure regions, see Azure geographies . For an interactive map, see Azure
global infrastructure .
Not all Azure regions offer the same services. Depending on the SAP product you want
to run, your sizing requirements, and the operating system and DBMS you need, it's
possible that a particular region doesn't offer the VM types that are required for your
scenario. For example, if you're running SAP HANA, you usually need VMs of the various
M-series VM families. These VM families are deployed in only a subset of Azure regions.
As you start to plan and think about which regions to choose as primary region and
eventually secondary region, you need to investigate whether the services that you need
for your scenarios are available in the regions you're considering. You can learn exactly
which VM types, Azure storage types, and other Azure services are available in each
region in Products available by region .
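Whether a region offers the VM types you need can also be checked from the Azure CLI; the region and the size filter below are examples.

```shell
# List M-series SKUs available in a region, including any restrictions
# (the Restrictions column shows zones or subscriptions without access).
az vm list-skus --location westeurope --size Standard_M --output table
```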
Data replication in a region pair is tied to types of Azure storage that you can configure
to replicate into a paired region. For details, see Storage redundancy in a secondary
region.
The storage types that support paired region data replication are storage types that
aren't suitable for SAP components and a DBMS workload. The usability of the Azure
storage replication is limited to Azure Blob Storage (for backup purposes), file shares
and volumes, and other high-latency storage scenarios.
As you check for paired regions and the services that you want to use in your primary or
secondary regions, it's possible that the Azure services or VM types that you intend to
use in your primary region aren't available in the paired region that you want to use as a
secondary region. Or you might determine that an Azure paired region isn't acceptable
for your scenario because of data compliance reasons. For those scenarios, you need to
use a nonpaired region as a secondary or disaster recovery region, and you need to set
up some of the data replication yourself.
Availability zones
Many Azure regions use availability zones to physically separate locations within an
Azure region. Each availability zone is made up of one or more datacenters that are
equipped with independent power, cooling, and networking. An example of using an
availability zone to enhance resiliency is deploying two VMs in two separate availability
zones in Azure. Another example is to implement a high-availability framework for your
SAP DBMS system in one availability zone and deploy SAP (A)SCS in another availability
zone, so you get the best SLA in Azure.
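A cross-zone deployment of the kind described above can be sketched with the Azure CLI as follows. Names, sizes, and the image URN are hypothetical examples; note that a VM deployed with `--zone` can't also join an availability set.

```shell
# DBMS VM in zone 1, (A)SCS VM in zone 2 (hypothetical names and sizes).
az vm create --resource-group rg-sap --name hanadb01 \
  --image "SUSE:sles-sap-15-sp5:gen2:latest" --size Standard_M64s \
  --zone 1 --admin-username azureuser --generate-ssh-keys

az vm create --resource-group rg-sap --name sapascs01 \
  --image "SUSE:sles-sap-15-sp5:gen2:latest" --size Standard_E4ds_v5 \
  --zone 2 --admin-username azureuser --generate-ssh-keys
```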
For more information about VM SLAs in Azure, check the latest version of Virtual
Machines SLAs . Because Azure regions develop and extend rapidly, the topology of
the Azure regions, the number of physical datacenters, the distance between
datacenters, and the distance between Azure availability zones evolves. Network latency
changes as infrastructure changes.
Follow the guidance in SAP workload configurations with Azure availability zones when
you choose a region that has availability zones. Also determine which zonal deployment
model is best suited for your requirements, the region you choose, and your workload.
Fault domains
Fault domains represent a physical unit of failure. A fault domain is closely related to the
physical infrastructure that's contained in datacenters. Although a physical blade or rack
can be considered a fault domain, there isn't a direct one-to-one mapping between a
physical computing element and a fault domain.
When you deploy multiple VMs as part of one SAP system, you can indirectly influence
the Azure fabric controller to deploy your VMs to different fault domains, so that you
can meet requirements for availability SLAs. However, you don't have direct control of
the distribution of fault domains over an Azure scale unit (a collection of hundreds of
compute nodes or storage nodes and networking) or the assignment of VMs to a
specific fault domain. To maneuver the Azure fabric controller to deploy a set of VMs
over different fault domains, you need to assign an Azure availability set to the VMs at
deployment time. For more information, see Availability sets.
Update domains
Update domains represent a logical unit that sets how a VM in an SAP system that
consists of multiple VMs is updated. When a platform update occurs, Azure goes
through the process of updating these update domains one by one. By spreading VMs
at deployment time over different update domains, you can protect your SAP system
from potential downtime. Similar to fault domains, an Azure scale unit is divided into
multiple update domains. To maneuver the Azure fabric controller to deploy a set of
VMs over different update domains, you need to assign an Azure availability set to the
VMs at deployment time. For more information, see Availability sets.
Availability sets
Azure VMs within one Azure availability set are distributed by the Azure fabric controller
over different fault domains. The distribution over different fault domains is to prevent
all VMs of an SAP system from being shut down during infrastructure maintenance or if
a failure occurs in one fault domain. By default, VMs aren't part of an availability set. You
can add a VM in an availability set only at deployment time or when a VM is redeployed.
To learn more about Azure availability sets and how availability sets relate to fault
domains, see Azure availability sets.
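Creating the availability set and assigning VMs to it at deployment time can be sketched as follows; names are hypothetical, and the supported fault domain count varies by region.

```shell
# Create an availability set for the SAP application layer
# (hypothetical names; fault domain count varies by region).
az vm availability-set create --resource-group rg-sap \
  --name avset-sap-app \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

# VMs can join the set only at deployment time.
az vm create --resource-group rg-sap --name sapapp01 \
  --availability-set avset-sap-app \
  --image "SUSE:sles-sap-15-sp5:gen2:latest" --size Standard_E16ds_v5 \
  --admin-username azureuser --generate-ssh-keys
```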
Important
Availability zones and availability sets in Azure are mutually exclusive. You can deploy multiple VMs to a specific availability zone or to an availability set, but you can't assign both an availability zone and an availability set to the same VM.
You can combine availability sets and availability zones if you use proximity placement groups.
As you define availability sets and try to mix various VMs of different VM families within
one availability set, you might encounter problems that prevent you from including a
specific VM type in an availability set. The reason is that the availability set is bound to a
scale unit that contains a specific type of compute host. A specific type of compute host
can run only on certain types of VM families.
For example, you create an availability set, and you deploy the first VM in the availability
set. The first VM that you add to the availability set is in the Edsv5 VM family. When you
try to deploy a second VM, a VM that's in the M family, this deployment fails. The reason
is that Edsv5 family VMs don't run on the same host hardware as the VMs in the M
family.
The same problem can occur if you're resizing VMs. If you try to move a VM out of the
Edsv5 family and into a VM type that's in the M family, the deployment fails. If you
resize to a VM family that can't be hosted on the same host hardware, you must shut
down all the VMs that are in your availability set and resize them all to be able to run on
the other host machine type. For information about SLAs of VMs that are deployed in an
availability set, see Virtual Machines SLAs .
When deploying a high availability SAP workload on Azure, it's important to take into
account the various deployment types available, and how they can be applied across
different Azure regions (such as across zones, in a single zone, or in a region with no
zones). For more information, see High availability deployment options for SAP
workload.
Tip
When default placement doesn't meet network roundtrip requirements within an SAP
system, proximity placement groups can address this need. You can use proximity
placement groups with the location constraints of Azure region, availability zone, and
availability set to increase resiliency. With a proximity placement group, combining both
availability zone and availability set while setting different update and failure domains is
possible. A proximity placement group should contain only a single SAP system.
Although deployment in a proximity placement group can result in the most latency-
optimized placement, deploying by using a proximity placement group also has
drawbacks. Some VM families can't be combined in one proximity placement group, or
you might run into problems if you resize between VM families. The constraints of VM
families, regions, and availability zones might not support colocation. For details, and to
learn about the advantages and potential challenges of using a proximity placement
group, see Proximity placement group scenarios.
VMs that don't use proximity placement groups should be the default deployment
method in most situations for SAP systems. This default is especially true for zonal (a
single availability zone) and cross-zonal (VMs that are distributed between two
availability zones) deployments of an SAP system. Using proximity placement groups
should be limited to SAP systems and Azure regions when required only for
performance reasons.
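Where a proximity placement group is justified, it's created first and then referenced at VM deployment time; the names and the region below are hypothetical examples.

```shell
# One proximity placement group per SAP system (hypothetical names).
az ppg create --resource-group rg-sap --name ppg-sap-prd \
  --location westeurope --type Standard

# Reference the group when deploying the system's VMs.
az vm create --resource-group rg-sap --name hanadb01 \
  --ppg ppg-sap-prd \
  --image "SUSE:sles-sap-15-sp5:gen2:latest" --size Standard_M64s \
  --admin-username azureuser --generate-ssh-keys
```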
Azure networking
Azure has a network infrastructure that maps to all scenarios that you might want to
implement in an SAP deployment. In Azure, you have the following capabilities:
Access to Azure services and access to specific ports in VMs that applications use.
Direct access to VMs via Secure Shell (SSH) or Windows Remote Desktop (RDP) for
management and administration.
Internal communication and name resolution between VMs and by Azure services.
On-premises connectivity between an on-premises network and Azure networks.
Communication between services that are deployed in different Azure regions.
Designing networking usually is the first technical activity that you undertake when you
deploy to Azure. Supporting a central enterprise architecture like SAP frequently is part
of the overall networking requirements. In the planning stage, you should document the
proposed networking architecture in as much detail as possible. If you make a change at
a later point, like changing a subnet network address, you might have to move or delete
deployed resources.
A virtual network acts as a network boundary. Part of the design that's required when
you plan your deployment is to define the virtual network, subnets, and private network
address ranges. You can't change the virtual network assignment for resources like
network interface cards (NICs) for VMs after the VMs are deployed. Making a change to
a virtual network or to a subnet address range might require you to move all deployed
resources to a different subnet.
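When sizing subnets, remember that Azure reserves five IP addresses in every subnet (the network and broadcast addresses plus three Azure-internal addresses). A quick way to sketch usable capacity per prefix length:

```shell
# Usable addresses per subnet prefix: 2^(32-prefix) minus the 5 addresses
# Azure reserves in every subnet. The prefix lengths are examples.
for prefix in 24 26 28; do
  usable=$(( (1 << (32 - prefix)) - 5 ))
  echo "/$prefix subnet: $usable usable addresses"
done
```

For example, a /24 subnet yields 251 usable addresses, which must cover all VM NICs (running and stopped) plus any load balancer frontends placed in that subnet.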
Your network design should address several requirements for SAP deployment:
For examples of network architecture for SAP deployment, see the following articles:
Network virtual appliances in communication paths can easily double the network
latency between two communication partners. They also can restrict throughput in
critical paths between the SAP application layer and the DBMS layer. In some
scenarios, network virtual appliances can cause Pacemaker Linux clusters to fail.
The communication path between the SAP application layer and the DBMS layer
must be a direct path. The restriction doesn't include ASG and NSG rules if the ASG
and NSG rules allow a direct communication path.
Segregating the SAP application layer and the DBMS layer into different Azure
virtual networks isn't supported. We recommend that you segregate the SAP
application layer and the DBMS layer by using subnets within the same Azure
virtual network instead of by using different Azure virtual networks.
If you set up an unsupported scenario that segregates two SAP system layers in
different virtual networks, the two virtual networks must be peered.
Network traffic between two peered Azure virtual networks is subject to transfer
costs. Each day, a huge volume of data that consists of many terabytes is
exchanged between the SAP application layer and the DBMS layer. You can incur
substantial cost if the SAP application layer and the DBMS layer are segregated
between two peered Azure virtual networks.
Resolving host name to IP address through DNS is often a crucial element for SAP
networking. You have many options to configure name and IP resolution in Azure.
Often, an enterprise has a central DNS solution that's part of the overall architecture.
Several options for implementing name resolution in Azure natively, instead of by
setting up your own DNS servers, are described in Name resolution for resources in
Azure virtual networks.
As with DNS services, there might be a requirement for Windows Server Active Directory
to be accessible by the SAP VMs or services.
IP address assignment
An IP address for a NIC remains claimed and used throughout the existence of a VM's
NIC. The rule applies to both dynamic and static IP assignment. It remains true whether
the VM is running or is shut down. Dynamic IP assignment is released if the NIC is
deleted, if the subnet changes, or if the allocation method changes to static.
It's possible to assign fixed IP addresses to VMs within an Azure virtual network. Fixed
IP addresses often are used for SAP systems that depend on external DNS servers
and static entries. The IP address remains assigned, either until the VM and its NIC is
deleted or until the IP address is unassigned. You need to take into account the overall
number of VMs (running and stopped) when you define the range of IP addresses for
the virtual network.
For more information, see Create a VM that has a static private IP address.
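When you size the subnet's address range for all running and stopped VMs, remember that Azure reserves five addresses in every subnet. A quick way to calculate the usable capacity, using Python's standard ipaddress module:

```python
import ipaddress

# Azure reserves 5 addresses in every subnet: the network address,
# the default gateway, two for Azure DNS, and the broadcast address.
AZURE_RESERVED = 5

def usable_ips(cidr: str) -> int:
    """Number of IPs available for VMs (running or stopped) in a subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

print(usable_ips("10.1.10.0/24"))  # 251
print(usable_ips("10.1.10.0/26"))  # 59
```

The subnet CIDR values are examples; substitute your own ranges when you plan the virtual network.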
Note
You should decide between static and dynamic IP address allocation for Azure VMs
and their NICs. The guest operating system of the VM will obtain the IP that's
assigned to the NIC when the VM boots. You shouldn't assign static IP addresses in
the guest operating system to a NIC. Some Azure services like Azure Backup rely on
the fact that at least the primary NIC is set to DHCP and not to static IP addresses
inside the operating system. For more information, see Troubleshoot Azure VM
backup.
Each Azure VM's NIC can have multiple IP addresses assigned to it. A secondary IP can
be used for an SAP virtual host name, which is mapped to a DNS A record or DNS PTR
record. A secondary IP address must be assigned to the Azure NIC's IP configuration. A
secondary IP also must be configured within the operating system statically because
secondary IPs often aren't assigned through DHCP. Each secondary IP must be from the
same subnet that the NIC is bound to. A secondary IP can be added and removed from
an Azure NIC without stopping or deallocating the VM. To add or remove the primary IP
of a NIC, the VM must be deallocated.
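Because a secondary IP must come from the subnet that the NIC is bound to, a simple validation like the following sketch can catch a misconfigured virtual host name IP before assignment. The addresses shown are illustrative:

```python
import ipaddress

def valid_secondary_ip(secondary_ip: str, nic_subnet: str) -> bool:
    """A secondary IP on an Azure NIC must come from the subnet the
    NIC is bound to; this checks that constraint before assignment."""
    return ipaddress.ip_address(secondary_ip) in ipaddress.ip_network(nic_subnet)

# Virtual host name IP from the NIC's own subnet: allowed.
print(valid_secondary_ip("10.1.10.20", "10.1.10.0/24"))  # True
# IP from a different subnet: would be rejected by Azure.
print(valid_secondary_ip("10.1.11.20", "10.1.10.0/24"))  # False
```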
Note
The standard load balancer modifies the default outbound access path because its
architecture is secure by default. VMs that are behind a standard load balancer might no
longer be able to reach the same public endpoints. Some examples are an endpoint for
an operating system update repository or a public endpoint of Azure services. For
options to provide outbound connectivity, see Public endpoint connectivity for VMs by
using the Azure standard load balancer.
Tip
The basic load balancer should not be used with any SAP architecture in Azure. The
basic load balancer is scheduled to be retired.
You can define multiple virtual network interface cards (vNICs) for an Azure VM, with
each vNIC assigned to any subnet in the same virtual network as the primary vNIC. With
the ability to have multiple vNICs, you can start to set up network traffic separation, if
necessary. For example, client traffic is routed through the primary vNIC and some
admin or back-end traffic is routed through a second vNIC. Depending on the operating
system and the image you use, traffic routes for NICs inside the operating system might
need to be set up for correct routing.
The type and size of a VM determine how many vNICs a VM can have assigned. For
information about functionality and restrictions, see Assign multiple IP addresses to VMs
by using the Azure portal.
Warning
If you use multiple vNICs on a VM, we recommend that you use a primary NIC's
subnet to handle user network traffic.
Accelerated networking
To further reduce network latency between Azure VMs, we recommend that you confirm
that Azure accelerated networking is enabled on every VM that runs an SAP workload.
Although accelerated networking is enabled by default for new VMs, per the
deployment checklist, you should verify the state. The benefits of accelerated
networking are greatly improved networking performance and latencies. Use it when
you deploy Azure VMs for SAP workloads on all supported VMs, especially for the SAP
application layer and the SAP DBMS layer. The linked documentation contains support
dependencies on operating system versions and VM instances.
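As a sketch of this verification step, the following filters NIC properties, as returned by a query such as `az network nic list`, for NICs that still have accelerated networking disabled. The NIC names in the sample data are hypothetical; `enableAcceleratedNetworking` is the relevant field in the Azure NIC resource model.

```python
# Flag NICs that still need accelerated networking enabled.
# The input mirrors the shape of `az network nic list` JSON output;
# the sample entries below are illustrative.

def nics_missing_accelerated_networking(nics: list[dict]) -> list[str]:
    """Return names of NICs where accelerated networking is off or unset."""
    return [n["name"] for n in nics
            if not n.get("enableAcceleratedNetworking", False)]

nics = [
    {"name": "sap-app1-nic", "enableAcceleratedNetworking": True},
    {"name": "sap-db1-nic", "enableAcceleratedNetworking": False},
]
print(nics_missing_accelerated_networking(nics))  # ['sap-db1-nic']
```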
On-premises connectivity
SAP deployment in Azure assumes that a central, enterprise-wide network architecture
and communication hub are in place to enable on-premises connectivity. On-premises
network connectivity is essential for users and applications to access the SAP landscape
in Azure, and for the SAP landscape to access central organization services, such as the
central DNS, domain, and security and patch management infrastructure.
You have many options to provide on-premises connectivity for your SAP on Azure
deployment. The networking deployment most often is a hub-spoke network topology,
or an extension of the hub-spoke topology, a global virtual WAN.
To connect your on-premises network to SAP deployments in Azure, we recommend that
you use a private connection over Azure ExpressRoute. For smaller SAP workloads, remote regions, or smaller offices,
VPN on-premises connectivity is available. Using ExpressRoute with a VPN site-to-site
connection as a failover path is a possible combination of both services.
Secure your virtual network by using NSG rules, by using network service tags for known
services, and by establishing routing and IP addressing to your firewall or other network
virtual appliance. All of these tasks or considerations are part of the architecture.
Resources in private networks need to be protected by network Layer 4 and Layer 7
firewalls.
Communication paths with the internet are the focus of a best practices architecture.
Beyond looking only at the selection of supported VM types, you need to check whether
those VM types are available in a specific region based on Products available by
region . At least as important is to determine whether the following capabilities for a
VM fit your scenario:
To get this information for a specific VM family and type, see Sizes for virtual machines
in Azure.
To get detailed information about VM pricing for different Azure services, operating
systems, and regions, see Virtual machines pricing .
To learn about the pricing and flexibility of one-year and three-year savings plans and
reserved instances, see these articles:
For more information about spot pricing, see Azure Spot Virtual Machines .
Pricing for the same VM type might vary between Azure regions. Some customers
benefit from deploying to a less expensive Azure region, so information about pricing
by region can be helpful as you plan.
Azure also offers the option to use a dedicated host. Using a dedicated host gives you
more control of patching cycles for Azure services. You can schedule patching to
support your own schedule and cycles. This offering is specifically for customers whose
workloads don't follow a normal maintenance cycle. For more information, see
Azure dedicated hosts.
Using an Azure dedicated host is supported for an SAP workload. Several SAP customers
who want to have more control over infrastructure patching and maintenance plans use
Azure dedicated hosts. For more information about how Microsoft maintains and
patches the Azure infrastructure that hosts VMs, see Maintenance for virtual machines in
Azure.
Operating system for VMs
When you deploy new VMs for an SAP landscape in Azure, either to install or to migrate
an SAP system, it's important to choose the correct operating system for your workload.
Azure offers a large selection of operating system images for Linux and Windows and
many suitable options for SAP systems. You also can create or upload custom images
from your on-premises environment, or you can consume custom images from shared
image galleries.
For details and information about the options that are available:
Find Azure Marketplace images by using the Azure CLI or Azure PowerShell.
Create custom images for Linux or Windows.
Use VM Image Builder.
Plan for an operating system update infrastructure and its dependencies for your SAP
workload, if needed. Consider using a repository staging environment to keep all tiers of
an SAP landscape (sandbox, development, preproduction, and production) in sync by
using the same versions of patches and updates during your update time period.
When you deploy a VM, the operating system image that you choose determines
whether the VM will be a generation 1 or a generation 2 VM. The latest versions of all
operating system images for SAP that are available in Azure (Red Hat Enterprise Linux,
SuSE Enterprise Linux, and Windows or Oracle Enterprise Linux) are available in both
generation 1 and generation 2. It's important to carefully select an image based on the
image description to deploy the correct generation of VM. Similarly, you can create
custom operating system images as generation 1 or generation 2, and they affect the
VM's generation when the VM is deployed.
Note
We recommend that you use generation 2 VMs in all your SAP deployments in
Azure, regardless of VM size. All the latest Azure VMs for SAP are generation 2-
capable or are limited to only generation 2. Some VM families currently support
only generation 2 VMs. Some VM families that will be available soon might support
only generation 2.
Some VM families, like the Mv2-series, support only generation 2. The same
requirement might be true for new VM families in the future. In that scenario, an existing
generation 1 VM can't be resized to work with the new VM family. In addition to the
Azure platform's generation 2 requirements, your SAP components might have
requirements that are related to a VM's generation. To learn about any generation 2
requirements for the VM family you choose, see SAP Note 1928533 .
Each VM type has different quotas for disk and network throughput, the number of disks
that can be attached, memory size, the number of vCPUs, and whether it has local
temporary storage with its own throughput and IOPS limits.
Note
When you make decisions about VM size for an SAP solution on Azure, you must
consider the performance limits for each VM size. The quotas that are described in
the documentation represent the theoretical maximum attainable values. The
performance limit of IOPS per disk might be achieved with small input/output (I/O)
values (for example, 8 KB), but it might not be achieved with large I/O values (for
example, 1 MB).
As with VMs, performance limits exist for each storage type for an SAP workload and
for all other Azure services.
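The interaction between the IOPS quota and the throughput quota can be illustrated with a small calculation: the effective IOPS is whichever cap is reached first. The limits used below are illustrative, not those of a specific disk or VM SKU.

```python
# A disk is capped by both an IOPS limit and a throughput (MB/s) limit.
# With large I/O sizes, the throughput cap is hit long before the IOPS
# quota, which is why the documented IOPS maximum may be unreachable.
# The limits below are illustrative, not a specific SKU's values.

def effective_iops(iops_limit: int, mbps_limit: int, io_size_kb: int) -> int:
    """Effective IOPS given both quotas and the I/O size in KB."""
    iops_by_throughput = mbps_limit * 1024 // io_size_kb
    return min(iops_limit, iops_by_throughput)

# 8 KB I/Os: the IOPS quota is the bottleneck.
print(effective_iops(5000, 200, 8))      # 5000
# 1 MB I/Os: throughput caps you at far fewer I/Os per second.
print(effective_iops(5000, 200, 1024))   # 200
```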
When you plan for and choose VMs to use in your SAP deployment, consider these
factors:
Start with the memory and CPU requirements. Separate out the SAPS requirements
for CPU power into the DBMS part and the SAP application parts. For existing
systems, the SAPS related to the hardware that you use often can be determined
or estimated based on existing SAP Standard Application Benchmarks . For newly
deployed SAP systems, complete a sizing exercise to determine the SAPS
requirements for the system.
For existing systems, the I/O throughput and IOPS on the DBMS server should be
measured. For new systems, the sizing exercise for the new system also should give
you a general idea of the I/O requirements on the DBMS side. If you're unsure, you
eventually need to conduct a proof of concept.
Compare the SAPS requirement for the DBMS server with the SAPS that the
different VM types of Azure can provide. The information about the SAPS of the
different Azure VM types is documented in SAP Note 1928533 . The focus should
be on the DBMS VM first because the database layer is the layer in an SAP
NetWeaver system that doesn't scale out in most deployments. In contrast, the
SAP application layer can be scaled out. Individual DBMS guides describe the
recommended storage configurations.
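The comparison step above can be sketched as follows. The SAPS figures in this example are placeholders for illustration only; the authoritative SAPS values per Azure VM type are published in SAP Note 1928533.

```python
# Match the SAPS requirement for the DBMS server against candidate
# VM types. The SAPS numbers below are hypothetical placeholders;
# use the values from SAP Note 1928533 for real sizing.

VM_SAPS = {
    "E32ds_v5": 40000,
    "E64ds_v5": 80000,
    "M128s": 135000,
}

def smallest_fitting_vm(required_saps: int):
    """Return the smallest VM type that meets the SAPS requirement,
    or None if no candidate fits."""
    fitting = [(saps, vm) for vm, saps in VM_SAPS.items()
               if saps >= required_saps]
    return min(fitting)[1] if fitting else None

print(smallest_fitting_vm(60000))   # E64ds_v5
print(smallest_fitting_vm(200000))  # None: scale up the candidate list
```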
Note
The HANA Large Instances service is in sunset mode and doesn't accept new
customers. Providing units for existing HANA Large Instances customers is still
possible.
You can choose from multiple storage options for SAP workloads and for specific SAP
components. For more information, see Azure storage for SAP workloads. The article
covers the storage architecture for every part of SAP: operating system, application
binaries, configuration files, database data, log and trace files, and file interfaces with
other applications, whether stored on disk or accessed on file shares.
Some VMs don't offer a temporary drive. If you plan to use these VM sizes for SAP, you
might need to increase the size of the operating system disk. For more information, see
SAP Note 1928533 . For VMs that have a temporary disk, get information about the
temporary disk size and the IOPS and throughput limits for each VM series in Sizes for
virtual machines in Azure.
You can't directly resize between a VM series that has temporary disks and a VM series
that doesn't have temporary disks. Currently, a resize between two such VM families
fails. A resolution is to re-create the VM that doesn't have a temporary disk in the new
size by using an operating system disk snapshot. Keep all other data disks and the
network interface. Learn how to resize a VM size that has a local temporary disk to a VM
size that doesn't.
In these scenarios, we recommend that you use an Azure service, such as Azure Files or
Azure NetApp Files. If these services aren't available in the regions you choose, or if they
aren't available for your solution architecture, alternatives are to provide NFS or SMB file
shares from self-managed, VM-based applications or from third-party services. See SAP
Note 2015553 about limitations to SAP support if you use third-party services for
storage layers in an SAP system in Azure.
Due to the often critical nature of network shares, and because they often are a single
point of failure in a design (for high availability) or a process (for the file interface), we
recommend that you rely on each native Azure service's built-in availability, SLA, and
resiliency. In the planning phase, it's important to consider these factors:
NFS or SMB share design, including which shares to use per SAP system ID (SID),
per landscape, and per region.
Subnet sizing, including the IP requirement for private endpoints or dedicated
subnets for services like Azure NetApp Files.
Network routing to SAP systems and connected applications.
Use of a public or private endpoint for Azure Files.
For information about requirements and how to use an NFS or SMB share in a high-
availability scenario, see High availability.
Note
If you use Azure Files for your network shares, we recommend that you use a
private endpoint. In the unlikely event of a zonal failure, your NFS client
automatically redirects to a healthy zone. You don't have to remount the NFS or
SMB shares on your VMs.
Network segmentation and the security of each subnet and network interface.
Encryption on each layer within the SAP landscape.
Identity solution for end-user and administrative access and single sign-on
services.
Threat and operation monitoring.
The topics in this chapter aren't an exhaustive list of all available services, options, and
alternatives, but they do cover several best practices that you should consider for all
SAP deployments in Azure. There are other aspects to cover depending on your
enterprise or workload requirements. For more information about security design, see
the following resources for general Azure guidance:
The rules that you define for an NSG don't restrict network performance. To monitor
traffic flow, you can optionally activate NSG flow logging, with logs evaluated by a
security information and event management (SIEM) or intrusion detection system (IDS)
of your choice to monitor and act on suspicious network activity.
Tip
Activate NSGs only on the subnet level. Although NSGs can be activated on both
the subnet level and the NIC level, activation on both is very often a hindrance in
troubleshooting situations when analyzing network traffic restrictions. Use NSGs on
the NIC level only in exceptional situations and when required.
Private endpoints for services
Many Azure PaaS services are accessed by default through a public endpoint. Although
the communication endpoint is located on the Azure back-end network, the endpoint is
exposed to the public internet. A private endpoint is a network interface inside your
own private virtual network. Through Azure Private Link, the private endpoint projects
the service into your virtual network. Selected PaaS services are then privately accessed
through the IP inside your network. Depending on the configuration, the service can
potentially be set to communicate through private endpoint only.
Using a private endpoint increases protection against data leakage, and it often
simplifies access from on-premises and peered networks. In many situations, the
network routing and process to open firewall ports, which often are needed for public
endpoints, is simplified. The resources are inside your network already because they're
accessed by a private endpoint.
To learn which Azure services offer the option to use a private endpoint, see Private Link
available services. For NFS or SMB with Azure Files, we recommend that you always use
private endpoints for SAP workloads. To learn about charges that are incurred by using
the service, see Private endpoint pricing . Some Azure services might optionally
include the cost with the service. This information is included in a service's pricing
information.
Encryption
Depending on your corporate policies, encryption beyond the default options in Azure
might be required for your SAP workloads.
Note
For SAP deployments on Linux systems, don't use Azure Disk Encryption. Azure Disk
Encryption entails encryption running inside the SAP VMs by using customer-managed
keys (CMKs) from Azure Key Vault. For Linux, Azure Disk Encryption doesn't support the operating system images
that are used for SAP workloads. Azure Disk Encryption can be used on Windows
systems with SAP workloads, but don't combine Azure Disk Encryption with database
native encryption. We recommend that you use database native encryption instead of
Azure Disk Encryption. For more information, see the next section.
Similar to managed disk encryption, Azure Files encryption at rest (SMB and NFS) is
available with platform-managed keys (PMKs) or customer-managed keys (CMKs).
For SMB network shares, carefully review Azure Files and operating system
dependencies with SMB versions because the configuration affects support for in-transit
encryption.
Important
The importance of a careful plan to store and protect the encryption keys if you use
customer-managed encryption can't be overstated. Without encryption keys,
encrypted resources like disks are inaccessible and can lead to data loss. Carefully
consider protecting the keys and access to the keys to only privileged users or
services.
DBMS encryption
Transport encryption
For DBMS encryption, each database that's supported for an SAP NetWeaver or an SAP
S/4HANA deployment supports native encryption. Transparent database encryption is
entirely independent of any infrastructure encryption that's in place in Azure. You can
use Azure server-side encryption (SSE) and database encryption at the same time. When you use encryption, the
location, storage, and safekeeping of encryption keys is critically important. Any loss of
encryption keys leads to data loss because you won't be able to start or recover your
database.
Some databases might not have a database encryption method or might not require a
dedicated setting to enable. For other databases, DBMS backups might be encrypted
implicitly when database encryption is activated. See the following SAP documentation
to learn how to enable and use transparent database encryption:
Contact SAP or your DBMS vendor for support on how to enable, use, or troubleshoot
software encryption.
Important
It can't be overstated how important it is to have a careful plan to store and protect
your encryption keys. Without encryption keys, the database or SAP software might
be inaccessible and you might lose data. Carefully consider how to protect the keys.
Allow access to the keys only by privileged users or services.
High availability
SAP high availability in Azure has two components:
SAP application high availability: How it can be combined with Azure infrastructure
high availability by using service healing. Examples of high availability in SAP
software components:
An SAP (A)SCS and SAP ERS instance
The database server
For more information about high availability for SAP in Azure, see the following articles:
Pacemaker on Linux and Windows Server Failover Clustering are the only high-
availability frameworks for SAP workloads that are directly supported by Microsoft on
Azure. Any other high-availability framework isn't supported by Microsoft and needs
design, implementation details, and operations support from the vendor. For more
information, see Supported scenarios for SAP in Azure.
Disaster recovery
Often, SAP applications are among the most business-critical processes in an enterprise.
Based on their importance and the time required to be operational again after an
unforeseen interruption, business continuity and disaster recovery (BCDR) scenarios
should be carefully planned.
To learn how to address this requirement, see Disaster recovery overview and
infrastructure guidelines for SAP workload.
Backup
As part of your BCDR strategy, backup for your SAP workload must be an integral part
of any planned deployment. The backup solution must cover all layers of an SAP
solution stack: VM, operating system, SAP application layer, DBMS layer, and any shared
storage solution. Backup for Azure services that are used by your SAP workload, and for
other crucial resources like encryption and access keys also must be part of your backup
and BCDR design.
Alternatively, if you deploy Azure NetApp Files, backup options are available on the
volume level, including SAP HANA and Oracle DBMS integration with a scheduled
backup.
Use the recommendations in Azure best practices to protect and validate against
ransomware attacks.
Tip
Ensure that your backup strategy includes protecting your deployment automation,
encryption keys for Azure resources, and transparent database encryption if used.
Cross-region backup
For any cross-region backup requirement, determine the Recovery Time Objective (RTO)
and Recovery Point Objective (RPO) that are offered by the solution and whether they
match your BCDR design and needs.
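This check can be expressed as a simple comparison; the minute values below are illustrative:

```python
# A backup or replication solution fits the BCDR requirement only if
# it restores fast enough (RTO) and loses no more data than allowed
# (RPO). All values are in minutes and are illustrative.

def meets_bcdr(offered_rto_min: int, offered_rpo_min: int,
               required_rto_min: int, required_rpo_min: int) -> bool:
    return (offered_rto_min <= required_rto_min
            and offered_rpo_min <= required_rpo_min)

# Solution restores in 4 h with 15 min data loss; requirement is 8 h / 30 min.
print(meets_bcdr(240, 15, 480, 30))  # True
```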
Use Azure services for SAP migration. Some VM-based workloads are migrated
without change to Azure by using services like Azure Migrate or Azure Site
Recovery, or a third-party tool. Diligently confirm that the operating system version
and the SAP workload it runs are supported by the service.
Often, any database workload is intentionally not supported because a service
can't guarantee database consistency. Even if the DBMS type is supported by the
migration service, the database change (churn) rate often is too high. Most busy
SAP systems won't meet the change rate that migration tools allow. Issues might
not be seen or discovered until production migration. In many situations, some
Azure services aren't suitable for migrating SAP systems. Azure Site Recovery and
Azure Migrate don't have validation for a large-scale SAP migration. A proven SAP
migration methodology is to rely on DBMS replication or SAP migration tools.
Optimize network and data copy. Migrating an SAP system to Azure always
involves moving a large amount of data. The data might be database and file
backups or replication, an application-to-application data transfer, or an SAP
migration export. Depending on the migration process you use, you need to
choose the correct network path to move the data. For many data move
operations, using the internet instead of a private network is the quickest path to
copy data securely to Azure storage.
Tip
In the planning stage, it's important to consider any dedicated migration networks
that you'll use for large data transfers to Azure. Examples include backups or
database replication or using a public endpoint for data transfers to Azure storage.
The impact of the migration on network paths for your users and applications
should be expected and mitigated. As part of your network planning, consider all
phases of the migration and the cost of a partially productive workload in Azure
during migration.
Azure Virtual Machines is the solution for organizations that need compute and storage
resources, in minimal time, and without lengthy procurement cycles. You can use Azure
Virtual Machines to deploy classical applications, like SAP NetWeaver-based
applications, in Azure. Extend an application's reliability and availability without
additional on-premises resources. Azure Virtual Machines supports cross-premises
connectivity, so you can integrate Azure Virtual Machines into your organization's on-
premises domains, private clouds, and SAP system landscape.
In this article, we cover the steps to deploy SAP applications on virtual machines (VMs)
in Azure, including alternate deployment options and troubleshooting. This article builds
on the information in Azure Virtual Machines planning and implementation for SAP
NetWeaver. It also complements SAP installation documentation and SAP Notes, which
are the primary resources for installing and deploying SAP software.
Prerequisites
Setting up an Azure virtual machine for SAP software deployment involves multiple
steps and resources. Before you start, make sure that you meet the prerequisites for
installing SAP software on virtual machines in Azure.
Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal.
For both tools, you need a local computer running Windows 7 or a later version of
Windows. If you want to manage only Linux VMs and you want to use a Linux computer
for this task, you can use Azure CLI.
Internet connection
To download and run the tools and scripts that are required for SAP software
deployment, you must be connected to the Internet. The Azure VM that is running the
Azure Extension for SAP also needs access to the Internet. If the Azure VM is part of an
Azure virtual network or on-premises domain, make sure that the relevant proxy settings
are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.
Create and configure Azure storage accounts (if required) or Azure virtual networks
before you begin the SAP software deployment process. For information about how to
create and configure these resources, see Azure Virtual Machines planning and
implementation for SAP NetWeaver.
SAP sizing
Know the following information, for SAP sizing:
Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the
SAP Application Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth of eventual communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed
SAP system
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application
resources in your Azure subscription. For more information, see Azure Resource
Manager overview.
Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP
resources:
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 1409604 has the required SAP Host Agent version for Windows in
Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.
SAP Note 2069760 has general information about Oracle Linux 7.x.
SAP Note 1999351 has additional troubleshooting information for the Azure
Extension for SAP.
SAP Note 1597355 has general information about swap-space for Linux.
SAP on Azure SCN page has news and a collection of useful resources.
SAP Community WIKI has all required SAP Notes for Linux.
Windows resources
These Microsoft articles cover SAP deployments in Azure:
The following flowchart shows the SAP-specific sequence of steps for deploying a VM
from the Azure Marketplace:
1. Navigate to Create a resource in the Azure portal . Or, in the Azure portal menu,
select + New.
2. Select Compute, and then select the type of operating system you want to deploy.
For example, Windows Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12),
Red Hat Enterprise Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2. The default list view
does not show all supported operating systems. Select see all for a full list. For
more information about supported operating systems for SAP software
deployment, see SAP Note 1928533 .
3. On the next page, review terms and conditions.
4. In the Select a deployment model box, select Resource Manager.
5. Select Create.
The wizard guides you through setting the required parameters to create the virtual
machine, in addition to all required resources, like network interfaces and storage
accounts. Some of these parameters are:
1. Basics:
2. Size:
For a list of supported VM types, see SAP Note 1928533 . Be sure you select the
correct VM type if you want to use Azure Premium Storage. Not all VM types
support Premium Storage. For more information, see Azure storage for SAP
workloads.
3. Settings:
Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For
more information about Managed Disks, see chapter Managed Disks in the
planning guide.
Storage account: Select an existing storage account or create a new one.
Not all storage types work for running SAP applications. For more
information about storage types, see Storage structure of a VM for RDBMS
Deployments.
Network
Virtual network and Subnet: To integrate the virtual machine with your
intranet, select the virtual network that is connected to your on-premises
network.
Public IP address: Select the public IP address that you want to use, or
enter parameters to create a new public IP address. You can use a public IP
address to access your virtual machine over the Internet. Make sure that
you also create a network security group to help secure access to your
virtual machine.
Network security group: For more information, see Control network traffic
flow with network security groups.
Extensions: You can install virtual machine extensions by adding them to the
deployment. You do not need to add extensions in this step. The extensions
required for SAP support are installed later. See chapter Configure the Azure
Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a
new availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.
4. Summary:
To create a two-tier system by using only one virtual machine, use this template.
To create a two-tier system by using only one virtual machine and Managed Disks,
use this template.
To create a three-tier system by using multiple virtual machines, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
2. Settings:
OS type: The operating system you want to deploy, for example, Windows
Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise
Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2.
The list view does not show all supported operating systems. For more
information about supported operating systems for SAP software
deployment, see SAP Note 1928533 .
For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see these resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine
workloads
Introduction to Microsoft Azure Storage
Subnet ID: If you want to deploy the VM into an existing VNet where you
have defined a subnet that the VM should be assigned to, enter the ID of that
specific subnet. The ID usually looks like this: /subscriptions/<subscription
id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
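As a sketch, the subnet ID can be assembled from its parts or looked up with the Azure CLI; the subscription, resource group, network, and subnet names below are placeholders.

```shell
# Placeholder names -- substitute your own values.
subscription_id="00000000-0000-0000-0000-000000000000"
resource_group="SAP-RG"
vnet_name="SAP-VNet"
subnet_name="sap-subnet"

# The subnet ID always follows this fixed pattern:
subnet_id="/subscriptions/${subscription_id}/resourceGroups/${resource_group}/providers/Microsoft.Network/virtualNetworks/${vnet_name}/subnets/${subnet_name}"
echo "${subnet_id}"

# Instead of assembling it by hand, the Azure CLI can look it up:
#   az network vnet subnet show --resource-group "$resource_group" \
#     --vnet-name "$vnet_name" --name "$subnet_name" --query id --output tsv
```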
4. Select Purchase.
The Azure VM Agent is deployed by default when you use an image from the Azure
Marketplace.
Configure VM Extension
To be sure SAP supports your environment, set up the Azure Extension for SAP as
described in Configure the Azure Extension for SAP.
Post-deployment steps
After you create the VM and the VM is deployed, you need to install the required
software components in the VM. Because of the deployment/software installation
sequence in this type of VM deployment, the software to be installed must already be
available, either in Azure, on another VM, or as a disk that can be attached. Or, consider
using a cross-premises scenario, in which connectivity to the on-premises assets
(installation shares) is given.
After you deploy your VM in Azure, follow the same guidelines and tools to install the
SAP software on your VM as you would in an on-premises environment. To install SAP
software on an Azure VM, both SAP and Microsoft recommend that you upload and
store the SAP installation media on Azure VHDs or Managed Disks, or that you create an
Azure VM that works as a file server that has all the required SAP installation media.
Windows
To prepare a Windows image that you can use to deploy multiple virtual machines,
the Windows settings (like Windows SID and hostname) must be abstracted or
generalized on the on-premises VM. You can use sysprep to do this.
Linux
To prepare a Linux image that you can use to deploy multiple virtual machines,
some Linux settings must be abstracted or generalized on the on-premises VM. You
can use waagent -deprovision to do this. For more information, see Capture a Linux
virtual machine running on Azure and the Azure Linux agent user guide.
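As a sketch, the generalization commands mentioned above look like the following. Run them on the source VM only; deprovisioning removes host-specific state and is destructive for that instance.

```shell
# Sketch of the generalization commands referenced above. Run them on the
# source VM only -- they remove host-specific state (SSH host keys, hostname,
# Windows SID) and are destructive for that instance.
linux_cmd="sudo waagent -deprovision+user -force"
windows_cmd='%windir%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown'
echo "$linux_cmd"
echo "$windows_cmd"
```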
You can prepare and create a custom image, and then use it to create multiple new VMs.
This is described in Azure Virtual Machines planning and implementation for SAP
NetWeaver. Set up your database content either by using SAP Software Provisioning
Manager to install a new SAP system (restores a database backup from a disk that's
attached to the virtual machine) or by directly restoring a database backup from Azure
storage, if your DBMS supports it. For more information, see Azure Virtual Machines
DBMS deployment for SAP NetWeaver. If you have already installed an SAP system on
your on-premises VM (especially for two-tier systems), you can adapt the SAP system
settings after the deployment of the Azure VM by using the System Rename procedure
supported by SAP Software Provisioning Manager (SAP Note 1619720 ). Otherwise,
you can install the SAP software after you deploy the Azure VM.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM
from a custom image:
The easiest way to create a new virtual machine from a Managed Disk image is by using
the Azure portal. For more information on how to create a Managed Disk image, read
Capture a managed image of a generalized VM in Azure.
1. Navigate to Images in the Azure portal . Or, in the Azure portal menu, select
Images.
2. Select the Managed Disk image you want to deploy and select Create VM.
The wizard guides you through setting the required parameters to create the virtual
machine, in addition to all required resources, like network interfaces and storage
accounts. Some of these parameters are:
1. Basics:
2. Size:
For a list of supported VM types, see SAP Note 1928533 . Be sure you select the
correct VM type if you want to use Azure Premium Storage. Not all VM types
support Premium Storage. For more information, see Azure storage for SAP
workloads.
3. Settings:
Storage
Disk Type: Select the disk type of the OS disk. If you want to use Premium
Storage for your data disks, we recommend using Premium Storage for the
OS disk as well.
Use managed disks: If you want to use Managed Disks, select Yes. For
more information about Managed Disks, see chapter Managed Disks in the
planning guide.
Network
Virtual network and Subnet: To integrate the virtual machine with your
intranet, select the virtual network that is connected to your on-premises
network.
Public IP address: Select the public IP address that you want to use, or
enter parameters to create a new public IP address. You can use a public IP
address to access your virtual machine over the Internet. Make sure that
you also create a network security group to help secure access to your
virtual machine.
Network security group: For more information, see Control network traffic
flow with network security groups.
Extensions: You can install virtual machine extensions by adding them to the
deployment. You do not need to add extensions in this step. The extensions
required for SAP support are installed later. See chapter Configure the Azure
Extension for SAP in this guide.
High Availability: Select an availability set, or enter the parameters to create a
new availability set. For more information, see Azure availability sets.
Monitoring
Boot diagnostics: You can select Disable for boot diagnostics.
Guest OS diagnostics: You can select Disable for monitoring diagnostics.
4. Summary:
To create a deployment by using a private OS image from the Azure portal, use one of
the following SAP templates. These templates are published in the azure-quickstart-
templates GitHub repository . You also can manually create a virtual machine, by using
PowerShell.
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk
Image (sap-2-tier-user-image-md)
To create a two-tier system by using only one virtual machine and a Managed Disk
image, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
2. Settings:
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The number of SAPS the new system provides. If you are not sure how many
SAPS the system requires, ask your SAP Technology Partner or System
Integrator.
For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure virtual machine
workloads
Introduction to Microsoft Azure Storage
User image VHD URI (unmanaged disk image template only): The URI of the
private OS image VHD, for example,
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
User image storage account (unmanaged disk image template only): The
name of the storage account where the private OS image is stored, for
example, <accountname> in
https://<accountname>.blob.core.windows.net/vhds/userimage.vhd.
Subnet ID: If you want to deploy the VM into an existing VNet where you
have a subnet defined the VM should be assigned to, name the ID of that
specific subnet. The ID usually looks like this: /subscriptions/<subscription
id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network
name>/subnets/<subnet name>
4. Select Purchase.
Depending on how your on-premises network is configured, you might need to set up
a proxy on your VM. If your VM is connected to your on-premises network via VPN or
ExpressRoute, the VM might not be able to access the Internet, and won't be able to
download the required VM extensions or collect Azure infrastructure information for the
SAP Host Agent via the SAP extension for Azure. In that case, see Configure the proxy.
For more information about the Azure VM Agent, see the following resources.
Windows
Linux
The following flowchart shows the sequence of steps for moving an on-premises VM by
using a non-generalized Azure VHD:
If the disk is already uploaded and defined in Azure (see Azure Virtual Machines
planning and implementation for SAP NetWeaver), do the tasks described in the next
few sections.
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk (sap-
2-tier-user-disk-md)
To create a two-tier system by using only one virtual machine and a Managed Disk,
use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
2. Settings:
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The number of SAPS the new system provides. If you are not sure how many
SAPS the system requires, ask your SAP Technology Partner or System
Integrator.
For larger systems, we highly recommend using Azure Premium Storage. For
more information about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Storage structure of a VM for RDBMS Deployments
Premium Storage: High-performance storage for Azure Virtual Machine
workloads
Introduction to Microsoft Azure Storage
OS disk VHD URI (unmanaged disk template only): The URI of the private OS
disk, for example,
https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.
4. Select Purchase.
To use the templates described in the preceding section, the VM Agent must be
installed on the OS disk, or the deployment will fail. Download and install the VM Agent
in the VM, as described in Download, install, and enable the Azure VM Agent.
If you don't use the templates described in the preceding section, you can also install
the VM Agent afterwards.
Depending on how your on-premises network is configured, you might need to set up
a proxy on your VM. If your VM is connected to your on-premises network via VPN or
ExpressRoute, the VM might not be able to access the Internet, and won't be able to
download the required VM extensions or collect Azure infrastructure information for the
SAP Host Agent via the SAP extension for Azure. In that case, see Configure the proxy.
In this scenario, you also need to make sure that if Internet proxy settings are forced
when a VM joins a domain in your environment, the Windows Local System Account (S-
1-5-18) in the Guest VM has the same proxy settings. The easiest option is to force the
proxy by using a domain Group Policy, which applies to systems in the domain.
If you deploy a VM from the Azure Marketplace, this step is not required. Images from
the Azure Marketplace already have the Azure VM Agent.
Windows
Linux
Use the following commands to install the VM Agent for Linux:
Console
Console
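The install commands can be sketched as follows; the package names are assumptions based on the commonly used ones and may differ per distribution release.

```shell
# Hedged sketch: install the Azure Linux Agent from the distribution
# repositories. The package names below are assumptions that may differ
# per release; check your distribution's documentation.
[ -r /etc/os-release ] && . /etc/os-release
case "${ID:-unknown}" in
  rhel|ol)       install_cmd="sudo yum install WALinuxAgent" ;;
  sles)          install_cmd="sudo zypper install python-azure-agent" ;;
  ubuntu|debian) install_cmd="sudo apt-get install walinuxagent" ;;
  *)             install_cmd="# check your distribution's repository for the Azure Linux Agent" ;;
esac
echo "$install_cmd"
```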
If the agent is already installed, to update the Azure Linux Agent, do the steps described
in Update the Azure Linux Agent on a VM to the latest version from GitHub.
Windows
Proxy settings must be set up correctly for the Local System account to access the
Internet. If your proxy settings are not set by Group Policy, you can configure the
settings for the Local System account.
Linux
Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent,
which is located at /etc/waagent.conf.
Console
HttpProxy.Host=<proxy host>
Console
Console
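A minimal sketch of the proxy configuration, run against a local copy of the file so it can be tried anywhere; proxy.contoso.com:3128 is a placeholder for your own proxy host and port.

```shell
# Minimal sketch that edits a local copy of waagent.conf; on the VM itself
# you would edit /etc/waagent.conf directly and restart the agent afterwards.
# proxy.contoso.com:3128 is a placeholder proxy.
conf="$(mktemp)"
printf '%s\n' 'HttpProxy.Host=None' 'HttpProxy.Port=None' > "$conf"

sed -i 's|^HttpProxy.Host=.*|HttpProxy.Host=proxy.contoso.com|' "$conf"
sed -i 's|^HttpProxy.Port=.*|HttpProxy.Port=3128|' "$conf"
cat "$conf"
# On the VM, finish with: sudo systemctl restart waagent
```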
If you want to use the Azure repositories, make sure that the traffic to these repositories
is not going through your on-premises intranet. If you created user-defined routes to
enable forced tunneling, make sure that you add a route that routes traffic to the
repositories directly to the Internet, and not through your site-to-site VPN connection.
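Such a user-defined route can be sketched with the Azure CLI; the route table name, resource group, and address prefix below are placeholders.

```shell
# Sketch, assuming hypothetical names: add a user-defined route that sends
# repository traffic directly to the Internet instead of through the
# forced-tunneling default route.
resource_group="SAP-RG"
route_table="sap-routes"
repo_prefix="203.0.113.0/24"   # placeholder; use the repository's real IP range

route_cmd="az network route-table route create --resource-group ${resource_group} --route-table-name ${route_table} --name repos-direct --address-prefix ${repo_prefix} --next-hop-type Internet"
echo "$route_cmd"
```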
The VM Extension for SAP also needs to be able to access the internet. Make sure to
install the new VM Extension for SAP and follow the steps in Configure the Azure VM
extension for SAP solutions with Azure CLI in the VM Extension for SAP installation
guide to configure the proxy.
SLES
You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg.
The following figure shows an example:
RHEL
You also need to add routes for the IP addresses of the hosts listed in
/etc/yum.repos.d/rhui-load-balancers. For an example, see the preceding figure.
Oracle Linux
There are no repositories for Oracle Linux on Azure. You need to configure your
own repositories for Oracle Linux or use the public repositories.
For more information about user-defined routes, see User-defined routes and IP
forwarding.
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on
Azure, the Azure VM Agent is installed on the virtual machine. The next step is to deploy
the Azure Extension for SAP, which is available in the Azure Extension Repository in the
global Azure datacenters. For more information, see Configure the Azure Extension for
SAP.
Next steps
Learn about RHEL for SAP in-place upgrade
SAP Business One on Azure Virtual
Machines
Article • 02/10/2023
This document provides guidance to deploy SAP Business One on Azure Virtual
Machines. It is not a replacement for the SAP Business One installation
documentation. It covers basic planning and deployment guidelines for the Azure
infrastructure to run Business One applications on.
SQL Server - see SAP Note #928839 - Release Planning for Microsoft SQL Server
SAP HANA - for exact SAP Business One support matrix for SAP HANA, checkout
the SAP Product Availability Matrix
For SQL Server, the basic deployment considerations documented in Azure Virtual
Machines DBMS deployment for SAP NetWeaver apply. For SAP HANA, the
considerations are mentioned in this document.
Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
Even if you are interested in SAP Business One only, the document Azure Virtual
Machines planning and implementation for SAP NetWeaver can be a good source of
information.
The assumption is that you, as the party deploying SAP Business One, are familiar with these SAP Notes:
528296 - General Overview Note for SAP Business One Releases and Related
Products
2216195 - Release Updates Note for SAP Business One 9.2, version for SAP
HANA
2483583 - Central Note for SAP Business One 9.3
2483615 - Release Updates Note for SAP Business One 9.3
2483595 - Collective Note for SAP Business One 9.3 General Issues
2027458 - Collective Consulting Note for SAP HANA-Related Topics of SAP
Business One, version for SAP HANA
A better overview of which components run in the client part and which run in the
server part is documented in the SAP Business One Administrator's Guide.
Because there is heavy, latency-critical interaction between the client tier and the
DBMS tier, both tiers need to be located in Azure when deploying in Azure. Usually, the
users then use RDS to connect to one or multiple VMs running an RDS service for the
Business One client components.
For hosting the Business One client components and the DBMS host, only Azure VMs
that are supported for SAP NetWeaver are allowed. To find the list of SAP NetWeaver-
supported Azure VMs, read SAP Note #1928533 .
When running SAP HANA as the DBMS back end for Business One, only VMs that are
listed for Business One on HANA in the HANA certified IaaS platform list are supported
for HANA. The Business One client components are not affected by this stricter
restriction for SAP HANA as the DBMS system.
The simplified configuration presented introduces several security instances that allow
you to control and limit routing. It starts with:
The router/firewall on the customer on-premises side.
The next instance is the Azure network security group that you can use to
introduce routing and security rules for the Azure VNet that you run your SAP
Business One configuration in.
To prevent users of the Business One client from also seeing the server that
runs the Business One server components and the database, place the VM hosting
the Business One client and the VM hosting the Business One server in two
different subnets within the VNet.
You would then use Azure NSGs assigned to the two different subnets to limit
access to the Business One server.
For cases where the users are connecting through the internet without any private
connectivity into Azure, the design of the network in Azure should be aligned with the
principles documented in the Azure reference architecture for DMZ between Azure and
the Internet.
Though emphasized in the specific and generic database documents already, you
should make yourself familiar with:
Manage the availability of Windows virtual machines in Azure and Manage the
availability of Linux virtual machines in Azure
SLA for Virtual Machines
These documents should help you to decide on the selection of storage types and high
availability configuration.
Use Premium SSDs over Standard HDDs. To learn more about the available disk
types, see our article Select a disk type
Use Azure Managed disks over unmanaged disks
Make sure that you have sufficient IOPS and I/O throughput configured with your
disk configuration
Combine the /hana/data and /hana/log volumes for a cost-efficient storage
configuration
Rough sizing estimates for the DBMS side for SQL Server are:
Number of users  vCPUs  Memory  Example VM types
up to 20         4      16 GB   D4s_v3, E4s_v3
up to 40         8      32 GB   D8s_v3, E8s_v3
up to 80         16     64 GB   D16s_v3, E16s_v3
The sizing listed above should give you an idea of where to start. You may need fewer
or more resources, in which case adapting on Azure is easy: a change between VM
types is possible with just a restart of the VM.
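Such a VM type change can be sketched with the Azure CLI; the resource group, VM name, and target size below are placeholders.

```shell
# Sketch with placeholder names: switching to a different VM type later is a
# single operation, at the cost of a VM restart.
resize_cmd="az vm resize --resource-group SAP-RG --name b1-dbms-vm --size Standard_E16s_v3"
echo "$resize_cmd"
```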
For high availability and disaster recovery configurations around SAP HANA as database
for Business One in Azure, you should read the documentation SAP HANA high
availability for Azure virtual machines and the documentation pointed to from that
document.
For SAP HANA backup and restore strategies, you should read the document Backup
guide for SAP HANA on Azure Virtual Machines and the documentation pointed to from
that document.
Many customers use SAP Landscape Management (LaMa) to operate and monitor their
SAP landscape. Since version 3.0 SP05, SAP LaMa includes a connector to Azure by
default. You can use this connector to deallocate and start virtual machines (VMs), copy
and relocate managed disks, and delete managed disks. With these basic operations,
you can relocate, copy, clone, and refresh SAP systems by using SAP LaMa.
This guide describes how to set up the SAP LaMa connector for Azure. It also describes
how to create and configure virtual machines that you can use to install adaptive SAP
systems.
Resources
The following SAP Notes are related to the topic of SAP LaMa on Azure:
You can find more information in the SAP Help Portal for SAP LaMa .
Note
If you need support for SAP LaMa or the connector for Azure, open an incident with
SAP on component BC-VCM-LVM-HYPERV.
General remarks
Be sure to enable Automatic Mountpoint Creation in Setup > Settings > Engine.
If you use dynamic IP address allocation in the subnet that SAP LaMa also uses,
preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared,
the IP addresses are not reserved and might get allocated to other virtual
machines.
If you sign in to managed hosts, don't block file systems from being unmounted.
If you sign in to a Linux virtual machine and change the working directory to a
directory in a mount point (for example, /usr/sap/AH1/ASCS00/exe), the volume
can't be unmounted and a relocate or unprepare operation fails.
The connector for Azure uses the Azure Resource Manager API to manage your Azure
resources. SAP LaMa can use a service principal or a managed identity to authenticate
against this API. If your SAP LaMa instance is running on an Azure VM, we recommend
using a managed identity.
By default, the managed identity doesn't have permissions to access your Azure
resources. Assign the Contributor role to the VM identity at resource group scope for all
resource groups that contain SAP systems that SAP LaMa should manage. For detailed
steps, see Assign Azure roles using the Azure portal.
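The role assignment can be sketched with the Azure CLI; the principal ID and scope below are placeholders.

```shell
# Sketch with placeholder IDs: grant the VM's managed identity the Contributor
# role on a resource group that contains SAP systems managed by SAP LaMa.
principal_id="11111111-1111-1111-1111-111111111111"   # VM identity object ID
scope="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SAP-RG"

assign_cmd="az role assignment create --assignee ${principal_id} --role Contributor --scope ${scope}"
echo "$assign_cmd"
```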
In your configuration of the SAP LaMa connector for Azure, select Use Managed
Identity to enable the use of the managed identity. If you want to use a system-
assigned identity, leave the User Name field empty. If you want to use a user-assigned
identity, enter its ID in the User Name field.
User Name: Enter the service principal application ID or the ID of the user-
assigned identity of the virtual machine.
Password: Enter the service principal key/password. You can leave this field empty
if you use a system-assigned or user-assigned identity.
Proxy host: Enter the host name of the proxy if SAP LaMa needs a proxy to
connect to the internet.
Change Storage Type to save costs: Enable this setting if the Azure adapter should
change the storage type of the managed disks to save costs when the disks are not
in use.
For data disks that are referenced in an SAP instance configuration, the adapter
changes the disk type to Standard Storage during an instance unprepare operation
and back to the original storage type during an instance prepare operation.
If you stop a virtual machine in SAP LaMa, the adapter changes the storage type of
all attached disks, including the OS disk, to Standard Storage. If you start a virtual
machine in SAP LaMa, the adapter changes the storage type back to the original
storage type.
Select Test Configuration to validate your input. You should see the following message
at the bottom of the website:
We recommend using a separate subnet for all virtual machines that you want to
manage with SAP LaMa. We also recommend that you don't use dynamic IP addresses
to prevent IP address "stealing" when you're deploying new virtual machines and SAP
instances are unprepared.
Note
If possible, remove all virtual machine extensions. They might cause long runtimes
for detaching disks from a virtual machine.
Make sure that the user <hanasid>adm, the user <sapsid>adm, and the group sapsys
exist on the target machine with the same ID and group ID, or use LDAP. Enable and
start the Network File System (NFS) server on the virtual machines that should be used
to run SAP NetWeaver ABAP Central Services (ASCS) or SAP Central Services (SCS).
Manual deployment
SAP LaMa communicates with the virtual machine by using the SAP Host Agent. If you
deploy the virtual machines manually or are not using the Azure Resource Manager
template from the quickstart repository, be sure to install the latest SAP Host Agent and
the SAP Adaptive Extensions. For more information about the required patch levels for
Azure, see SAP Note 2343511 .
Create a new virtual machine with one of the supported operating systems listed in SAP
Note 2343511 . Add more IP configurations for the SAP instances. Each instance needs
at least one IP address and must be installed using a virtual host name.
Create a new virtual machine with one of the supported operating systems for SAP
HANA, as listed in SAP Note 2343511 . Add one extra IP configuration for SAP HANA
and one per HANA tenant.
SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log.
Manual deployment for Oracle Database on Linux
Create a new virtual machine with one of the supported operating systems for Oracle
databases, as listed in SAP Note 2343511 . Add one extra IP configuration for the
Oracle database.
The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle.
The SQL Server database server needs disks for the database data and log files. It also
needs disks for c:\usr\sap.
Be sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine
that you want to use as a target for relocating an SAP NetWeaver application server or
as a system copy/clone target. SAP LaMa can't relocate SQL Server itself, so a virtual
machine that you want to use for these purposes needs SQL Server preinstalled.
SAPCAR 7.21
SAP Host Agent 7.21
SAP Adaptive Extension 1.0 EXT
Also download the following components from the Microsoft Download Center :
The components are required for template deployment. The easiest way to make them
available to the template is to upload them to an Azure storage account and create a
shared access signature (SAS).
sapSystemId : The SAP system ID (SID). It's used to create the disk layout (for
example, /usr/sap/<sapsid>).
computerName : The computer name of the new virtual machine. SAP LaMa also uses
this parameter. When you use this template to provision a new virtual machine as
part of a system copy, SAP LaMa waits until the host with this computer name can
be reached.
osType : The type of the operating system that you want to deploy.
dbtype : The type of the database. This parameter is used to determine how many
extra IP configurations need to be added and how the disk layout should look.
sapSystemSize : The size of the SAP system that you want to deploy. It's used to
determine the virtual machine instance type and size.
adminPassword : The password for the virtual machine. You can also provide a public
SSH key instead.
sshKeyData : The public SSH key for the virtual machine. It's supported only for
Linux virtual machines.
deployEmptyTarget : An empty target that you can deploy if you want to use the
virtual machine as a target for a system copy or relocation.
sapcarLocation : The location for the SAPCAR application that matches the
operating system that you deploy. SAPCAR is used to extract the archives that you
provide in other parameters.
sapHostAgentArchiveLocation : The location of the SAP Host Agent archive. The SAP
Host Agent is installed during the deployment.
odbcDriverLocation : The location of the ODBC driver that you want to install. Only
the Microsoft ODBC driver for SQL Server is supported.
sapsysGid : The Linux group ID of the sapsys group. It's not required for Windows.
_artifactsLocation : The base URI, which contains artifacts that this template
requires. When you deploy the template by using the accompanying scripts, a
private location in the subscription is used and this value is automatically
generated. You need this URI only if you don't deploy the template from GitHub.
_artifactsLocationSasToken : The SAS token for the artifacts location. When you
deploy the template by using the accompanying scripts, an SAS token is automatically
generated. You need this token only if you don't deploy the template from GitHub.
SAP HANA
The following examples assume that you install the SAP HANA system with SID HN1 and
the SAP NetWeaver system with SID AH1. The virtual host names are:
Linux
Bash
Windows
Bash
Linux
Add the following profile parameter to the SAP Host Agent profile, which is located at
/usr/sap/hostctrl/exe/host_profile. For more information, see SAP Note 2628497 .
Bash
acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1
Install SAP NetWeaver ASCS for SAP HANA on Azure NetApp Files
Azure NetApp Files provides NFS for Azure. In the context of SAP LaMa, this simplifies
the creation of the ASCS instances and the subsequent installation of application
servers. Previously, the ASCS instance also had to act as an NFS server, and the
parameter acosprep/nfs_paths had to be added to the host profile of the SAP Host
Agent.
Network requirements
Azure NetApp Files requires a delegated subnet, which must be part of the same virtual
network as the SAP servers. Here's an example for such a configuration:
Because one pool might contain volumes for multiple systems, choose a self-
explanatory naming scheme. Adding the SID helps to group related volumes
together.
For the ASCS and AS instances, you need the following mounts: /sapmnt/<SID>,
/usr/sap/<SID>, and /home/<sid>adm. Optionally, you need /usr/sap/trans for the
central transport directory, which is used by at least all systems of one landscape.
5. Repeat the preceding steps for the other volumes.
6. Mount the volumes to the systems where the initial installation with SAP SWPM is
performed:
a. Create the mount points. In this case, the SID is AN1, so you run the following
commands:
Bash
mkdir -p /home/an1adm
mkdir -p /sapmnt/AN1
mkdir -p /usr/sap/AN1
mkdir -p /usr/sap/trans
b. Mount the Azure NetApp Files volumes by using the following commands:
Bash
You can also look up the mount commands from the portal. The local mount
points need to be adjusted.
c. Run the df -h command. Check the output to verify that you mounted the
volumes correctly.
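The mount commands in step 6b can be sketched as follows; the delegated-subnet mount endpoint (10.0.2.4) and the export names are placeholders that you would look up in the portal.

```shell
# Sketch of the Azure NetApp Files mounts. The endpoint IP and export names
# are placeholders; look up the real mount commands in the Azure portal.
nfs_host="10.0.2.4"
mount_count=0
for pair in an1-home:/home/an1adm an1-sapmnt:/sapmnt/AN1 \
            an1-usrsap:/usr/sap/AN1 trans:/usr/sap/trans; do
  export_name="${pair%%:*}"
  mount_point="${pair#*:}"
  echo "sudo mount -t nfs -o rw,hard,vers=3 ${nfs_host}:/${export_name} ${mount_point}"
  mount_count=$((mount_count + 1))
done
```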
7. Perform the installation with SWPM. The same steps must be performed for at
least one AS instance.
After the successful installation, the system must be discovered within SAP LaMa.
The mount points should look like the following screenshot for the ASCS and AS
instances.
Note
This is an example. The IP addresses and export path are different from the
ones that you used before.
If you install SAP HANA by using the SAP HANA database lifecycle manager (HDBLCM)
command-line tool, use the --hostname parameter to provide a virtual host name.
Add the IP address of the virtual host name of the database to a network interface. The
recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT,
be sure to remount the IP address after a reboot.
Add another virtual host name and IP address for the name that the application servers
use to connect to the HANA tenant:
Bash
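As a hedged sketch, mounting the additional virtual IP with SAPACEXT could look like the following; the sapacext path, flags, interface, virtual host name, and netmask are assumptions to adapt to your system.

```shell
# Hedged sketch: sapacext ships with the SAP Adaptive Extensions. The flags,
# interface, virtual host name, and netmask below are assumptions.
acext_cmd="sudo /usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-db -n 255.255.255.128"
echo "$acext_cmd"
```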
Run the database instance installation of SWPM on the application server VM, not on
the HANA VM. In the Database for SAP System dialog, for Database Host, use ah1-db.
Before you start SWPM, you need to mount the IP address of the virtual host name of
the application server. The recommended way is to use SAPACEXT. If you mount the IP
address by using SAPACEXT, be sure to remount the IP address after a reboot.
Linux
Bash
Windows
Bash
user store (hdbuserstore). You can add this parameter manually after the database
instance installation with SWPM or run SWPM with the following code:
Bash
# from https://blogs.sap.com/2015/04/14/sap-hana-client-software-different-
ways-to-set-the-connectivity-data/
/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO
If you set it manually, you also need to create new hdbuserstore entries:
Bash
# run as <sapsid>adm
/usr/sap/AH1/hdbclient/hdbuserstore LIST
# reuse the port that was listed from the command above, in this example
35041
/usr/sap/AH1/hdbclient/hdbuserstore SET DEFAULT ah1-db:35041@AH1 SAPABAP1
<password>
In the Primary Application Server Instance dialog, for PAS Instance Host Name, use
ah1-di-0.
Back up SYSTEMDB and all tenant databases before you try to copy a tenant, move a
tenant, or create a system replication.
Microsoft SQL Server
The following examples assume that you install the SAP NetWeaver system with SID
AS1. The virtual host names are:
as1-db for the SQL Server instance that the SAP NetWeaver system uses
as1-ascs for SAP NetWeaver ASCS
as1-di-0 for the first SAP NetWeaver application server
Bash
Bash
Run the database instance installation of SWPM on the SQL Server virtual machine. Use
SAPINST_USE_HOSTNAME=as1-db to override the host name that's used to connect to SQL
Server. If you deployed the virtual machine by using the Azure Resource Manager
template, set the directory that's used for the database data files to C:\sql\data, and set
the database log file to C:\sql\log.
Make sure that the user NT AUTHORITY\SYSTEM has access to the SQL Server instance
and has the server role sysadmin. For more information, see SAP Notes 1877727 and
2562184 .
Bash
In the Primary Application Server Instance dialog, for PAS Instance Host Name, use
as1-di-0.
Troubleshooting
Error: RuntimeValidationException
Solution: Make sure that NT AUTHORITY\SYSTEM can access the SQL Server
instance. See SAP Note 2562184 .
Error: HAOperationException
An error occurred in the system copy Start step of the database instance. The
error message includes: permission to alter database 'AS2', the database does
not exist, or the ...
Solution: Make sure that NT AUTHORITY\SYSTEM can access the SQL Server
instance. See SAP Note 2562184 .
Error:
Solution: Make sure that the sapmnt share on ASCS/SCS has full access for
SAP_AS1_GlobalAdmin.
Error:
Solution: The computer account of the application server needs write access to
the profile.
Error: HAOperationException
An error occurred when full copy was not enabled in the storage step.
Solution: Ignore warnings in the step and try again. This issue was fixed in a
support package/patch of SAP LaMa.
Command output:
Solution: Make sure that the NFS server service is enabled on the target virtual
machine for relocation.
Error:
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir' reported an ...
Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.
Error:
Last error reported by the step: Caught ESAPinstException in module call:
Validator of step
'|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|
Solution: If you use a recent SAP kernel, SWPM can no longer use the message
server of ASCS to determine whether the system is a Unicode system. See SAP
Note 2445033 .
Until this issue is fixed in a new support package/patch of SAP LaMa, work
around it by setting the profile parameter OS_UNICODE=uc in the default profile
of your SAP system.
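As a sketch, the workaround from SAP Note 2445033 amounts to one line in the default profile (DEFAULT.PFL) of the SAP system:

```
# Workaround per SAP Note 2445033: declare the system as Unicode explicitly
OS_UNICODE = uc
```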
Error:
Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.
Error:
Solution: Make sure that the Microsoft ODBC driver for SQL Server is installed
on the virtual machine on which you want to install the application server.
Error:
Last error reported by the step: System call failed. DETAILS: Error 13
(494) in file
(\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp),
stack trace: CThrThread.cpp: 85:
syxxcfstrm.cpp: 29:
CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFile
CSyFileStream2Impl::open()
Solution: Make sure that SWPM is running with a user who has access to the
profile. You can configure this user in the Application Server Installation wizard.
Error:
Last error reported by the step: System call failed. DETAILS: Error 5
(0x00000005) (Access is denied.) in execution of system call
Solution: Add a host rule in the isolation step to allow communication from the
VM to the domain controller.
Next steps
SAP HANA on Azure operations guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP Cloud Appliance Library
Article • 04/09/2024
SAP Cloud Appliance Library offers a quick and easy way to create SAP workloads in
Azure. You can set up a fully configured demo environment from an Appliance Template
or deploy a standardized system for an SAP product based on default or custom SAP
software installation stacks. This page lists the latest Appliance Templates,
followed by the latest SAP S/4HANA stacks for production-ready deployments.
To deploy an appliance template, you'll need to authenticate with your S-User or P-User.
You can create a P-User free of charge via the SAP Community .
For details on Azure account creation, see the SAP learning video and description.
You'll also find detailed answers to questions related to SAP Cloud Appliance
Library on Azure in the SAP CAL FAQ .
The online library is continuously updated with Appliances for demo, proof of concept
and exploration of new business cases. For the most recent ones, select “Create
Appliance” here from the list – or visit cal.sap.com for further templates.
SAP S/4HANA 2023, Fully-Activated Appliance (December 14 2023): This appliance
contains SAP S/4HANA 2023 (SP00) with pre-activated SAP Best Practices for SAP
S/4HANA core functions, and further scenarios for Service, Master Data
Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM),
Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA
Studio, Windows remote desktop, or the backend operating system for full
administrative access. [Create Appliance]
SAP S/4HANA 2022 FPS02, Fully-Activated Appliance (July 16 2023): This appliance
contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP
S/4HANA core functions, and further scenarios for Service, Master Data
Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM),
Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA
Studio, Windows remote desktop, or the backend operating system for full
administrative access. [Create Appliance]
SAP BW/4HANA 2023 Developer Edition (April 07 2024): This solution offers you an
insight into SAP BW/4HANA 2023. SAP BW/4HANA is the next-generation data
warehouse optimized for SAP HANA. Besides the basic BW/4HANA options, the
solution offers a range of SAP HANA optimized BW/4HANA Content and the next
step of hybrid scenarios with SAP Datasphere. [Create Appliance]
SAP S/4HANA 2022, Fully-Activated Appliance (December 15 2022): This appliance
contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP
S/4HANA core functions, and further scenarios for Service, Master Data
Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM),
Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA
Studio, Windows remote desktop, or the backend operating system for full
administrative access. [Create Appliance]
SAP Focused Run 4.0 FP02, unconfigured (December 07 2023): SAP Focused Run is
designed specifically for businesses that need high-volume system and
application monitoring, alerting, and analytics. It's a powerful solution for
service providers who want to host all their customers in one central,
scalable, safe, and automated environment. It also addresses customers with
advanced needs regarding system management, user monitoring, integration ...
[Create Appliance]
This solution comes as a standard S/4HANA system installation including High
Availability capabilities to ensure higher system uptime for productive usage.
The system parameters can be customized during initial provisioning according
to the requirements for the target system. [Details]
Within a few hours, a healthy SAP S/4HANA appliance or product is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through SAP
CAL on Azure. The support queue is BC-VCM-CAL.
Deploy SAP IDES EHP7 SP3 for SAP ERP
6.0 on Azure
Article • 02/10/2023
This article describes how to deploy an SAP IDES system running with SQL Server and
the Windows operating system on Azure via the SAP Cloud Appliance Library (SAP CAL)
3.0. The screenshots show the step-by-step process. To deploy a different solution,
follow the same steps.
To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also
has a blog about the new SAP Cloud Appliance Library 3.0 .
7 Note
As of May 29, 2017, you can use the Azure Resource Manager deployment model in
addition to the less-preferred classic deployment model to deploy the SAP CAL. We
recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.
If you already created an SAP CAL account that uses the classic model, you need to
create another SAP CAL account. This account needs to exclusively deploy into Azure by
using the Resource Manager model.
After you sign in to the SAP CAL, the first page usually leads you to the Solutions page.
The solutions offered on the SAP CAL are steadily increasing, so you might need to
scroll quite a bit to find the solution you want. The highlighted Windows-based SAP
IDES solution that is available exclusively on Azure demonstrates the deployment
process:
Create an account in the SAP CAL
1. To sign in to the SAP CAL for the first time, use your SAP S-User or other user
registered with SAP. Then define an SAP CAL account that is used by the SAP CAL
to deploy appliances on Azure. In the account definition, you need to:
b. Enter your Azure subscription. An SAP CAL account can be assigned to one
subscription only. If you need more than one subscription, you need to create
another SAP CAL account.
c. Give the SAP CAL permission to deploy into your Azure subscription.
7 Note
The next steps show how to create an SAP CAL account for Resource Manager
deployments. If you already have an SAP CAL account that is linked to the
classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource
Manager model.
2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer
preferred.
b. Microsoft Azure is the Resource Manager deployment model and is the
recommended choice.
3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click
Authorize. The following page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be
the coadministrator of the Azure subscription you selected. The following page
appears in the browser tab:
6. Click Accept. If the authorization is successful, the SAP CAL account definition
displays again. After a short time, a message confirms that the authorization
process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in
the text box on the right and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL,
click Review.
9. To create the association between your user and the newly created SAP CAL
account, click Create.
7 Note
Before you can deploy the SAP IDES solution based on Windows and SQL Server,
you might need to sign up for an SAP CAL subscription. Otherwise, the solution
might show up as Locked on the overview page.
Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows
and SQL Server. Click Create Instance, and confirm the usage and terms and
conditions.
3. Click Create. After some time, depending on the size and complexity of the
solution (the SAP CAL provides an estimate), the status is shown as active and
ready for use:
4. To find the resource group and all its objects that were created by the SAP CAL, go
to the Azure portal. The name of the virtual machine starts with the instance
name that you gave in the SAP CAL.
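As a sketch (assuming the Azure CLI and a hypothetical instance name MyCALInstance), the created VMs can also be located from the command line:

```shell
# Hypothetical sketch: list VMs whose names start with the SAP CAL instance name
az vm list \
  --query "[?starts_with(name, 'MyCALInstance')].{name:name, resourceGroup:resourceGroup}" \
  --output table
```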
5. On the SAP CAL portal, go to the deployed instances and click Connect. The
following pop-up window appears:
6. Before you can use one of the options to connect to the deployed systems, click
Getting Started Guide. The documentation names the users for each of the
connectivity methods. The passwords for those users are set to the master
password you defined at the beginning of the deployment process. In the
documentation, other more functional users are listed with their passwords, which
you can use to sign in to the deployed system.
If you bought an SAP CAL subscription, SAP fully supports deployments through the
SAP CAL on Azure. The support queue is BC-VCM-CAL.
SAP Information Lifecycle Management
(ILM) with Microsoft Azure Blob Storage
Article • 02/10/2023
SAP Information Lifecycle Management (ILM) provides a broad range of capabilities
for managing data volumes and retention, as well as for decommissioning legacy
systems, while balancing total cost of ownership, risk, and legal compliance.
SAP ILM Store (a component of ILM) enables storing archive files and
attachments from the SAP system in Microsoft Azure Blob Storage, thus enabling
cloud storage.
How to
This document covers the creation and configuration of an Azure Blob Storage
account to be used with SAP ILM. This account is used to store archive data
from the S/4HANA system.
7 Note
Steps 2, 3 and 4 can either be done manually or by using the Microsoft Quickstart
template.
QuickStart template approach:
This is an automated approach to create the Azure account. You can find the template in
the Azure Quickstart Templates library .
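For the manual route, the storage account creation could be sketched with the Azure CLI as follows; the resource group, account name, and region are hypothetical:

```shell
# Hypothetical sketch: create a general-purpose v2 storage account
# to hold SAP ILM archive data
az group create --name rg-sap-ilm --location westeurope
az storage account create \
  --name sapilmarchive01 \
  --resource-group rg-sap-ilm \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2
```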
7 Note
Make sure that Client secret is added as per the section Add Credentials –
Add a Client Secret
7 Note
Ensure no other user has access to this storage account apart from the
registered application.
During the account setup and configuration, it is recommended to refer to
Security recommendations for Blob Storage. With the completion of this setup,
the Blob Storage account is ready to use with SAP ILM to store archive files
from the S/4HANA system.
Next steps
SAP ILM on the SAP help portal
Integrating Azure with SAP RISE
managed workloads
Article • 01/15/2024
For customers with SAP solutions such as RISE with SAP Enterprise Cloud Services (ECS)
and SAP S/4HANA Cloud, private edition (PCE) deployed in Azure, integrating the SAP
managed environment with their own Azure ecosystem and third party applications is of
particular importance. The following articles explain the concepts and best practices to
follow for a performant and secure solution.
There might be some circumstances when an initial request needs to be placed with
SAP RISE for enablement. However, most Azure scenarios depend on open network
communication to available SAP interfaces and on activities entirely within the
customer's responsibility. The diagram shown doesn't replace or extend an
existing responsibility matrix between the customer and SAP RISE/ECS.
First steps
Review the specifics within this document and then jump to individual documents for
your scenario. From the integration table, some examples are listed.
Azure support
SAP RISE customers in Azure have the SAP landscape run by SAP in an Azure
subscription owned by SAP. The subscription and all Azure resources of your SAP
environment are visible to and managed by SAP only. In turn, the customer's own Azure
environment contains applications that interact with the SAP systems. Elements such as
virtual networks, network security groups, firewalls, routing, Azure services such as Azure
Data Factory and others running inside the customer subscription access the SAP
managed landscape. When you engage with Azure support, only resources in your own
subscriptions are in scope. Contact SAP for issues with any resources operated in SAP's
Azure subscriptions for your RISE workload.
As part of your RISE project, document the interfaces and transfer points between
your own Azure environment, the SAP workload managed by SAP RISE, and
on-premises. This document should include network information such as address
spaces, firewalls, routing, file shares, Azure services, and DNS. Document the
ownership of each interface partner and where each resource runs, so that you
can access this information quickly in a troubleshooting and support situation.
Contact SAP's support organization for services running in SAP's Azure
subscriptions.
) Important
For all details about RISE with SAP and SAP S/4HANA Cloud private edition, contact
your SAP representative.
RISE architecture
SAP creates and manages the entire SAP RISE architecture running in SAP's subscription
and Azure tenant. SAP also decides, validates and deploys all technical elements and
details used by SAP for RISE in Azure. Microsoft and SAP are continuously working
together to create the Azure infrastructure architectures optimized to support the RISE
SLAs, to apply Azure best practices as documented by Microsoft, and adapt these best
practices to the unique challenges of the RISE managed services. The cooperation on
Azure architecture as experienced by RISE customers includes continuous optimizations
and adoption of new Azure functionalities to provide added value for RISE customers.
Microsoft documents the integration with SAP RISE in these articles, but not the
details of SAP's architecture, which is SAP's intellectual property. SAP might
apply modifications and optimizations to Microsoft's recommended architecture
in their deployments, to fulfill RISE SLAs and customer expectations. Work
with SAP on configuration and customization of the deployed RISE landscape, to
fit your organization's requirements.
Next steps
Check out the documentation:
With your SAP landscape operated within RISE and running in a separate virtual
network, this article describes the available connectivity options.
For SAP RISE/ECS deployments, virtual network peering is the preferred way to
establish connectivity with the customer's existing Azure environment. Primary benefits are:
Virtual network peering can be set up within the same region as your SAP managed
environment, but also through global virtual network peering between any two
Azure regions. With SAP RISE/ECS available in many Azure regions , the region
should match the workload running in the customer's virtual networks because of
latency and peering cost considerations. However, some scenarios (for example,
a central S/4HANA deployment for a globally present company) also require
peering networks globally. For such a globally distributed SAP landscape, we
recommend using a multi-region network architecture within your own Azure
environment, with SAP RISE peering locally in each geography to your network hubs.
Both the SAP and customer virtual network(s) are protected with network security
groups (NSG), permitting communication on SAP and database ports through the
peering. Communication between the peered virtual networks is secured through these
NSGs, limiting communication to and from customer’s SAP environment.
Since SAP RISE/ECS runs in SAP’s Azure tenant and subscriptions, the virtual
network peering is set up between different tenants. You accomplish this
configuration by setting up the peering with the SAP-provided network’s Azure
resource ID and having SAP approve the peering. Add a user from the opposite
Microsoft Entra tenant as a guest user, accept the guest user invitation, and
follow the process documented in Create a virtual network peering - different
subscriptions. Contact your SAP representative for the exact steps required.
Engage the respective teams within your organization that deal with network,
user administration, and architecture to enable this process to be completed
swiftly.
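As a sketch of the customer-side step (assuming the Azure CLI, a hypothetical hub network hub-vnet in resource group rg-hub, and a resource ID supplied by SAP), the peering could be created like this:

```shell
# Hypothetical sketch: peer the customer hub vnet to the SAP RISE/ECS vnet
# in SAP's tenant. SAP must approve/create the corresponding peering on their side.
SAP_VNET_ID="/subscriptions/<sap-subscription-id>/resourceGroups/<sap-rg>/providers/Microsoft.Network/virtualNetworks/<sap-vnet>"

az network vnet peering create \
  --name to-sap-rise \
  --resource-group rg-hub \
  --vnet-name hub-vnet \
  --remote-vnet "$SAP_VNET_ID" \
  --allow-vnet-access
```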
VPN vnet-to-vnet
As an alternative to virtual network peering, a virtual private network (VPN)
connection can be established between VPN gateways deployed in both the SAP
RISE/ECS subscription and the customer's own. You can establish a vnet-to-vnet
connection between these two VPN gateways, enabling fast communication between
the two separate virtual networks. The respective networks and gateways can
reside in different Azure regions.
While virtual network peering is the recommended and more typical deployment model,
a VPN vnet-to-vnet connection can potentially simplify a complex virtual peering
between the customer and SAP RISE/ECS virtual networks. The VPN gateway acts as
the only point of entry into the customer’s network and is managed and secured
by a central team. Network throughput is limited by the chosen gateway SKU on
both sides. To address resiliency requirements, ensure that zone-redundant
virtual network gateways are used for such a connection.
Network security groups are in effect on both the customer and SAP virtual
networks, identically to the peering architecture, enabling communication to
SAP NetWeaver and HANA ports as required. For details on how to set up the VPN
connection and which settings should be used, contact your SAP representative.
7 Note
A virtual network can have only one gateway, local or remote. With virtual
network peering established between SAP RISE using remote gateway transit, no
gateways can be added in the SAP RISE/ECS virtual network. A combination of
virtual network peering with remote gateway transit together with another virtual
network gateway in the SAP RISE/ECS virtual network isn't possible.
The vWAN network hub is deployed and managed by the customer in their own
subscription. The customer also entirely manages the on-premises connection and
routing through the vWAN network hub, with access to the SAP RISE peered spoke
virtual network.
During your migration planning to SAP RISE, plan how, in each phase, SAP systems
are reachable for your user base and how data transfer to the RISE/ECS virtual
network is routed. Often, multiple locations and parties are involved, such as
existing service providers and data centers with their own connection to your
corporate network. Make sure no temporary solutions with VPN connections are
created without considering how SAP data for the most business-critical systems
gets migrated in later phases.
Two VMs inside the RISE/PCE Azure virtual network host DNS servers.
DNS zone transfer from the SAP DNS servers to the customer’s DNS servers is the
primary method to replicate DNS entries from the RISE/PCE environment.
Customer's Azure virtual networks are also using custom DNS configuration
referring to customer DNS servers located in Azure hub virtual network.
Optionally, customers can set up a private DNS forwarder within their Azure virtual
networks. Such forwarder then pushes DNS requests coming from Azure services
to SAP DNS servers that are targeted to the delegated zone (example
ecs.contoso.com).
DNS zone transfer applies to designs where customers operate a custom DNS
solution (for example, AD DS or BIND servers) within their hub virtual network.
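For customers running BIND (9.16 or later syntax) in the hub virtual network, accepting the transfer for the delegated zone could be sketched as follows; the zone name follows the ecs.contoso.com example, and the SAP DNS server IPs are placeholders:

```
// named.conf fragment (hypothetical): secondary zone replicated
// from the SAP RISE DNS servers
zone "ecs.contoso.com" {
    type secondary;
    primaries { 10.10.0.4; 10.10.0.5; };  // SAP-provided DNS server IPs (placeholders)
    file "db.ecs.contoso.com";
};
```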
7 Note
Neither Azure-provided DNS nor Azure private DNS zones support DNS zone
transfer, so they can't be used to accept DNS replication from SAP
RISE/PCE/ECS DNS servers. Additionally, SAP typically doesn't support external
DNS service providers for the delegated zone.
SAP published a blog post with details on the DNS implementation with SAP RISE in
Azure; see here for details .
For further reading about the usage of Azure DNS for SAP outside of SAP
RISE/ECS, see the details in the following blog post .
Should you enable Internet-bound or incoming traffic with SAP RISE, the network
communication is protected through various Azure technologies such as NSGs,
ASGs, Application Gateway with Web Application Firewall (WAF), proxy servers,
and others, depending on use and network protocols. These services are entirely
managed by SAP within the SAP RISE/ECS virtual network and subscription. The
network path between SAP RISE and the Internet typically remains within the SAP
RISE/ECS virtual network only and doesn't transit into or from the customer’s
own vnet(s).
Applications within a customer’s own virtual network connect to the Internet
directly from the respective virtual network or through the customer’s centrally
managed services such as Azure Firewall, Azure Application Gateway, NAT Gateway,
and others. Connectivity to SAP BTP from non-SAP RISE/ECS applications takes the
same Internet-bound network path on your side. Should an SAP Cloud Connector be
needed for such integration, run it on the customer’s VMs. In other words, SAP
BTP or any public endpoint communication is on a network path managed by the
customers themselves if no SAP RISE workload is involved.
SAP offers the Private Link Service for customers using SAP BTP on Azure. The SAP
Private Link Service connects SAP BTP services through a private IP range into
the customer’s Azure network, making them accessible privately through the
private link service instead of through the Internet. Contact SAP for the
availability of this service for SAP RISE/ECS workloads.
See SAP's documentation and a series of blog posts on the architecture of the SAP
BTP Private Link Service and private connectivity methods, dealing with DNS and
certificates, in the SAP blog series Getting Started with BTP Private Link
Service for Azure .
See the article Integrating Azure services with SAP RISE for how the available
connectivity allows you to extend your SAP landscape with Azure services.
Next steps
Check out the documentation:
Your SAP landscape running within SAP RISE can easily integrate with additional
applications on Azure. With the information about available interfaces to the SAP
RISE/ECS landscape, many scenarios with Azure Services are possible.
Data integration scenarios with Azure Data Factory or Synapse Analytics require a
self-hosted integration runtime or Azure Integration Runtime. For details see the
next chapter.
App integration scenarios with Microsoft services using ABAP with the ABAP SDK
for Azure and the Microsoft AI SDK for SAP . Installation requires prior setup of
abapGit . See this SAP blog post for more information about ABAP Platform
and ABAP Cloud environment.
SAP legacy protocol remote function call (RFC) support with built-in connectors
for Azure Logic Apps, Power Apps, and Power BI through the Microsoft on-premises
data gateway between the SAP RISE system and the Azure service. See the
chapters below for more details.
Find a comprehensive overview of all the available SAP and Microsoft integration
scenarios here.
For data connectors using the Azure IR, the IR accesses your SAP environment
through a public IP address. SAP RISE/ECS provides this endpoint through an
application gateway for this use, and communication and data movement take
place over HTTPS.
Data connectors within the self-hosted integration runtime communicate with the SAP
system within SAP RISE/ECS subscription and vnet through the established vnet peering
and private network address only. The established network security group rules limit
which application can communicate with the SAP system.
Learn more about the overall support for SAP data integration scenarios in our
Cloud Adoption Framework, with a detailed introduction to each SAP connector,
comparisons, and guidance. The whitepaper SAP data integration using Azure Data
Factory completes the picture.
With SAP RISE, the on-premises data gateway can connect to Azure services running
in the customer’s Azure subscription. The VM running the data gateway is
deployed and operated by the customer. The following high-level architecture
serves as an overview; a similar method can be used for either service.
The SAP RISE environment here provides access to the SAP ports for RFC and https
described earlier. The communication ports are accessed by the private network address
through the vnet peering or VPN site-to-site connection. The on-premises data gateway
VM running in customer’s Azure subscription uses the SAP .NET connector to run RFC,
BAPI, or IDoc calls through the RFC connection. Additionally, depending on the
service and how the communication is set up, a way to connect to the public IP
of the SAP system's REST API through HTTPS might be required. The HTTPS
connection to a public IP can be exposed through the SAP RISE/ECS managed
application gateway. This high-level architecture shows the possible
integration scenario. Alternatives, such as using single-tenant Logic Apps and
private endpoints to secure the communication, can be seen as extensions and
aren't described here.
SAP RISE/ECS exposes the communication ports for these applications to use but has no
knowledge about any details of the connected application or service running in a
customer’s subscription. Contact SAP for any SAP license details for any implications
accessing SAP data through Azure service connecting to the SAP system or database.
Next steps
Check out the documentation:
This article details the integration of Azure identity and security services with
an SAP RISE workload. Additionally, the use of some Azure monitoring services is
explained for an SAP RISE landscape.
Tutorial: Microsoft Entra Single sign-on (SSO) integration with SAP NetWeaver
Tutorial: Microsoft Entra single sign-on (SSO) integration with SAP Fiori
Tutorial: Microsoft Entra integration with SAP HANA
SAML/OAuth with Microsoft Entra ID for SAP Fiori, Web GUI, Portal, and HANA:
configuration by customer
SPNEGO with Active Directory (AD) for Web GUI and SAP Enterprise Portal:
configuration by customer and SAP
SSO against the Active Directory (AD) of your Windows domain for an ECS/RISE
managed SAP environment, with SAP SSO Secure Login Client, requires AD
integration for end-user devices. With SAP RISE, Windows systems aren't
integrated with the customer's Active Directory domain. This domain integration
isn't necessary for SSO with AD/Kerberos, because the domain security token is
read on the client device and exchanged securely with the SAP system. Contact
SAP if you require any changes to integrate AD-based SSO or to use third-party
products other than SAP SSO Secure Login Client, because some configuration on
RISE managed systems might be required.
Microsoft Sentinel with SAP RISE
The SAP RISE certified Microsoft Sentinel solution for SAP applications allows you to
monitor, detect, and respond to suspicious activities. Microsoft Sentinel guards your
critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other
clouds, or on-premises infrastructure.
The solution allows you to gain visibility into user activities on SAP RISE/ECS and the SAP
business logic layers, and to apply Sentinel's built-in content:
- Use a single console to monitor all your enterprise estate, including SAP instances in
SAP RISE/ECS on Azure and other clouds, SAP on Azure native infrastructure, and
on-premises estate.
- Detect and automatically respond to threats: detect suspicious activity, including
privilege escalation, unauthorized changes, sensitive transactions, data exfiltration,
and more, with out-of-the-box detection capabilities.
- Correlate SAP activity with other signals: detect SAP threats more accurately by
cross-correlating across endpoints, Microsoft Entra data, and more.
- Customize based on your needs: build your own detections to monitor sensitive
transactions and other business risks.
- Visualize the data with built-in workbooks.
For SAP RISE/ECS, the Microsoft Sentinel solution must be deployed in the customer's Azure
subscription. All parts of the Sentinel solution are managed by the customer, not by SAP.
Private network connectivity from the customer's virtual network is needed to reach the SAP
landscapes managed by SAP RISE/ECS. Typically, this connection is over the established
VNet peering or through the alternatives described in this document.
To enable the solution, only an authorized RFC user is required; nothing needs to be
installed on the SAP systems. The container-based SAP data collection agent included
with the solution can be installed either on a VM or in AKS or any other Kubernetes
environment. The collector agent uses an SAP service user to consume application log data
from your SAP landscape through the RFC interface using standard RFC calls.
- The following log fields/sources require an SAP transport change request: client IP
address information from the SAP security audit log, DB table logs (preview), and the
spool output log. Sentinel's built-in content (detections, workbooks, and playbooks)
provides extensive coverage and correlation without those log sources.
- SAP infrastructure and operating system logs aren't available to Sentinel in RISE,
including VMs running SAP, SAPControl data sources, and network resources placed
within ECS. SAP monitors elements of the Azure infrastructure and operating
system independently.
Use prebuilt playbooks for security orchestration, automation, and response (SOAR)
capabilities to react to threats quickly. A popular first scenario is SAP user blocking with
an intervention option from Microsoft Teams. The integration pattern can be applied to any
incident type and target service, extending toward SAP Business Technology Platform
(BTP) or Microsoft Entra ID to reduce the attack surface.
For more information on Microsoft Sentinel and SOAR for SAP, see the blog series From
zero to hero security coverage with Microsoft Sentinel for your critical SAP security
signals .
For more information on Microsoft Sentinel and SAP, including a deployment guide, see
Sentinel product documentation.
SAP RISE/ECS is a fully managed service for your SAP landscape, so Azure Monitor for SAP
solutions isn't intended for such a managed environment. SAP RISE/ECS doesn't support any
integration with Azure Monitor for SAP solutions. SAP's own monitoring and reporting is
used and provided to the customer as defined by your service description with SAP.
Next steps
Check out the documentation:
Virtual machine scale sets offer two orchestration modes that enable improved
virtual machine management. For SAP workloads, the virtual machine scale set
with flexible orchestration is the recommended and only supported option, as it
offers the ability to use different virtual machine SKUs and operating systems
within a single scale set.
The flexible orchestration mode of a virtual machine scale set provides the option to
create the scale set within a region or span it across availability zones. When you create
a flexible scale set within a region with platformFaultDomainCount > 1 (FD>1), the
VMs deployed in the scale set are distributed across the specified number of
fault domains in the same region. Creating the flexible scale set
across availability zones with platformFaultDomainCount = 1 (FD=1)
distributes the virtual machines across the specified zones, and within each zone the
scale set also distributes VMs across different fault domains on a best-effort basis.
For SAP workloads, only the flexible scale set with FD=1 is supported. The advantage
of using flexible scale sets with FD=1 for cross-zonal deployment, instead of a
traditional availability zone deployment, is that the VMs deployed with the scale set
are distributed across different fault domains within the zone in a best-effort
manner.
There are two ways to configure flexible virtual machine scale sets: with or without
a scaling profile. For SAP workloads, we recommend creating a flexible virtual machine
scale set without a scaling profile, because the autoscaling feature of a scale set with
a scaling profile doesn't work out of the box for SAP workloads. Currently, the flexible
virtual machine scale set is therefore used solely as a deployment framework for SAP.
) Important
After the creation of the scale set, the orchestration mode and configuration type
(with or without scaling profile) cannot be modified or updated at a later time.
Reference architecture of SAP workload
deployed with Flexible Virtual Machine Scale
Sets
When creating a virtual machine scale set with flexible orchestration across availability
zones, it's important to specify all the availability zones where you will deploy
your SAP system. The availability zones must be specified while creating the scale set,
as they can't be modified at a later stage.
By default, when configuring a flexible scale set across availability zones, the fault
domain count is set to 1. This means that the VM instances belonging to the scale set
are spread across different fault domains on a best-effort basis in each zone.
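Such a scale set can be created with the Azure CLI. The following is a minimal sketch with hypothetical resource names; adjust the resource group, scale set name, location, and zone list to your environment:

```shell
# Sketch (hypothetical names): create a flexible scale set without a scaling
# profile, spanning zones 1-3 with platformFaultDomainCount=1 (FD=1).
az vmss create \
    --resource-group rg-sap-demo \
    --name vmss-sap-s01 \
    --location westeurope \
    --orchestration-mode Flexible \
    --zones 1 2 3 \
    --platform-fault-domain-count 1
```

Omitting the image and instance parameters creates the scale set without a scaling profile, so it serves purely as a deployment framework for manually created VMs.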
The diagram illustrates the architecture for deploying three separate systems using
flexible virtual machine scale sets with FD=1. Three flexible virtual machine scale sets
are created, one for each system, with a platform fault domain count of 1. The first
flexible scale set is created for a high-availability SAP system across two availability
zones (zones 1 and 2). The second scale set is created to configure an SBD device across
three availability zones (zones 1, 2, and 3), and the third scale set is created for a
nonproduction or non-HA SAP system in one availability zone (zone 1).
The virtual machines for each system are then manually deployed in their corresponding
availability zone within the scale set. For SAP System #1, high-availability components,
such as the primary and secondary databases and the ASCS/ERS instances, are deployed
across multiple zones. For the application tier VMs, the scale set distributes them across
different fault domains within a single zone, on a best-effort basis. Note that it isn't
possible to add more VMs for SAP System #1 in availability zone 3 at a later stage,
because the flexible scale set is limited to the two availability zones (zones 1 and 2)
specified at creation. For more information on high-availability deployment for SAP
workloads, see High-availability architecture and scenarios for SAP NetWeaver.
For the SBD devices, VMs are manually deployed in each availability zone within the scale
set. For SAP System #3, which is a nonproduction or non-HA environment, all the
components of the SAP system are deployed in a single zone.
7 Note
When creating a flexible scale set for zonal deployment, it's not possible to set
platformFaultDomainCount to a value higher than 1.
Azure portal
To set up a virtual machine scale set without a scaling profile using the Azure portal,
proceed as follows:
7 Note
For SAP workloads, only a flexible scale set with FD=1 is supported. Don't
configure the scale set with "fixed spreading" as the allocation policy.
Once you have created the flexible virtual machine scale set, you can create a virtual
machine by following the quickstart guide. When configuring the virtual machine, be
sure to select "virtual machine scale set" under availability options and choose the
flexible scale set you created. The portal lists all the zones that you included when
creating the flexible scale set, so you can select the desired availability zone for your VM.
Follow the remaining instructions in the quickstart guide to complete the virtual
machine configuration.
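The same step can also be done with the Azure CLI. A minimal sketch with hypothetical names (the resource group, scale set, VM size, and image are placeholders; pick values appropriate for your SAP system):

```shell
# Sketch (hypothetical names): create a VM inside an existing flexible scale
# set and pin it to availability zone 1.
az vm create \
    --resource-group rg-sap-demo \
    --name vm-sap-app01 \
    --vmss vmss-sap-s01 \
    --zone 1 \
    --size Standard_E4s_v5 \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys
```

The --vmss parameter assigns the new VM to the flexible scale set; the zone must be one of the zones specified when the scale set was created.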
FAQs for Virtual Machine Scale Set for
SAP workload
Article • 03/21/2024
Get answers to frequently asked questions about Virtual Machine Scale Sets for SAP
workload.
Keep in mind that the availability zone volume placement feature is still in preview.
Therefore, thoroughly review the documentation on managing availability zone volume
placement for Azure NetApp Files for additional considerations.
7 Note
General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support.
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on
Azure, the Azure VM Agent is installed on the virtual machine. The next step is to deploy
the Azure Extension for SAP, which is available in the Azure Extension Repository in the
global Azure datacenters.
To be sure SAP supports your environment, enable the Azure VM extension for SAP
solutions as described in Configure the Azure Extension for SAP.
SAP resources
When you are setting up your SAP software deployment, you need the following SAP
resources:
- SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
- SAP Note 1409604 has the required SAP Host Agent version for Windows in Azure.
- SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
- SAP Note 2243692 has information about SAP licensing on Linux in Azure.
- SAP Note 1999351 has additional troubleshooting information for the Azure Extension for SAP.
Use the new version of the VM extension if any of the following applies:
- You want to install the VM extension with Terraform, Azure Resource Manager
templates, or means other than the Azure CLI or Azure PowerShell.
- You want to install the extension on SUSE SLES 15 or higher.
- You want to install the extension on Red Hat Enterprise Linux 8.1 or higher.
- You want to use Azure Ultra Disk or Standard managed disks.
- Microsoft or SAP support asks you to install the new extension.
Recommendation
We currently recommend using the standard version of the extension for each installation
where none of the use cases for the new version of the extension applies. We are
currently working on improving the new version of the VM extension so that we can make it
the default and deprecate the standard version of the extension. During this time, you
can use the new version. However, you need to make sure the VM extension can access
management.azure.com.
7 Note
Make sure to uninstall the VM Extension before switching between the two
versions.
Next steps
Standard Version of Azure VM extension for SAP solutions
New Version of Azure VM extension for SAP solutions
Standard Version of Azure VM extension
for SAP solutions
Article • 04/13/2023
Prerequisites
7 Note
General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support
7 Note
Make sure to uninstall the VM extension before switching between the standard
and the new version of the Azure Extension for SAP.
7 Note
There are two versions of the VM extension. This article covers the standard version
of the Azure VM extension for SAP. For guidance on how to install the new version,
see New Version of Azure VM extension for SAP solutions.
Check frequently for updates to the PowerShell cmdlets, which are usually updated
monthly. Follow the steps described in this article. Unless stated otherwise in SAP Note
1928533 or SAP Note 2015553, we recommend that you work with the latest
version of the Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your
computer, run this PowerShell command:
PowerShell
(Get-Module Az.Compute).Version
Check frequently for updates to Azure CLI, which usually is updated monthly.
To check the version of Azure CLI that is installed on your computer, run this command:
Console
az --version
1. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet. For more information, see Deploying Azure PowerShell cmdlets
2. Run the following PowerShell cmdlet. For a list of available environments, run
cmdlet Get-AzEnvironment . If you want to use global Azure, your environment is
AzureCloud. For Azure China 21Vianet, select AzureChinaCloud.
PowerShell
After you enter your account data, the script deploys the required extensions and
enables the required features. This can take several minutes. For more information
about Set-AzVMAEMExtension , see Set-AzVMAEMExtension.
The Set-AzVMAEMExtension configuration does all the steps to configure host data
collection for SAP.
The script output includes the following information:
- Confirmation that data collection for the OS disk and all additional data disks has
been configured.
- The next two messages confirm the configuration of storage metrics for a specific
storage account.
- One line of output gives the status of the actual update of the VM Extension for
SAP configuration.
- Another line of output confirms that the configuration has been deployed or
updated.
- The last line of output is informational. It shows your options for testing the VM
Extension for SAP configuration.
To check that all steps of Azure VM Extension for SAP configuration have been
executed successfully, and that the Azure Infrastructure provides the necessary
data, proceed with the readiness check for the Azure Extension for SAP, as
described in Readiness check.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.
1. Make sure that you have installed the latest version of the Azure CLI. For more
information, see Deploy Azure CLI
Azure CLI
az login
3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.
Azure CLI
Azure CLI
5. Verify that the Azure Extension for SAP is active on the Azure Linux VM. Check
whether the file /var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a
command prompt, run this command to display information collected by the Azure
Extension for SAP:
Console
cat /var/lib/AzureEnhancedMonitor/PerfCounters
Output
...
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
...
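To make the semicolon-separated counter lines easier to read, you can split out the counter name, value, and unit. The field positions below are inferred from the sample output above; verify them against your own PerfCounters file. A small sketch:

```shell
# Parse PerfCounters lines: field 3 = counter name, field 6 = value,
# field 7 = unit (positions inferred from the sample output above).
awk -F';' 'NF >= 7 { printf "%s = %s %s\n", $3, $6, $7 }' <<'EOF'
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
EOF
```

Point the same awk command at /var/lib/AzureEnhancedMonitor/PerfCounters instead of the here-document to inspect the live file.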
Update the configuration of the Azure Extension for SAP in any of the following scenarios:
- The joint Microsoft/SAP team extends the capabilities of the VM extension and
requests more or fewer counters.
- Microsoft introduces a new version of the underlying Azure infrastructure that
delivers the data, and the Azure Extension for SAP needs to be adapted to those
changes.
- You mount additional data disks to your Azure VM or remove a data disk. In
this scenario, update the collection of storage-related data. Changing your
configuration by adding or deleting endpoints or by assigning IP addresses to a
VM doesn't affect the extension configuration.
- You change the size of your Azure VM, for example, from size A5 to any other VM
size.
- You add new network interfaces to your Azure VM.
To update settings, update configuration of Azure Extension for SAP by following the
steps in Configure the Azure VM extension for SAP solutions with Azure CLI or
Configure the Azure VM extension for SAP solutions with PowerShell.
Run the readiness check for the Azure Extension for SAP as described in Readiness
check. If all readiness check results are positive and all relevant performance counters
appear OK, Azure Extension for SAP has been set up successfully. You can proceed with
the installation of SAP Host Agent as described in the SAP Notes in SAP resources. If the
readiness check indicates that counters are missing, run the health check for the Azure
Extension for SAP, as described in Health check for the Azure Extension for SAP
configuration. For more troubleshooting options, see Troubleshooting for Windows or
Troubleshooting for Linux.
Readiness check
This check makes sure that all performance metrics that appear inside your SAP
application are provided by the underlying Azure Extension for SAP.
3. At the command prompt, change the directory to the installation folder of the
Azure Extension for SAP:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop
The version in the path to the extension might vary. If you see folders for multiple
versions of the extension in the installation folder, check the configuration of the
AzureEnhancedMonitoring Windows service, and then switch to the folder
indicated as Path to executable.
7 Note
If the Azure Extension for SAP is not installed, or the AzureEnhancedMonitoring service
is not running, the extension has not been configured correctly. For detailed information
about how to troubleshoot the extension, see Troubleshooting for Windows or
Troubleshooting for Linux.
7 Note
Azperflib.exe output shows all populated Azure performance counters for SAP. At the
bottom of the list of collected counters, a summary and health indicator show the status
of Azure Extension for SAP.
Check the result returned for the Counters total output, which is reported as empty, and
for Health status, shown in the preceding figure.
Output | Description
API Calls - not available | Counters that aren't available might be either not applicable to the virtual machine configuration, or errors. See Health status.
Counters total - empty | The following two Azure storage counters can be empty: Storage Read Op Latency Server msec and Storage Read Op Latency E2E msec.
Expected result: Returns list of performance counters. The file should not be
empty.
Expected result: Returns one line where the error is none, for example,
3;config;Error;;0;0;none;0;1456416792;tst-servercs;
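A scripted version of this check might look as follows. This is a sketch; the position of the error field is inferred from the sample line above:

```shell
# Check the config status line from the PerfCounters output: field 7 holds
# the error text and should read "none" (position inferred from the sample).
line='3;config;Error;;0;0;none;0;1456416792;tst-servercs;'
err=$(printf '%s' "$line" | awk -F';' '{ print $7 }')
if [ "$err" = "none" ]; then
    echo "OK: no configuration error reported"
else
    echo "Configuration error: $err"
fi
```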
If the preceding check was not successful, run these additional checks:
Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon
2. Make sure that the Azure Extension for SAP is installed and running.
Expected result: Lists the content of the Azure Extension for SAP directory.
3. Install SAP Host Agent as described in SAP Note 1031096 , and check the output
of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d
If you already have an SAP NetWeaver ABAP application server installed, open
transaction ST06 and check whether monitoring is enabled.
If any of these checks fail, or if you need detailed information about how to redeploy the
extension, see Troubleshooting for Linux or Troubleshooting for Windows.
Health checks
If some of the infrastructure data is not delivered correctly as indicated by the tests
described in Readiness check, run the health checks described in this chapter to check
whether the Azure infrastructure and the Azure Extension for SAP are configured
correctly.
2. Run the following PowerShell cmdlet. For a list of available environments, run the
cmdlet Get-AzEnvironment . To use global Azure, select the AzureCloud
environment. For Azure China 21Vianet, select AzureChinaCloud.
PowerShell
3. The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Wait 15 minutes, and repeat the checks described in Readiness check and this chapter. If
the checks still indicate a problem with some or all counters, see Troubleshooting for
Linux or Troubleshooting for Windows.
7 Note
You might see some warnings in cases where you use Standard managed Azure
disks. Warnings are displayed instead of the tests returning "OK". This is
normal and intended for that disk type. See also Troubleshooting for Linux
or Troubleshooting for Windows.
1. Install the Azure CLI. Ensure that you use version 2.19.1 or later (preferably the
latest version).
Azure CLI
az login
3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.
Azure CLI
az extension add --name aem
The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Wait 15 minutes, and repeat the checks described in Readiness check and this chapter. If
the checks still indicate a problem with some or all counters, see Troubleshooting for
Linux or Troubleshooting for Windows.
Issue
Solution
The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine or rerun the Set-AzVMAEMExtension
configuration script.
Service for Azure Extension for SAP does not exist
Issue
Solution
If the service does not exist, the Azure Extension for SAP has not been installed correctly.
Redeploy the extension as described in Configure the Azure VM extension for SAP
solutions with Azure CLI or Configure the Azure VM extension for SAP solutions with
PowerShell.
After you deployed the extension, check again whether the Azure performance counters
are provided in the Azure VM.
Service for Azure Extension for SAP exists, but fails to start
Issue
The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start.
For more information, check the application event log.
Solution
The configuration is incorrect. Restart the Azure Extension for SAP in the VM, as
described in Configure the Azure Extension for SAP.
Issue
The directory \var\lib\waagent\ does not have a subdirectory for the Azure Extension for
SAP.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine and/or rerun the Set-AzVMAEMExtension
configuration script.
Issue
WARNING: [WARN] Standard Managed Disks are not supported. Extension will be
installed but no disk metrics will be available.
Executing azperflib.exe as described earlier can return a result indicating a non-healthy
state.
Solution
The messages are caused by the fact that Standard managed disks don't deliver the APIs
used by the Azure Extension for SAP to check on statistics of Standard Azure storage
accounts. This isn't a matter of concern. The reason for introducing data collection for
Standard disk storage accounts was the I/O throttling that occurred frequently. Managed
disks avoid such throttling by limiting the number of disks in a storage account.
Therefore, not having that type of data isn't critical.
For a complete and up-to-date list of known issues, see SAP Note 1999351 , which has
additional troubleshooting information for Azure Extension for SAP.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, rerun the
Set-AzVMAEMExtension configuration script as described in Configure the Azure VM
extension for SAP solutions with Azure CLI or Configure the Azure VM extension for SAP
solutions with PowerShell. You might have to wait for an hour because storage analytics
or diagnostics counters might not be created immediately after they are enabled. If the
problem persists, open an SAP customer support message on the component BC-OP-
NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.
Error code | Description | Fix
cfg/038 | The metric 'Disk type' is missing in the extension configuration file config.xml. 'Disk type', along with some other counters, was introduced in v2.2.0.68 on 12/16/2015. If you deployed the extension prior to 12/16/2015, it uses the old configuration file. The Azure extension framework automatically upgrades the extension to a newer version, but the config.xml remains unchanged. To update the configuration, download and execute the latest PowerShell setup script. | Run setup script
cfg/017 | Due to sysprep of the VM, your Windows SID has changed. | Redeploy after sysprep
wad/003 | Cannot read the WAD table. There is no connection to the WAD table. There can be several causes of this: 1) outdated configuration, 2) no network connection to Azure, 3) issues with WAD setup. | Run setup script; fix internet connection; contact support
Contact Support
In case of an unexpected error, or if there is no known solution, collect the
AzureEnhancedMonitoring_service.log file located in the folder
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\drop
(Windows) or /var/log/azure/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux
(Linux) and contact SAP support for further assistance.
Redeploy after sysprep
If you plan to build a generalized sysprepped OS image (which can include SAP
software), it is recommended that this image does not include the Azure extension for
SAP. You should install the Azure extension for SAP after the new instance of the
generalized OS image has been deployed.
However, if your generalized and sysprepped OS image already contains the Azure
Extension for SAP, you can apply the following workaround to reconfigure the extension,
on the newly deployed VM instance:
On the newly deployed VM instance delete the content of the following folders:
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\RuntimeSettings
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\<version>\Status
Follow the steps in chapter Configure the Azure Extension for SAP in this guide to
install the extension again.
In addition, if you need to set a static IP address for your Azure VM, don't set it
manually inside the Azure VM. Instead, set it by using Azure PowerShell, the Azure CLI, or
the Azure portal. The static IP is propagated via the Azure DHCP service.
Manually setting a static IP address inside the Azure VM is not supported, and might
lead to problems with the Azure extension for SAP.
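For example, with the Azure CLI the static address is set on the NIC's IP configuration rather than inside the guest. This is a sketch with hypothetical resource names; specifying --private-ip-address switches the allocation method to static:

```shell
# Sketch (hypothetical names): pin the private IP on the NIC in Azure;
# the guest OS continues to receive the address via Azure DHCP.
az network nic ip-config update \
    --resource-group rg-sap-demo \
    --nic-name vm-sap01-nic \
    --name ipconfig1 \
    --private-ip-address 10.0.0.10
```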
Next steps
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines planning and implementation for SAP NetWeaver
New Version of Azure VM extension for
SAP solutions
Article • 03/14/2023
Prerequisites
7 Note
General Support Statement: Support for the Azure Extension for SAP is provided
through SAP support channels. If you need assistance with the Azure VM extension
for SAP solutions, please open a support case with SAP Support
7 Note
Make sure to uninstall the VM extension before switching between the standard
and the new version of the Azure Extension for SAP.
7 Note
There are two versions of the VM extension. This article covers the new version of
the Azure VM extension for SAP. For guidance on how to install the standard
version, see Standard Version of Azure VM extension for SAP solutions.
Check frequently for updates to the PowerShell cmdlets, which are usually updated
monthly. Follow the steps described in this article. Unless stated otherwise in SAP Note
1928533 or SAP Note 2015553, we recommend that you work with the latest
version of the Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your
computer, run this PowerShell command:
PowerShell
(Get-Module Az.Compute).Version
Check frequently for updates to Azure CLI, which usually is updated monthly.
To check the version of Azure CLI that is installed on your computer, run this command:
Console
az --version
7 Note
The following steps require Owner privileges over the resource group or individual
resources (virtual machine, data disks, and network interfaces)
2. Make sure to uninstall the standard version of the VM Extension for SAP. Installing
both versions of the VM Extension for SAP on the same virtual machine isn't
supported.
3. Make sure that you have installed the latest version of the Azure PowerShell
cmdlet (at least 4.3.0). For more information, see Deploying Azure PowerShell
cmdlets.
4. Run the following PowerShell cmdlet. For a list of available environments, run
cmdlet Get-AzEnvironment . If you want to use global Azure, your environment is
AzureCloud. For Azure China 21Vianet, select AzureChinaCloud.
The VM Extension for SAP supports configuring a proxy that the extension should
use to connect to external resources, for example the Azure Resource Manager API.
Please use parameter -ProxyURI to set the proxy.
PowerShell
Log on to the virtual machine on which you enabled the VM Extension for SAP and
restart the SAP Host Agent if it was already installed. SAP Host Agent does not use
the VM Extension until it is restarted. It currently cannot detect that an extension
was installed after it was started.
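On Linux, restarting the host agent can be done as root, for example (this assumes the standard SAP Host Agent installation path; verify it for your system):

```shell
# Restart SAP Host Agent so it picks up the newly installed VM extension.
/usr/sap/hostctrl/exe/saphostexec -restart
```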
7 Note
The following steps require Owner privileges over the resource group or individual
resources (virtual machine, data disks, and so on)
2. Ensure that you uninstall the current version of the VM Extension for SAP. You can't
install both versions of the VM Extension for SAP on the same VM.
3. Install the latest version of Azure CLI 2.0 (version 2.19.1 or later).
4. Sign in with your Azure account:
Azure CLI
az login
5. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.
Azure CLI
The VM Extension for SAP supports configuring a proxy that the extension should
use to connect to external resources, for example the Azure Resource Manager API.
Please use parameter --proxy-uri to set the proxy.
Azure CLI
Log on to the virtual machine on which you enabled the VM Extension for SAP and
restart the SAP Host Agent if it was already installed. SAP Host Agent does not use
the VM Extension until it is restarted. It currently cannot detect that an extension
was installed after it was started.
Before deploying the VM Extension for SAP, make sure to assign a user-assigned or
system-assigned managed identity to the virtual machine. For more information, read the
following guides:
Configure managed identities for Azure resources on a VM using the Azure portal
Configure managed identities for Azure resources on an Azure VM using Azure CLI
Configure managed identities for Azure resources on an Azure VM using
PowerShell
Configure managed identities for Azure resources on an Azure VM using templates
Terraform VM Identity
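As a quick CLI sketch (hypothetical names), enabling a system-assigned identity on an existing VM:

```shell
# Sketch (hypothetical names): enable a system-assigned managed identity
# on the VM that will run the extension.
az vm identity assign --resource-group rg-sap-demo --name vm-sap01
```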
After assigning an identity to the virtual machine, give the VM read access to either the
resource group or the individual resources associated with the virtual machine (VM,
network interfaces, OS disks, and data disks). It's recommended to use the built-in
Reader role to grant access to these resources. You can also grant this access by
adding the VM identity to an Azure Active Directory group that already has read access
to the required resources. With a user-assigned identity that already has the required
permissions, you then no longer need Owner privileges to deploy the VM Extension for SAP.
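As a sketch (hypothetical names), assigning the Reader role to a VM's system-assigned identity at resource group scope with the Azure CLI could look like this:

```shell
# Sketch (hypothetical names): grant the VM's system-assigned identity
# Reader access on the resource group containing its resources.
principalId=$(az vm show --resource-group rg-sap-demo --name vm-sap01 \
    --query identity.principalId --output tsv)
scopeId=$(az group show --name rg-sap-demo --query id --output tsv)
az role assignment create --assignee "$principalId" --role Reader --scope "$scopeId"
```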
There are different ways to deploy the VM Extension for SAP manually. The following
sections show a few examples.
The extension currently supports the following configuration keys. In the example
below, the msi_res_id is shown.
msi_res_id: ID of the user assigned identity the extension should use to get the
required information about the VM and its resources
proxy: URL of the proxy the extension should use to connect to the internet, for
example to retrieve information about the virtual machine and its resources.
Terraform
settings = <<SETTINGS
{
"cfg":[
{
"key":"msi_res_id",
"value":"<user assigned resource id>"
}
]
}
SETTINGS
}
settings = <<SETTINGS
{
"cfg":[
]
}
SETTINGS
}
Azure PowerShell
PowerShell
# Windows
Get-AzVMExtensionImage -Location westeurope -PublisherName Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Windows
# Linux
Get-AzVMExtensionImage -Location westeurope -PublisherName Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Linux
Azure CLI
Azure CLI
# Windows
az vm extension image list --location westeurope --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows
# Linux
az vm extension image list --location westeurope --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux
Readiness check
This check makes sure that all performance metrics that appear inside your SAP
application are provided by the underlying Azure Extension for SAP.
Bash
curl http://127.0.0.1:11812/azure4sap/metrics
Expected result: Returns an XML document that contains the monitoring
information of the virtual machine, its disks and network interfaces.
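The curl check above can also be scripted. The following is a minimal sketch that assumes only that the endpoint returns a well-formed XML document; the exact schema of the monitoring document is not assumed here:

```python
# Sketch: automate the readiness check against the local extension endpoint.
# The URL comes from the readiness check above; we only verify that a
# parseable XML document comes back, without assuming its schema.
import urllib.request
import xml.etree.ElementTree as ET

METRICS_URL = "http://127.0.0.1:11812/azure4sap/metrics"

def is_valid_metrics_xml(payload: str) -> bool:
    """Return True if the payload parses as a well-formed XML document."""
    try:
        ET.fromstring(payload)
    except ET.ParseError:
        return False
    return True

def check_readiness(url: str = METRICS_URL, timeout: float = 5.0) -> bool:
    """Fetch the metrics endpoint and validate that it returns XML."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_valid_metrics_xml(resp.read().decode("utf-8"))
    except OSError:
        return False
```

Run check_readiness() on the VM itself; a False result corresponds to the failed check described in the troubleshooting steps that follow.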
If the preceding check was not successful, run these additional checks:
1. Make sure that the Azure VM Agent (waagent) is running.
Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon
2. Make sure that the Azure Extension for SAP is installed and running. Check the
content of the extension directory under /var/lib/waagent/ and the running monitor
process.
Expected result: Lists the content of the Azure Extension for SAP directory, and the
monitor process appears as an entry similar to:
1.0.0.82/AzureEnhancedMonitoring -monitor
3. Install SAP Host Agent as described in SAP Note 1031096 , and check the output
of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d
If you already have an SAP NetWeaver ABAP application server installed, open
transaction ST06 and check whether monitoring is enabled.
If any of these checks fail, see Troubleshooting for Windows or Troubleshooting for
Linux for detailed information about how to redeploy the extension.
Health checks
If some of the infrastructure data isn't delivered correctly, as indicated by the tests
described in Readiness check, run the health checks described in this chapter to
determine whether the Azure infrastructure and the Azure Extension for SAP are
configured correctly.
2. Run the following PowerShell cmdlet. For a list of available environments, run the
cmdlet Get-AzEnvironment . To use global Azure, select the AzureCloud
environment. For Azure China 21Vianet, select AzureChinaCloud.
PowerShell
3. The script tests the configuration of the virtual machine you selected.
Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Repeat the checks described in Readiness check and this chapter. If the checks still
indicate a problem with some or all counters, see Troubleshooting for Linux or
Troubleshooting for Windows.
1. Install Azure CLI 2.0. Ensure that you use version 2.19.1 or later (preferably the
latest version).
Azure CLI
az login
3. Install the Azure CLI AEM Extension. Ensure that you use version 0.2.2 or later.
Azure CLI
Azure CLI
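The install-and-verify steps can be sketched as follows; this assumes the aem extension for Azure CLI, and the resource group and VM names are placeholders:

```shell
# Install the AEM extension for Azure CLI (version 0.2.2 or later).
az extension add --name aem

# Verify the Azure Extension for SAP configuration on a VM
# (resource group and VM name are placeholders).
az vm aem verify --resource-group myRG --vm-name myVM
```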
The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK. If some checks do not display OK, run
the update cmdlet as described in Configure the Azure VM extension for SAP solutions
with Azure CLI or Configure the Azure VM extension for SAP solutions with PowerShell.
Repeat the checks described in Readiness check and this chapter. If the checks still
indicate a problem with some or all counters, see Troubleshooting for Linux or
Troubleshooting for Windows.
Issue
Solution
The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine or install the VM extension again.
If troubleshooting by using SAP Note 1999351 does not resolve the issue, open an
SAP customer support message on the component BC-OP-NT-AZR for Windows or
BC-OP-LNX-AZR for a Linux virtual machine. Attach the log file
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.MonitorX64Windows\<version>\logapp.txt
to the incident.
Issue
The directory /var/lib/waagent/ does not have a subdirectory for the Azure Extension for
SAP.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described
earlier). You might need to restart the machine, install the VM extension again, or both.
Next steps
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines planning and implementation for SAP NetWeaver
SAP BusinessObjects BI platform
planning and implementation guide on
Azure
Article • 06/16/2023
The purpose of this guide is to provide guidelines for planning, deploying, and
configuring SAP BusinessObjects BI Platform, also known as SAP BOBI Platform on
Azure. This guide is intended to cover common Azure services and features that are
relevant for SAP BOBI Platform. This guide isn't an exhaustive list of all possible
configuration options. It covers solutions common to typical deployment scenarios.
This guide isn't intended to replace the standard SAP BOBI Platform installation and
administration guides, operating system, or any database documentation.
Architecture overview
SAP BusinessObjects BI Platform is a self-contained system that can exist on a single
Azure virtual machine or can be scaled into a cluster of many Azure Virtual Machines
that run different components. SAP BOBI Platform consists of six conceptual tiers: Client
Tier, Web Tier, Management Tier, Storage Tier, Processing Tier, and Data Tier. (For more
details on each tier, see the Administrator Guide in the SAP BusinessObjects Business
Intelligence Platform help portal.) The following are high-level details of each tier:
Client Tier: It contains all desktop client applications that interact with the BI
platform to provide different kinds of reporting, analytic, and administrative
capabilities.
Web Tier: It contains web applications deployed to Java web application servers.
Web applications provide BI Platform functionality to end users through a web
browser.
Management Tier: It coordinates and controls all the components that make up the
BI Platform. It includes the Central Management Server (CMS), the Event Server,
and associated services.
Storage Tier: It's responsible for handling files, such as documents and reports. It
also handles report caching to save system resources when users access reports.
Processing Tier: It analyzes data and produces reports and other output types. It's
the only tier that accesses the databases that contain report data.
Data Tier: It consists of the database servers hosting the CMS system databases
and the Auditing Data Store.
The SAP BI Platform consists of a collection of servers running on one or more hosts. It's
essential that you choose the correct deployment strategy based on the sizing, business
need, and type of environment. For a small installation, like development or test, you
can use a single Azure virtual machine for the web application server, database server,
and all BI Platform servers. If you're using a Database-as-a-Service (DBaaS) offering
from Azure, the database server runs separately from the other components. For medium
and large installations, you can have servers running on multiple Azure virtual machines.
The diagram below illustrates the architecture of a large-scale deployment of the SAP
BOBI Platform on Azure virtual machines, with each component distributed. To ensure
infrastructure resilience against service disruption, VMs can be deployed by using
flexible scale sets, availability sets, or availability zones.
Architecture details
Load balancer
In SAP BOBI multi-instance deployment, Web application servers (or web tier) are
running on two or more hosts. To distribute user load evenly across web servers,
you can use a load balancer between end users and web servers. In Azure, you can
either use Azure Load Balancer or Azure Application Gateway to manage traffic to
your web servers.
The web server hosts the web applications of the SAP BOBI Platform, like CMC and BI
Launch Pad. To achieve high availability for the web server, you must deploy at least
two web application servers to manage redundancy and load balancing. In Azure,
these web application servers can be placed in flexible scale sets, availability
zones, or availability sets for better availability.
Tomcat is the default web application server for the SAP BI Platform. To achieve high
availability for Tomcat, enable session replication by using the Static Membership
Interceptor in Azure. It ensures that users can access the SAP BI web application even
when the Tomcat service on one host is disrupted.
) Important
By default, Tomcat uses a multicast IP and port for clustering, which is not
supported on Azure (SAP Note 2764907 ).
BI platform servers
BI Platform servers include all the services that are part of the SAP BOBI application
(management tier, processing tier, and storage tier). When a web server receives a
request, it detects each BI Platform server (specifically, all CMS servers in a cluster)
and automatically load balances their requests. If one of the BI Platform
hosts fails, the web server automatically sends requests to another host.
To achieve high availability or redundancy for BI Platform, you must deploy the
application in at least two Azure virtual machines. Based on the sizing, you can
scale your BI Platform to run on more Azure virtual machines.
File Repository Server contains all reports and other BI documents that have been
created. In multi-instance deployment, BI Platform servers are running on multiple
virtual machines and each VM should have access to these reports and other BI
documents. So, a filesystem needs to be shared across all BI platform servers.
In Azure, you can either use Azure Premium Files or Azure NetApp Files for File
Repository Server. Both of these Azure services have built-in redundancy.
Support matrix
This section describes the supportability of different SAP BOBI components, like the
SAP BusinessObjects BI Platform version, operating systems, and databases in Azure.
The SAP BI Platform runs on different operating systems and databases. The supported
combinations of operating system and database versions for the SAP BOBI Platform can
be found in the Product Availability Matrix for SAP BOBI.
Operating system
Azure supports the following operating systems for SAP BusinessObjects BI Platform
deployment.
The operating system versions that are listed in the Product Availability Matrix (PAM)
for SAP BusinessObjects BI Platform are supported as long as they're compatible to run
on Azure infrastructure.
Databases
The BI Platform needs a database for the CMS and the Auditing Data Store, which can be
installed on any supported database that is listed in the SAP Product Availability
Matrix, including the following:
Azure SQL Database (Supported database only for SAP BOBI Platform on
Windows)
It's a fully managed SQL Server database engine, based on the latest stable
Enterprise Edition of SQL Server. Azure SQL database handles most of the database
management functions such as upgrading, patching, and monitoring without user
involvement. With Azure SQL Database, you can create a highly available and high-
performance data storage layer for the applications and solutions in Azure. For
more details, check Azure SQL Database documentation.
Azure Database for MySQL (Supported database for SAP BOBI Platform on Linux and
Windows)
It's a relational database service powered by the MySQL community edition. Being
a fully managed Database-as-a-Service (DBaaS) offering, it can handle mission-critical
workloads with predictable performance and dynamic scalability. It has built-in high
availability, automatic backups, software patching, automatic failure detection, and
point-in-time restore for up to 35 days, which substantially reduces operational tasks.
For more details, check the Azure Database for MySQL documentation.
SAP HANA
SAP ASE
IBM DB2
MaxDB
This document provides guidelines for deploying the SAP BOBI Platform on Windows with
Azure SQL Database, and the SAP BOBI Platform on Linux with Azure Database for MySQL.
These are also our recommended approaches for running SAP BusinessObjects BI Platform
on Azure.
Sizing
Sizing is the process of determining the hardware requirements to run the application
efficiently. For the SAP BOBI Platform, sizing needs to be done by using the SAP sizing
tool called Quick Sizer . The tool provides a SAPS value based on the input, which then
needs to be mapped to Azure virtual machine types certified for SAP. SAP Note 1928533
provides the list of supported SAP products and Azure VM types, along with their SAPS
values. For more information on sizing, check the SAP BI Sizing Guide .
For the storage needs of the SAP BOBI Platform, Azure offers different types of managed
disks. For the SAP BOBI installation directory, it's recommended to use premium managed
disks; for a database that runs on virtual machines, follow the guidance provided in
DBMS deployment for SAP workload.
Azure supports two DBaaS offerings for the SAP BOBI Platform data tier: Azure SQL
Database (BI application running on Windows) and Azure Database for MySQL (BI
application running on Linux or Windows). Based on the sizing result, you can choose
the purchasing model that best fits your need.
Tip
For quick sizing reference, consider 800 SAPS = 1 vCPU while mapping the SAPS
result of SAP BOBI Platform database tier to Azure Database-as-a-Service (Azure
SQL Database or Azure Database for MySQL).
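As a quick arithmetic illustration of the tip above (treating 800 SAPS per vCPU strictly as the stated rule of thumb, not a published constant), a hypothetical helper:

```python
import math

SAPS_PER_VCPU = 800  # rule-of-thumb ratio from the tip above

def vcores_for_saps(saps: int) -> int:
    """Map a SAPS sizing result to a vCore count, rounding up."""
    return math.ceil(saps / SAPS_PER_VCPU)

# Example: a database tier sized at 6,000 SAPS maps to 8 vCores.
print(vcores_for_saps(6000))
```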
vCore-based
It lets you choose the number of vCores, the amount of memory, and the amount and
speed of storage. The vCore-based purchasing model also allows you to use Azure
Hybrid Benefit for SQL Server to gain cost savings. This model is suited for
customers who value flexibility, control, and transparency.
There are three service tier options offered in the vCore model: General Purpose,
Business Critical, and Hyperscale. The service tier defines the storage architecture,
space, I/O limits, and business continuity options related to availability and disaster
recovery. The following are high-level details of each service tier option:
1. General Purpose service tier is best suited for business workloads. It offers
budget-oriented, balanced, and scalable compute and storage options. For
more information, see Resource options and limits.
2. Business Critical service tier offers business applications the highest resilience
to failures by using several isolated replicas, and provides the highest I/O
performance per database replica. For more information, see Resource
options and limits.
3. Hyperscale service tier is best for business workloads with highly scalable
storage and read-scale requirements. It offers higher resilience to failures by
allowing configuration of more than one isolated database replica. For more
information, see Resource options and limits.
DTU-based
The DTU-based purchasing model offers a blend of compute, memory, and I/O
resources in three service tiers, to support light and heavy database workloads.
Compute sizes within each tier provide a different mix of these resources, to which
you can add additional storage resources. It's best suited for customers who want
simple, preconfigured resource options.
Serverless
It's more suitable for intermittent, unpredictable usage with low average compute
utilization over time, so this model can be used for nonproduction SAP BOBI
deployments.
7 Note
For SAP BOBI, it's convenient to use the vCore-based model and choose either the
General Purpose or Business Critical service tier based on the business need.
Basic
It's used for the target workloads that require light compute and I/O performance.
General Purpose
It's suited for most business workloads that require balanced compute and
memory with scalable I/O throughput.
Memory Optimized
Azure resources
Choosing regions
An Azure region is one data center or a collection of data centers that contains the
infrastructure to run and host different Azure services. This infrastructure includes a
large number of nodes that function as compute nodes or storage nodes, or run network
functionality. Not all regions offer the same services.
SAP BI Platform contains different components that might require specific VM types,
storage like Azure Files or Azure NetApp Files, or Database-as-a-Service (DBaaS) for its
data tier, which might not be available in certain regions. You can find exact
information on VM types, Azure Storage types, and other Azure services on the Products
available by region site. If you're already running your SAP systems on Azure, you
probably have your region identified. In that case, you first need to verify that the
necessary services are available in those regions before you decide on the architecture
of the SAP BI Platform.
Virtual machine scale sets with flexible orchestration
Virtual machine scale sets with flexible orchestration provide a logical grouping of
platform-managed virtual machines. You have the option to create a scale set within a
region or span it across availability zones. When you create a flexible scale set within
a region with platformFaultDomainCount > 1 (FD>1), the VMs deployed in the scale set are
distributed across the specified number of fault domains in the same region. On the
other hand, creating a flexible scale set across availability zones with
platformFaultDomainCount = 1 (FD=1) distributes the VMs across the specified zones, and
the scale set also distributes VMs across different fault domains within each zone on a
best-effort basis.
For SAP workloads, only flexible scale sets with FD=1 are supported. The advantage of
using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional
availability zone deployment, is that the VMs deployed with the scale set are
distributed across different fault domains within the zone in a best-effort manner. To
learn more about SAP workload deployment with scale sets, see the flexible virtual
machine scale set deployment guide.
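Creating such a scale set can be sketched with Azure CLI. This is a hedged example only: the resource group, scale set name, image alias, and zones are placeholders, and the current az vmss create options should be checked before use:

```shell
# Sketch: flexible scale set spanning zones with FD=1 (placeholder values).
az vmss create \
  --resource-group myRG \
  --name bobi-flex-vmss \
  --orchestration-mode Flexible \
  --platform-fault-domain-count 1 \
  --zones 1 2 3 \
  --image Ubuntu2204 \
  --instance-count 2 \
  --generate-ssh-keys
```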
Availability zones
Availability Zones are physically separate locations within an Azure region. Each
Availability Zone is made of one or more datacenters equipped with independent
power, cooling, and networking.
To achieve high availability on each tier of the SAP BI Platform, you can distribute VMs
across Availability Zones by implementing a high-availability framework, which can
provide the best SLA in Azure. For virtual machine SLAs in Azure, check the latest
version of Virtual Machine SLAs .
For the data tier, Azure Database-as-a-Service (DBaaS) offerings provide a high
availability framework by default. You just need to select the region, and the service's
inherent high availability, redundancy, and resiliency capabilities mitigate database
downtime from planned and unplanned outages, without requiring you to configure any
additional components. For more details on the SLAs for the supported DBaaS offerings
on Azure, check High availability in Azure Database for MySQL and High availability for
Azure SQL Database.
Availability sets
An availability set is a logical grouping capability for isolating virtual machine (VM)
resources from each other when they're deployed. Azure makes sure that the VMs you place
within an availability set run across multiple physical servers, compute racks, storage
units, and network switches. If a hardware or software failure happens, only a subset of
your VMs is affected, and your overall solution stays operational. When virtual machines
are placed in availability sets, the Azure Fabric Controller distributes the VMs over
different fault and update domains to prevent all VMs from being inaccessible because of
infrastructure maintenance or a failure within one fault domain.
SAP BI Platform contains many different components, and while designing the
architecture you have to make sure that each of these components is resilient to any
disruption. This can be achieved by placing the Azure virtual machines of each component
within availability sets. Keep in mind that when you mix VMs of different VM families
within one availability set, you might come across problems that prevent you from
including a certain VM type in such an availability set. So have separate availability
sets for the web application and the BI application of the SAP BI Platform, as
highlighted in the Architecture overview.
Also, the number of update and fault domains that can be used by an Azure availability
set within an Azure scale unit is finite. So if you keep adding VMs to a single
availability set, two or more VMs will eventually end up in the same fault or update
domain. For more information, see the Azure Availability Sets section of the Azure
virtual machines planning and implementation for SAP document.
To understand the concept of Azure availability sets and the way availability sets
relate to fault and update domains, read the Manage availability article.
) Important
The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, it's advised to use
flexible scale sets with FD=1 over a standard availability zone deployment.
Virtual machines
Azure Virtual Machine is a service offering that enables you to deploy custom images to
Azure as Infrastructure-as-a-Service (IaaS) instances. It simplifies maintaining and
operating applications by providing on-demand compute and storage to host, scale,
and manage web application and connected applications.
Azure offers a variety of virtual machines for all your application needs. But for SAP
workloads, Azure has narrowed the selection to different VM families that are suitable
for SAP workloads, and for SAP HANA workloads more specifically. For more insight, check
What SAP software is supported for Azure deployments.
Based on the SAP BI Platform sizing, you need to map your requirements to Azure virtual
machine types that are supported in Azure for SAP products. SAP Note 1928533 is a good
starting point that lists the supported Azure VM types for SAP products on Windows and
Linux. Also keep in mind that, beyond the selection of purely supported VM types, you
need to check whether those VM types are available in your specific region. You can
check the availability of VM types on the Products available by region page. For
choosing the pricing model, you can refer to Azure virtual machines for SAP workload.
Storage
Azure Storage is an Azure-managed cloud service that provides storage that is highly
available, secure, durable, scalable, and redundant. Some of the storage types have
limited use for SAP scenarios, but several Azure Storage types are well suited or
optimized for specific SAP workload scenarios. For more information, refer to the Azure
Storage types for SAP workload guide, which highlights the different storage options
that are suited for SAP.
Azure Storage has different storage types available for customers; details can be found
in the article What disk types are available in Azure?. The SAP BOBI Platform uses the
following Azure Storage types to build the application:
Azure-managed disks
It's a block-level storage volume that is managed by Azure. You can use the disks
for SAP BOBI Platform application servers and databases when they're installed on Azure
virtual machines. There are different types of Azure managed disks available, but
it's recommended to use Premium SSDs for the SAP BOBI Platform application and
database.
In the example below, Premium SSDs are used for the BOBI Platform installation
directory. For a database installed on a virtual machine, you can use managed disks for
the data and log volumes as per the guidelines. The CMS and Audit databases are
typically small, and they don't have the same storage performance requirements as other
SAP OLTP/OLAP databases.
In the SAP BOBI Platform, File Repository Server (FRS) refers to the disk directories
where contents like reports, universes, and connections are stored, which are used by
all application servers of that system. Azure Premium Files or Azure NetApp Files
storage can be used as a shared file system for the SAP BOBI application's FRS. As
these storage offerings aren't available in all regions, refer to the Products
available by region site for up-to-date information.
If the service is unavailable in your region, you can create an NFS server from which
you share the file system to the SAP BOBI application. But you'll also need to
consider its high availability.
Networking
SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data.
The system connects to other database servers, fetches all the data from them, and
provides insights to users. Azure provides a network infrastructure that allows the
mapping of all scenarios that can be realized with the SAP BI Platform, like connecting
to on-premises systems, systems in different virtual networks, and others. For more
information, check Microsoft Azure Networking for SAP Workload.
For Database-as-a-Service offering, any newly created database (Azure SQL Database or
Azure Database for MySQL) has a firewall that blocks all external connections. To allow
access to the DBaaS service from BI Platform virtual machines, you need to specify one
or more server-level firewall rules to enable access to your DBaaS server. For more
information, see Firewall rules for Azure Database for MySQL and Network Access
Controls section for Azure SQL database.
Next steps
SAP BusinessObjects BI Platform Deployment on Linux
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform
deployment guide for Windows on
Azure
Article • 06/16/2023
This article describes the strategy to deploy the SAP BusinessObjects Business
Intelligence (SAP BOBI) platform on Azure for Windows. In this example, two virtual
machines (VMs) with Azure Premium SSD managed disks as their installation directory
are configured. Azure SQL Database, a platform as a service (PaaS) offering, is used for
the central management server (CMS) and audit databases. Azure Premium Files, accessed
over the SMB protocol, is used as a file store that's shared across both VMs. The
default Tomcat
Java web application and business intelligence (BI) platform application are installed
together on both VMs. To load balance the user requests, Azure Application Gateway is
used, which has native TLS/SSL offloading capabilities.
Don't use a single subnet for all Azure services in an SAP BI platform
deployment. Based on SAP BI platform architecture, you might need to create
multiple subnets. In this deployment, we'll create two subnets: a BI
application subnet and an Application Gateway subnet.
Follow SAP Note 2276646 to identify ports for SAP BOBI platform
communication across different components.
SQL Database communicates over port 1433. Outbound traffic over port 1433
should be allowed from your SAP BOBI application servers.
In Azure, Application Gateway must be on a separate subnet. For more
information, see Application Gateway configuration overview.
If you're using Azure NetApp Files for a file store instead of Azure Files, create
a separate subnet for Azure NetApp Files. For more information, see
Guidelines for Azure NetApp Files network planning.
You can either use a custom image or choose an image from Azure
Marketplace. Based on your need, see Deploy a VM from Azure Marketplace
for SAP or Deploy a VM with a custom image for SAP.
6. Add one Premium SSD disk. It will be used as an SAP BOBI installation directory.
Azure Files offers standard file shares hosted on HDD-based hardware and premium file
shares hosted on SSD-based hardware. For an SAP BusinessObjects file store, use Azure
Premium Files.
Azure premium file shares are available with local and zone redundancy in a subset of
regions. To find out if premium file shares are currently available in your region, see
Products available by region . For information about regions that support zone-
redundant storage (ZRS), see Azure Storage redundancy.
7 Note
FileStorage accounts can only be used to store Azure file shares. No other storage
resources, such as blobs, containers, queues, or tables, can be deployed in a
FileStorage account.
The storage account will be accessed via private endpoint and deployed in the same
virtual network of an SAP BOBI platform. With this setup, the traffic from your SAP
system never leaves the virtual network security boundaries. SAP systems often contain
sensitive and business-critical data, so staying within the boundaries of the virtual
network is an important security consideration for many customers.
If you need to access the storage account from a different virtual network, you can use
Azure Virtual Network peering.
1. To create a storage account via the Azure portal, select Create a resource >
Storage > Storage account.
2. On the Basics tab, complete all required fields to create a storage account:
b. Enter the Storage account name. For example, enter azusbobi. This name must
be globally unique, but otherwise you can provide any name you want.
c. Select Premium as the performance tier, and select FileStorage as the account
kind.
d. For Replication label, choose a redundancy level. Select Locally redundant
storage (LRS).
For Premium FileStorage, ZRS and LRS are the only options available. Based on
your VM deployment strategy (flexible scale set, availability zone or availability
set), choose the appropriate redundancy level. For more information, see Azure
Storage redundancy.
e. Select Next.
3. On the Networking tab, select private endpoint as the connectivity method. For
more information, see Azure Files networking considerations.
c. Enter the Name of the private endpoint. For example, enter azusbobi-pe.
e. In the Networking section, select the Virtual network and Subnet on which the
SAP BusinessObjects BI application is deployed.
f. Accept the default (yes) for Integrate with private DNS zone.
4. On the Data protection tab, configure the soft-delete policy for Azure file shares in
your storage account. By default, soft-delete functionality is turned off. To learn
more about soft delete, see Prevent accidental deletion of Azure file shares.
The Secure transfer required field indicates whether the storage account requires
encryption in transit for communication to the storage account. If you require SMB
2.1 support, you must disable this field. For the SAP BOBI platform, keep it default
(enabled).
For details on how to create a storage account, see Create a FileStorage storage
account.
Create Azure file shares
The next step is to create Azure file shares in the storage account. Azure Files uses a
provisioned model for premium file shares. In a provisioned business model, you
proactively specify to Azure Files what your storage requirements are, rather than
being billed based on what you use. To understand more about this model, see
Provisioned model. In this example, we create two Azure file shares, frsinput (256 GB)
and frsoutput (256 GB), for the SAP BOBI file store.
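The two file shares can also be created with Azure CLI. This is a sketch assuming the azusbobi storage account from this example and a placeholder resource group; the quota is the provisioned share size in GiB:

```shell
# Create the two premium file shares named in the text (placeholder resource group).
az storage share-rm create --resource-group myRG \
  --storage-account azusbobi --name frsinput --quota 256
az storage share-rm create --resource-group myRG \
  --storage-account azusbobi --name frsoutput --quota 256
```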
In this example, an SAP BOBI application is installed on a separate partition (F:). Initialize
the Premium SSD disk that you attached during the VM provisioning:
[A] To mount the Azure file share, follow the steps in Mount the Azure file share.
To mount an Azure file share on a Windows server, the SMB protocol requires TCP port
445 to be open. Connections will fail if port 445 is blocked. You can check if your firewall
or ISP is blocking port 445 by using the Test-NetConnection cmdlet. See Port 445 is
blocked.
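If Test-NetConnection isn't at hand, a cross-platform sketch of the same check is a plain TCP connect on port 445; the host name in the commented example is a placeholder:

```python
import socket

def port_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder storage endpoint):
# print(port_reachable("<storage-account>.file.core.windows.net"))
```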
The guidelines are applicable only if you're using SQL Database. For other databases,
see SAP or database-specific documentation for instructions.
2. Under SQL databases, change Resource type to Database server. Select Create.
3. On the Basics tab, fill in all the required fields to Create SQL Database Server:
b. Enter a Server name. For example, enter azussqlbodb. The server name must be
globally unique, but otherwise, you can provide any name you want.
d. Enter the Server admin login. For example, enter boadmin. Then enter a
Password.
4. On the Networking tab, change Allow Azure services and resources to access this
server to No under Firewall rules.
In the next step, create the CMS and the audit databases in the SQL Database server
(azussqlbodb.database.windows.net).
3. On the Networking tab, select private endpoint for the connectivity method. The
private endpoint will be used to access SQL Database within the configured virtual
network.
c. Enter the Name of the private endpoint. For example, enter azusbodb-pe.
e. In the Networking section, select the Virtual network and Subnet on which the
SAP BusinessObjects BI application is deployed.
Similarly, you can create the audit database. For example, enter boaudit.
1. See the CMS + Audit repository support by OS section in the Product Availability
Matrix (PAM) for SAP BusinessObjects BI platform to find out the database
connectors that are compatible with SQL Database.
2. Download the ODBC driver from the link. In this example, we download ODBC
Driver 13.1.
3. Install the ODBC driver on all BI servers (azuswinboap1 and azuswinboap2).
4. After you install the driver in azuswinboap1, go to Start > Windows
Administrative Tools > ODBC Data Sources (64-bit).
5. Go to the System DSN tab.
6. Select Add to create a connection to the CMS database.
7. Select ODBC Driver 13 for SQL Server, and select Finish.
8. Enter the information of your CMS database like the following, and select Next:
Name: The name of the database created in the section "Create the CMS and
the audit database." For example, enter bocms or boaudit.
Description: A description that describes the data source. For example, enter
CMS database or Audit database.
Server: The name of the server created in the section "Create a SQL Database
server." For example, enter azussqlbodb.database.windows.net.
9. Select With SQL Server authentication using a login ID and password entered by
user to verify authenticity to Azure SQL Server. Enter the user credential that was
created at the time of the SQL Database server creation. For example, enter
boadmin. Select Next.
10. Change the default database to bocms, and keep everything else as default. Select
Next.
11. Select the Use strong encryption for data checkbox, and keep everything else as
default. Select Finish.
12. The data source to the CMS database has been created. Now you can select Test
Data Source to validate the connection to the CMS database from the BI
application. It should complete successfully. If it fails, troubleshoot the connectivity
issue.
Note
SQL Database communicates over port 1433. Outbound traffic over port 1433
should be allowed from your SAP BOBI application servers.
Repeat the preceding steps to create a connection for the audit database on the server
azuswinboap1. Then install and configure both ODBC data sources (bocms and
boaudit) on the remaining BI application server (azuswinboap2).
Server preparation
Follow the latest guide by SAP to prepare servers for the installation of the BI platform.
For the most up-to-date information, see the "Preparation" section in the SAP Business
Intelligence Platform Installation Guide for Windows .
Installation
To install the BI platform on a Windows host, sign in with a user that has local
administrative privileges.
Follow the instructions in the SAP Business Intelligence Platform Installation Guide for
Windows that are specific to your version. Here are a few points to note while you
install the SAP BOBI platform on Windows:
On the Configure Destination Folder screen, provide the destination folder where
you want to install the BI platform. For example, enter F:\SAP BusinessObjects*.
On the Configure Product Registration screen, you can either use a temporary
license key for SAP BusinessObjects Solutions from SAP Note 1288121 or
generate a license key in SAP Service Marketplace.
On the Select Install Type screen, select Full installation on the first server
(azuswinboap1). For the other server (azuswinboap2), select Custom / Expand,
which expands the existing SAP BOBI setup.
On the Select Default or Existing Database screen, select configure an existing
database, which prompts you to select the CMS and the audit database. Select
Microsoft SQL Server using ODBC for the CMS Database type and the Audit
Database type.
You can also select No auditing database if you don't want to configure auditing
during installation.
Select the appropriate options on the Select Java Web Application Server screen
based on your SAP BOBI architecture. In this example, we've selected option 1,
which installs a Tomcat server on the same SAP BOBI platform.
For a multi-instance deployment, run the installation setup on the second host
(azuswinboap2). In the Select Install Type screen, select Custom / Expand, which
expands the existing SAP BOBI setup. For more information, see the SAP blog SAP
BusinessObjects Business Intelligence platform setup with Azure SQL Database .
Important
The database engine version numbers for SQL Server and SQL Database aren't
comparable with each other. They're internal build numbers for these separate
products. The database engine for SQL Database is based on the same code base
as the SQL Server database engine. Most importantly, the database engine in SQL
Database always has the newest SQL Database engine bits. Version 12 of SQL
Database is newer than version 15 of SQL Server.
To find the current SQL Database version, you can either check the settings of the
Central Management Console (CMC) or run a query like the following by using sqlcmd or SQL
Server Management Studio. The alignment of SQL versions to default compatibility levels can
be found in the database compatibility level article.
SQL
SELECT @@VERSION;
(1 rows affected)
SELECT name, compatibility_level FROM sys.databases;
(3 rows affected)
Post installation
After a multi-instance installation of the SAP BOBI platform, more post-configuration
steps need to be performed to support application high availability.
To configure the cluster name on Windows, follow the instructions in the SAP Business
Intelligence Platform Administrator Guide . After you configure the cluster name,
follow SAP Note 1660440 to set the default system entry on the CMC or BI Launchpad
sign-in page.
1. If not created, follow the instructions provided in the preceding section, "Provision
Azure Premium Files," to create and mount Azure Premium Files.
2. Follow SAP Note 2512660 to change the path of the file repository (Input and
Output).
SAP Note 2808640 describes how to configure Tomcat clustering by using multicast,
but multicast isn't supported in Azure. To make a Tomcat cluster work in
Azure, you must use StaticMembershipInterceptor (SAP Note 2764907 ). To set up a
Tomcat cluster in Azure, see Tomcat clustering using static membership for the SAP
BusinessObjects BI platform on the SAP blog.
In the following figure, see the "Internal Load Balancer" section where the web
application server runs on port 8080 (default Tomcat HTTP port), which will be
monitored by a health probe. Incoming user requests are redirected to the web
application servers (azuswinboap1 or azuswinboap2) in the
back-end pool. Load Balancer doesn't support TLS/SSL termination, which is also
known as TLS/SSL offloading. If you're using Load Balancer to distribute traffic
across web servers, we recommend using Standard Load Balancer.
Note
When VMs without public IP addresses are placed in the back-end pool of
internal (no public IP address) Standard Load Balancer, there will be no
outbound internet connectivity, unless additional configuration is performed
to allow routing to public endpoints. For information on how to achieve
outbound connectivity, see Public endpoint connectivity for virtual machines
using Azure Standard Load Balancer in SAP high-availability scenarios.
Application Gateway provides an application delivery controller as a service, which
is used to help applications direct user traffic to one or more web application
servers. It offers various layer 7 load-balancing capabilities like TLS/SSL offloading,
Web Application Firewall, and cookie-based session affinity for your applications.
To configure Application Gateway for an SAP BOBI web server, see Load balancing
SAP BOBI web servers by using Application Gateway on the SAP blog.
Note
Use Application Gateway to load balance the traffic to the web server because
it provides features like SSL offloading, centralized SSL management to
reduce encryption and decryption overhead on the server, round-robin
algorithms to distribute traffic, Web Application Firewall capabilities, and high
availability.
This guide explores how features native to Azure in combination with an SAP BOBI
platform configuration improve the availability of an SAP deployment. This section
focuses on the following options for SAP BOBI platform reliability on Azure:
Backup and restore: This process creates periodic copies of data and applications
to separate locations. If the original data or applications are lost or damaged, the
copies can be used to restore or recover to the previous state.
High availability: A high-availability platform has at least two of everything within
an Azure region to keep the application operational if one of the servers becomes
unavailable.
Disaster recovery (DR): This process restores your application functionality if there
are any catastrophic losses. For example, an entire Azure region might become
unavailable because of a natural disaster.
Implementation of this solution varies based on the nature of the system set up in
Azure. You need to tailor your backup and restore, high-availability, and DR solutions
based on your business requirements.
To develop a comprehensive backup and restore strategy for an SAP BOBI platform,
identify the components that lead to system downtime or disruption in the application.
In an SAP BOBI platform, backup of the following components is vital to protect the
application:
The following section describes how to implement a backup and restore strategy for
each component on an SAP BOBI platform.
Azure NetApp Files: For Azure NetApp Files, you can create on-demand snapshots
and schedule an automatic snapshot by using snapshot policies. Snapshot copies
provide a point-in-time copy of your Azure NetApp Files volume. For more
information, see Manage snapshots by using Azure NetApp Files.
Azure Files: Azure Files backup is integrated with a native instance of Backup,
which centralizes the backup and restore function along with VM backup and
simplifies operation work. For more information, see Azure file share backup and
FAQs: Back up Azure Files.
If you've created a separate NFS server, make sure you implement a backup and
restore strategy for it as well.
SQL Database uses SQL Server technology to create full backups every week,
differential backups every 12 to 24 hours, and transaction log backups every 5 to
10 minutes. The frequency of transaction log backups is based on the compute
size and the amount of database activity.
You can configure backup storage redundancy to use locally redundant (LRS), zone-
redundant (ZRS), or geo-redundant (GRS) blobs. Storage redundancy mechanisms store
multiple copies of your data to protect it from planned and unplanned events, including
transient hardware failures, network or power outages, and natural disasters. By
default, SQL Database stores backups in GRS blobs that are replicated to a paired
region. You can change this to either LRS or ZRS
blobs. For more up-to-date information on SQL Database backup scheduling,
retention, and storage consumption, see Automated backups: Azure SQL Database
and Azure SQL Managed Instance.
Azure Database for MySQL automatically creates server backups and stores them in
user-configured locally redundant or geo-redundant storage. Azure Database for MySQL takes backups of the data
files and the transaction log. Depending on the supported maximum storage size,
it either takes full and differential backups (4-TB max storage servers) or snapshot
backups (up to 16-TB max storage servers). These backups allow you to restore a
server at any point in time within your configured backup retention period. The
default backup retention period is 7 days, which you can optionally configure up to
35 days. All backups are encrypted by using AES 256-bit encryption. These backup
files aren't user exposed and can't be exported. These backups can only be used
for restore operations in Azure Database for MySQL. You can use mysqldump to
copy a database. For more information, see Backup and restore in Azure Database
for MySQL.
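The mysqldump copy mentioned above might look like the following. This is a sketch with placeholder server, user, and database names; the command is built and printed here rather than executed:

```shell
# Sketch: logical copy of the CMS database with mysqldump.
# Server, user, and database names below are placeholders from this example.
SERVER="azusbomysql.mysql.database.azure.com"
DB="cmsbl1"
# --single-transaction takes a consistent InnoDB snapshot without locking tables.
DUMP_CMD="mysqldump --host=${SERVER} --user=cmsadmin --password \
  --single-transaction --routines --triggers ${DB} > ${DB}_copy.sql"
echo "${DUMP_CMD}"
```

Restore the resulting dump file into the target server with the mysql client.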
For a database installed on an Azure VM, you can use standard backup tools or
Backup for supported databases. Also, if the Azure services and tools don't meet
your requirements, you can use supported third-party backup tools that provide an
agent for backup and recovery of all SAP BOBI platform components.
High availability
High availability refers to a set of technologies that can minimize IT disruptions by
providing business continuity of applications or services through redundant, fault-
tolerant, or failover-protected components inside the same datacenter. In our case, the
datacenters are within one Azure region. The article High-availability architecture and
scenarios for SAP provides insight on different high-availability techniques and
recommendations offered on Azure for SAP applications, which complement the
instructions in this section.
Based on the sizing result of the SAP BOBI platform, you need to design the landscape
and determine the distribution of BI components across Azure VMs and subnets. The
level of redundancy in the distributed architecture depends on the business-required
recovery time objective (RTO) and recovery point objective (RPO). The SAP BOBI
platform includes different tiers, and components on each tier should be designed to
achieve redundancy. Then if one component fails, there's little to no disruption to your
SAP BOBI application. For example:
The following section describes how to achieve high availability on each component of
an SAP BOBI platform.
Currently, not all Azure regions offer availability zones, so you need to adopt the
deployment strategy based on your region. The Azure regions that offer zones are listed
in Azure availability zones.
Important
The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, we advise using a flexible
scale set with FD=1 rather than a standard availability zone deployment.
For other database management system (DBMS) deployment for a CMS database, see
DBMS deployment guides for SAP workload for insight on a different DBMS deployment
and its approach to achieving high availability.
For an SAP BOBI platform running on Windows, you can either choose Azure Premium
Files or Azure NetApp Files for filestore, which is designed to be highly available and
highly durable in nature. Azure Premium Files support ZRS, which can be useful for
cross-zone deployment of an SAP BOBI platform. For more information, see the
Redundancy section for Azure Files.
Because the file share service isn't available in all regions, make sure you see the list of
products available by region to find up-to-date information. If the service isn't
available in your region, you can create an NFS server from which you can share the file
system to an SAP BOBI application. But you'll also need to consider its high availability.
In the following figure, the incoming traffic (HTTPS - TCP/443) is load balanced by using
Application Gateway v2 SKU, which spans multiple availability zones. The application
gateway distributes the user request across web servers, which are distributed across
availability zones. The web server forwards the request to management and processing
server instances that are deployed in separate VMs across availability zones. Azure
Premium Files shares with ZRS are attached via private link to management and storage tier
VMs to access the contents like reports, universe, and connections. The application
accesses the CMS and audit database running on a zone-redundant instance of SQL
Database, which replicates databases across multiple physical locations within an Azure
region.
The preceding architecture provides insight on how an SAP BOBI deployment on Azure
can be done. But it doesn't cover all possible configuration options for an SAP BOBI
platform on Azure. You can tailor your deployment based on your business
requirements by choosing different products or services for components like Load
Balancer, File Repository Server, and DBMS.
If availability zones aren't available in your selected region, you can deploy Azure VMs in
availability sets. Azure makes sure the VMs you place within an availability set run across
multiple physical servers, compute racks, storage units, and network switches. If
hardware or software failure occurs, only a subset of your VMs is affected and the
overall solution stays operational.
Disaster recovery
This section explains the strategy to provide DR protection for an SAP BOBI platform. It
complements the Disaster recovery for SAP document, which represents the primary
resource for an overall SAP DR approach. For the SAP BOBI platform, see SAP Note
2056228 , which describes the following methods to implement a DR environment
safely:
In this guide, we'll talk about the second option to implement a DR environment. We
won't cover an exhaustive list of all possible configuration options for DR. We'll cover a
solution that features native Azure services in combination with SAP BOBI platform
configuration.
Filestore
Filestore is a disk directory where the actual files like reports and BI documents are
stored. It's important that all the files in the filestore are in sync to the DR region. Based
on the type of file share service you use for the SAP BOBI platform running on Windows,
the necessary DR strategy needs to be adopted to sync the content. For example:
Azure Premium Files only supports LRS and ZRS. For Azure Premium Files DR
strategy, you can use AzCopy or Azure PowerShell to copy your files to another
storage account in a different region. For more information, see Disaster recovery
and storage account failover.
Azure NetApp Files provides NFS and SMB volumes, so any file-based copy tool
can be used to replicate data between Azure regions. For more information on
how to copy Azure NetApp Files volume in another region, see FAQs about Azure
NetApp Files.
You can use Azure NetApp Files Cross-Region Replication, currently in preview ,
which uses NetApp SnapMirror technology. With this technology, only changed
blocks are sent over the network in a compressed, efficient format. This proprietary
technology minimizes the amount of data required to replicate across the regions,
which saves data transfer costs. It also shortens the replication time so that you
can achieve a smaller RPO. For more information, see Requirements and
considerations for using cross-region replication.
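The AzCopy approach for Azure Premium Files mentioned above can be sketched as follows. The storage account and share names are placeholders, and `<SAS>` stands for a real SAS token; the command is printed rather than executed here:

```shell
# Sketch: copy an Azure file share to a storage account in the DR region
# with AzCopy. Account names and SAS tokens below are placeholders.
SRC="https://primarystg.file.core.windows.net/frsinput?<SAS>"
DST="https://drstg.file.core.windows.net/frsinput?<SAS>"
echo azcopy copy "${SRC}" "${DST}" --recursive
```

Schedule the copy (for example, with cron) at an interval that satisfies your RPO.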
CMS database
The CMS and audit database in the DR region must be a copy of the databases running
in the primary region. Based on the database type, it's important to copy the database
to a DR region based on business-required RTO and RPO. This section describes
different options available for each database solution in Azure that's supported for an
SAP BOBI application running on Windows.
For a SQL Database DR strategy, two options are available to copy the database to the
secondary region. Both recovery options offer different levels of RTO and RPO. For more
information on the RTO and RPO for each recovery option, see Recover a database to an
existing server.
By default, SQL Database stores data in GRS blobs that are replicated to a paired region.
For a SQL database, the backup storage redundancy can be configured at the time of
CMS and audit database creation, or it can be updated for an existing database. The
changes made to an existing database apply to future backups only. You can restore a
database to any server in any Azure region from the most recent geo-replicated
backups. Geo-restore uses a geo-replicated backup as its source. There's a delay
between when a backup is taken and when it's geo-replicated to an Azure blob in a
different region. As a result, the restored database can be up to one hour behind the
original database.
Geo-replication is a SQL Database feature that allows you to create readable secondary
databases of individual databases on a server in the same or different region. If geo-
replication is enabled for the CMS and audit database, the application can initiate
failover to a secondary database in a different Azure region. Geo-replication is enabled
for individual databases, but to enable transparent and coordinated failover of multiple
databases (CMS and audit) for an SAP BOBI application, it's advisable to use an auto-
failover group. It provides the group semantics on top of active geo-replication, which
means the entire SQL server (all databases) is replicated to another region instead of
individual databases. Check the capabilities table that compares geo-replication with
failover groups.
Auto-failover groups provide read/write and read-only listener endpoints that remain
unchanged during failover. The read/write endpoint can be maintained as a listener in
the ODBC connection entry for the CMS and audit database. So whether you use
manual or automatic failover activation, failover switches all secondary databases in the
group to primary. After the database failover is completed, the DNS record is
automatically updated to redirect the endpoints to the new region. The application is
automatically connected to the CMS database as the read/write endpoint is maintained
as a listener in the ODBC connection.
In the following image, an auto-failover group for the SQL server (azussqlbodb) running
on the East US 2 region is replicated to the East US secondary region (DR site). The
read/write listener endpoint is maintained as a listener in an ODBC connection for the BI
application server running on Windows. After failover, the endpoint will remain the
same. No manual intervention is required to connect the BI application to the SQL
database on the secondary region.
This option provides a lower RTO and RPO than option 1. For more information about
this option, see Use auto-failover groups to enable transparent and coordinated failover
of multiple databases.
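As a sketch, creating such an auto-failover group with the Azure CLI could look like the following. The resource group and partner server names are assumptions for this example; the command is shown, not run:

```shell
# Sketch: auto-failover group covering the CMS and audit databases.
# Resource group and DR server names are assumed, not from the original example.
RG="bobi-rg"
PRIMARY="azussqlbodb"
PARTNER="azussqlbodb-dr"   # secondary logical server in the DR region
echo az sql failover-group create \
  --name bodb-fg \
  --resource-group "${RG}" \
  --server "${PRIMARY}" \
  --partner-server "${PARTNER}" \
  --add-db bocms boaudit
```

After creation, point the ODBC connection at the failover group's read/write listener endpoint instead of the primary server name.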
Azure Database for MySQL provides options to recover a database if there's a disaster.
Choose the appropriate option that works for your business:
Use the Azure Database for MySQL geo-restore feature that restores the server by
using geo-redundant backups. These backups are accessible even when the region
on which your server is hosted is offline. You can restore from these backups to
any other region and bring your server back online.
Important
Geo-restore is only possible if you provisioned the server with geo-redundant
backup storage. Changing the backup redundancy options after server
creation isn't supported. For more information, see Backup redundancy.
The following list summarizes the DR recommendation for each tier used in this example.
Azure NetApp Files: File-based copy tool to replicate data to a secondary region, or
Azure NetApp Files Cross-Region Replication (preview).
Azure Database for MySQL: Cross-region read replicas, or restore from geo-redundant
backups.
Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP BusinessObjects BI platform
deployment guide for Linux on Azure
Article • 06/15/2023
This article describes the strategy to deploy SAP BusinessObjects BI (BOBI) platform on
Azure for Linux. In this example, you configure two virtual machines with premium solid-
state drive (SSD) managed disks as the install directory. You use Azure Database for
MySQL for your CMS database, and you share Azure NetApp Files for your file
repository server across both servers. On both virtual machines, you install the default
Tomcat Java web application and BI platform application together. To load-balance user
requests, you use Azure Application Gateway with native TLS/SSL offloading capabilities.
/usr/sap: The file system for installation of the SAP BOBI instance, the default
Tomcat web application, and the database drivers (if necessary). Size: per SAP
sizing guidelines. Owner: bl1adm. Group: sapsys. Storage: managed premium SSD disk.
Important
While the setup of the SAP BusinessObjects platform is explained using Azure
NetApp Files, you could use NFS on Azure Files as the input and output file
repository.
Don't use a single subnet for all Azure services in the SAP BI platform
deployment. Based on SAP BI platform architecture, you need to create
multiple subnets. In this deployment, you create three subnets: one each for
the application, the file repository store, and Application Gateway.
In Azure, Application Gateway and Azure NetApp Files must always be on a
separate subnet. For more information, see Azure Application Gateway and
Guidelines for Azure NetApp Files network planning.
You can either use a custom image or choose an image from Azure
Marketplace. For more information, see Deploying a VM from the Azure
Marketplace for SAP or Deploying a VM with a custom image for SAP .
6. Add one premium SSD disk. You'll use it as your SAP BOBI Installation directory.
Azure NetApp Files is available in several Azure regions. Use Azure NetApp Files
availability by Azure Region to check whether your selected region offers the service.
2. Set up an Azure NetApp Files capacity pool. The SAP BI platform architecture
presented in this article uses a single Azure NetApp Files capacity pool at the
Premium service level. For SAP BI File Repository Server on Azure, we recommend
using the Azure NetApp Files Premium or Ultra service level.
You can deploy the volumes as NFSv3 and NFSv4.1, because both protocols are
supported for the SAP BOBI platform. Deploy the volumes in their respective Azure
NetApp Files subnets. The IP addresses of the Azure NetApp Files volumes are
assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the
same Azure virtual network or in peered Azure virtual networks. For example, azusbobi-
frsinput and azusbobi-frsoutput are the volume names, and nfs://10.31.2.4/azusbobi-
frsinput and nfs://10.31.2.4/azusbobi-frsoutput are the file paths for the Azure NetApp
Files volumes.
Important considerations
As you're creating your Azure NetApp Files for SAP BOBI platform file repository server,
be aware of the following considerations:
The minimum capacity pool is 4 tebibytes (TiB). The capacity pool size can be
increased in 1 TiB increments.
The minimum volume size is 100 gibibytes (GiB).
Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes
will be mounted must be in the same Azure virtual network, or in peered virtual
networks in the same region. Azure NetApp Files access over virtual network
peering in the same region is supported. Azure NetApp Files access over global
peering isn't currently supported.
The selected virtual network must have a subnet that is delegated to Azure NetApp
Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service levels
for Azure NetApp Files. While sizing the SAP Azure NetApp volumes, make sure
that the resulting throughput meets the application requirements.
With the Azure NetApp Files export policy, you can control the allowed clients and the
access type (for example, read-write or read-only).
The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't
deployed in all availability zones in an Azure region. Be aware of the potential
latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP BI platform applications.
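As a rough illustration of the quota-to-throughput relationship noted above, the per-TiB rates documented for Azure NetApp Files service levels are Standard 16 MiB/s, Premium 64 MiB/s, and Ultra 128 MiB/s:

```shell
# Throughput limit = volume quota (TiB) x service-level rate (MiB/s per TiB).
quota_tib=2
premium_rate=64   # MiB/s per TiB at the Premium service level
echo "A ${quota_tib}-TiB Premium volume is limited to $((quota_tib * premium_rate)) MiB/s"
```

Size the volume quota so the resulting throughput meets the filestore's I/O requirements, even if the capacity itself isn't needed.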
Bash
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
└─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 32G 0 disk
└─sdb1 8:17 0 32G 0 part /mnt
sdc 8:32 0 128G 0 disk
sr0 11:0 1 628K 0 rom
# The 128-GB premium SSD attached to the virtual machine appears as device sdc
Bash
Bash
Bash
sudo blkid
Bash
Bash
sudo mount -a
sudo df -h
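The disk-initialization steps above typically follow this sequence. This is a sketch that assumes the data disk is /dev/sdc and the install directory is /usr/sap, as in this example; the destructive commands are printed here rather than executed, and must be run as root on the VM:

```shell
# Sketch: partition, format, and persistently mount the premium SSD.
# /dev/sdc and /usr/sap are assumptions taken from this example's layout.
DISK=/dev/sdc
MP=/usr/sap
cat <<EOF
sudo parted ${DISK} --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs ${DISK}1
sudo mkdir -p ${MP}
sudo blkid ${DISK}1        # note the UUID for /etc/fstab
echo "UUID=<uuid-from-blkid> ${MP} xfs defaults,nofail 0 2" | sudo tee -a /etc/fstab
sudo mount -a
EOF
```

Using the UUID (rather than /dev/sdc1) in /etc/fstab keeps the mount stable if device names change after a reboot.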
Bash
2. [A] Configure the client operating system to support NFSv4.1 Mount (only
applicable if using NFSv4.1).
If you're using Azure NetApp Files volumes with NFSv4.1 protocol, run the
following configuration on all VMs where Azure NetApp Files NFSv4.1 volumes
need to be mounted.
In this step, you need to verify NFS domain settings. Make sure that the domain is
configured as the default Azure NetApp Files domain ( defaultv4iddomain.com ), and
that the mapping is set to nobody .
Bash
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
Bash
Bash
Bash
sudo mount -a
sudo df -h
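The NFS domain change described above is an edit to /etc/idmapd.conf. The sketch below demonstrates the edit against a staged copy of the file; on a real VM, edit /etc/idmapd.conf as root and clear the idmap cache (for example, with nfsidmap -c) afterward:

```shell
# Sketch: set the default Azure NetApp Files NFSv4 domain in idmapd.conf.
# Demonstrated on a staged temporary copy; on the VM the file is /etc/idmapd.conf.
conf=$(mktemp)
printf '[General]\n#Domain = localdomain\n\n[Mapping]\nNobody-User = nobody\nNobody-Group = nobody\n' > "$conf"

# Uncomment/replace the Domain line with the default ANF domain.
sed -i 's/^#\{0,1\}Domain.*/Domain = defaultv4iddomain.com/' "$conf"

grep '^Domain' "$conf"   # Domain = defaultv4iddomain.com
```

The [Mapping] section already maps unresolvable users to nobody, which matches the setting this guide asks you to verify.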
Create a database
Sign in to the Azure portal, and follow the steps in Quickstart: Create an Azure Database
for MySQL server by using the Azure portal. Here are a few points to note while you're
provisioning Azure Database for MySQL:
Select the same region for Azure Database for MySQL as where your SAP BI
platform application servers are running.
Storage Autogrowth is enabled by default. Keep in mind that storage can only be
scaled-up, not down.
By default, Back up Retention Period is seven days. You can optionally configure it
up to 35 days.
Backups of Azure Database for MySQL are locally redundant by default. If you want
server backups in geo-redundant storage, select Geographically Redundant from
Backup Redundancy Options.
Important
Changing the Backup Redundancy Options after server creation isn't supported.
Note
The private link feature is only available for Azure Database for MySQL servers in
the General Purpose or Memory Optimized pricing tiers. Ensure that the database
server is in one of these pricing tiers.
7. In the Networking section, select the Virtual network and Subnet on which the
SAP BOBI application is deployed.
Note
If you have a network security group (NSG) enabled for the subnet, it will be
disabled for private endpoints on this subnet only. Other resources on the
subnet will still have NSG enforcement.
8. For Integrate with private DNS zone, accept the default (yes).
9. Select your private DNS zone from the dropdown list.
10. Select Review+Create, and create a private endpoint.
For more information, see Private Link for Azure Database for MySQL.
2. Connect to the server by using MySQL Workbench. Follow the instructions in Get
connection information. If the connection test is successful, you get the following
message:
3. In the SQL query tab, run the following query to create a schema for the CMS and
audit databases.
SQL
# Here cmsbl1 is the database name of the CMS database. You can provide the
# name you want for the CMS database.
CREATE SCHEMA `cmsbl1` DEFAULT CHARACTER SET utf8;
# auditbl1 is the database name of the Audit database. You can provide the
# name you want for the Audit database.
CREATE SCHEMA `auditbl1` DEFAULT CHARACTER SET utf8;
SQL
# Create a user that can connect from any host, use the '%' wildcard as
a host part
CREATE USER 'cmsadmin'@'%' IDENTIFIED BY 'password';
CREATE USER 'auditadmin'@'%' IDENTIFIED BY 'password';
SQL
# Grant each user full privileges on its own database.
GRANT ALL PRIVILEGES ON `cmsbl1`.* TO 'cmsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `auditbl1`.* TO 'auditadmin'@'%' WITH GRANT OPTION;
# Verify the grants:
USE sys;
SHOW GRANTS FOR 'cmsadmin'@'%';
+----------------------------------------------------------------------
--+
| Grants for cmsadmin@%
|
+----------------------------------------------------------------------
--+
| GRANT USAGE ON *.* TO `cmsadmin`@`%`
|
| GRANT ALL PRIVILEGES ON `cmsbl1`.* TO `cmsadmin`@`%` WITH GRANT
OPTION |
+----------------------------------------------------------------------
--+
USE sys;
SHOW GRANTS FOR 'auditadmin'@'%';
+----------------------------------------------------------------------
------+
| Grants for auditadmin@%
|
+----------------------------------------------------------------------
------+
| GRANT USAGE ON *.* TO `auditadmin`@`%`
|
| GRANT ALL PRIVILEGES ON `auditbl1`.* TO `auditadmin`@`%` WITH GRANT
OPTION |
+----------------------------------------------------------------------
------+
1. Refer to MySQL drivers and management tools compatible with Azure Database
for MySQL. Check for the MySQL Connector/C (libmysqlclient) driver in the article.
3. Select the operating system and download the shared component rpm package of
MySQL Connector. In this example, the mysql-connector-c-shared-6.1.11 connector
version is used.
Bash
# sample output
libmysqlclient: /usr/lib64/libmysqlclient.so
6. Set LD_LIBRARY_PATH to point to the /usr/lib64 directory for the user account that
will be used for installation.
Bash
# This configuration is for the bash shell. If you're using another shell
# for sidadm, set the environment variable accordingly.
vi /home/bl1adm/.bashrc
export LD_LIBRARY_PATH=/usr/lib64
Server preparation
The steps in this section use the following prefix: [A]: The step applies to all hosts.
1. [A] Based on the flavor of Linux (SLES or RHEL), you need to set kernel parameters
and install required libraries. Refer to the "System requirements" section in
Business Intelligence Platform Installation Guide for Unix .
2. [A] Ensure that the time zone on your machine is set correctly. In the Installation
Guide, see Additional Unix and Linux requirements .
3. [A] Create a user account (bl1adm) and group (sapsys) under which the software's
background processes can run. Use this account to run the installation and the
software. The account doesn't require root privileges.
4. [A] Set the user account (bl1adm) environment to use a supported UTF-8 locale,
and ensure that your console software supports UTF-8 character sets. To ensure
that your operating system uses the correct locale, set the LC_ALL and LANG
environment variables to your preferred locale in your (bl1adm) user environment.
Bash
# This configuration is for the bash shell. If you're using another shell
# for sidadm, set the environment variables accordingly.
vi /home/bl1adm/.bashrc
export LANG=en_US.utf8
export LC_ALL=en_US.utf8
Bash
root@azusbosl1:~> su - bl1adm
bl1adm@azusbosl1:~> ulimit -a
6. Download and extract media for SAP BusinessObjects BI platform from SAP Service
Marketplace.
Installation
Check the locale for user account bl1adm on the server:
Bash
bl1adm@azusbosl1:~> locale
LANG=en_US.utf8
LC_ALL=en_US.utf8
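The manual locale check above can also be scripted so that installation prep fails fast when the locale isn't UTF-8. This is a hedged sketch, not part of the official procedure; the locale value is hardcoded for the demo, whereas on the real host you would read `$LC_ALL` or the output of `locale` for the bl1adm user.

```shell
# Hedged sketch: verify that the configured locale is a UTF-8 one.
# Hardcoded here for the demo; on the real host use: loc="$LC_ALL"
loc="en_US.utf8"
case "$loc" in
  *[Uu][Tt][Ff]*8*) status="locale OK: $loc" ;;        # matches utf8 / UTF-8 variants
  *)                status="locale not UTF-8: $loc" ;;
esac
echo "$status"
```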
Go to the media of the SAP BOBI platform, and run the following command as the
bl1adm user:
Bash
Follow the SAP BOBI platform Installation Guide for Unix, specific to your version.
Here are a few points to note while you're installing the SAP BOBI platform:
On Configure Product Registration, you can either use a temporary license key for
SAP BusinessObjects Solutions from SAP Note 1288121 , or you can generate a
license key in SAP Service Marketplace.
On Select Install Type, select Full installation on the first server ( azusbosl1 ). For
the other server ( azusbosl2 ), select Custom / Expand, which will expand the
existing BOBI setup.
You can also select No auditing database, if you don’t want to configure auditing
during installation.
On the Select Java Web Application Server screen, select appropriate options based
on your SAP BOBI architecture. In this example, we selected option 1, which
installs a Tomcat server on the same SAP BOBI platform.
Follow the instructions and enter required inputs to complete the installation.
For multi-instance deployment, run the installation setup on a second host ( azusbosl2 ).
For Select Install Type, select Custom / Expand, which will expand the existing BOBI
setup.
In Azure Database for MySQL, a gateway redirects the connections to server instances.
After the connection is established, the MySQL client displays the version of MySQL set
in the gateway, not the actual version running on your MySQL server instance. To
determine the version of your MySQL server instance, use the SELECT VERSION();
command at the MySQL prompt. For more details, see Supported Azure Database for
MySQL server versions.
SQL
# Run a direct query to the database by using MySQL Workbench
SELECT VERSION();
+-----------+
| version() |
+-----------+
| 8.0.15 |
+-----------+
Post-installation
After a multi-instance installation of the SAP BOBI platform, you need to perform
additional post-configuration steps to support application high availability.
To configure the cluster name on Linux, follow the instructions in the SAP Business
Intelligence Platform Administrator Guide . After configuring the cluster name, follow
SAP Note 1660440 to set the default system entry on the CMC or BI launchpad sign-in
page.
1. If you haven't already created NFS volumes, create them in Azure NetApp Files.
(Follow the instructions in the earlier section "Provision Azure NetApp Files.")
2. Mount the NFS volume. (Follow the instructions in the earlier section "Mount the
Azure NetApp Files volume.")
3. Follow SAP Note 2512660 to change the path of file repository (both input and
output).
For example, suppose a user is connected to a web server that fails while the user is
navigating a folder hierarchy in a SAP BI application. With a correctly configured cluster,
the user can continue navigating the folder hierarchy without being redirected to the
sign-in page.
See SAP Note 2808640 for steps to configure Tomcat clustering by using multicast.
However, Azure doesn't support multicast, so to make the Tomcat cluster work in
Azure, you must use StaticMembershipInterceptor (SAP Note 2764907 ). For more
information, see the blog post Tomcat Clustering using Static Membership for SAP
BusinessObjects BI Platform .
Azure Load Balancer is a high-performance, low-latency, layer 4 (TCP, UDP) load balancer.
It distributes traffic among healthy virtual machines (VMs). A load balancer health probe
monitors a specified port on each VM, and only distributes traffic to operational VMs.
You can choose either a public load balancer or an internal load balancer, depending on
whether you want the SAP BI platform to be accessible from the internet. Standard Load
Balancer is zone redundant, ensuring high availability across availability zones.
In the following diagram, refer to the Internal Load Balancer section. The web
application server runs on port 8080, the default Tomcat HTTP port, which is
monitored by the health probe. Any incoming request from end users is redirected
to one of the web application servers ( azusbosl1 or azusbosl2 ). Load Balancer
doesn't support TLS/SSL termination (also known as TLS/SSL offloading). If you're using
Load Balancer to distribute traffic across web servers, use Standard Load Balancer.
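What the health probe effectively does can be sketched as a plain TCP connect against the Tomcat port. This is an illustrative sketch only: the host and port (8080, per this example architecture) are assumptions, and Azure Load Balancer performs this check natively, so you would never run this in production; it can, however, be handy when troubleshooting why a backend is marked unhealthy.

```shell
# Hedged sketch: a TCP-connect check equivalent to a port-based health probe.
probe_port() {
  # Succeeds only if a listener accepts a TCP connection on host:port.
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
if probe_port 127.0.0.1 8080; then state="healthy"; else state="unhealthy"; fi
echo "Tomcat probe on 8080: $state"
```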
Note
When VMs without public IP addresses are placed in the pool of internal (no public
IP address) Standard Load Balancer, there will be no outbound internet
connectivity, unless you perform additional configuration to allow routing to public
end points. For more information, see Public endpoint connectivity for Virtual
Machines using Azure Standard Load Balancer in SAP high-availability scenarios.
Azure Application Gateway
Azure Application Gateway provides an Application Delivery Controller (ADC) as a service.
This service helps the application direct user traffic to one or more web
application servers. It offers various layer 7 load-balancing capabilities, such as TLS/SSL
offloading, web application firewall (WAF), and cookie-based session affinity.
In SAP BI platform, Application Gateway directs application web traffic to the specified
resources, either azusbosl1 or azusbosl2 . You assign a listener to a port, create rules, and
add resources to a pool. In the following diagram, Application Gateway has a private IP
address (10.31.3.20) that acts as an entry point for users. It also handles incoming
TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL, and passes the
unencrypted request (HTTP - TCP/8080) on to the servers. Having to maintain just one
TLS/SSL certificate, on Application Gateway, simplifies operations.
To configure Application Gateway for a SAP BOBI web server, see the blog post Load
Balancing SAP BOBI Web Servers using Azure Application Gateway .
Note
Azure Application Gateway is preferable for load balancing traffic to a web server.
It provides helpful features, such as SSL offloading, centralized SSL management to
reduce encryption and decryption overhead on the server, a round-robin algorithm
to distribute traffic, WAF capabilities, and high availability.
This guide explores how features native to Azure, in combination with the SAP BOBI
platform configuration, improve the availability of the SAP deployment. This section
focuses on the following options:
Backup and restore: It's a process of creating periodic copies of data and
applications to a separate location. You can restore or recover to a previous state if
the original data or applications are lost or damaged.
High availability: A highly available platform has at least two of everything within
an Azure region, to keep the application operational if one of the servers becomes
unavailable.
Implementation of this solution varies based on the nature of the system setup in Azure.
Tailor your backup/restore, high availability, and disaster recovery solutions according to
your business requirements.
The following section describes how to implement a backup and restore strategy for
each of these components.
As part of the backup process, a snapshot is taken, and the data is transferred to the vault
with no impact on production workloads. For more information, see Snapshot
consistency. You can also choose to back up a subset of the data disks in your VM, by
using the selective disks backup and restore functionality. For more information, see
Azure VM Backup and FAQs - Backup Azure VMs.
Azure NetApp Files: You can create on-demand snapshots, and schedule
automatic snapshots by using snapshot policies. Snapshot copies provide a point-
in-time copy of your volume. For more information, see Manage snapshots by
using Azure NetApp Files.
If you have created a separate NFS server, make sure you implement the backup
and restore strategy for the same server.
Azure Database for MySQL automatically creates server backups, and stores them
in user-configured, locally redundant or geo-redundant storage. Azure Database
for MySQL takes backups of the data files and the transaction log. Depending on
the supported maximum storage size, it either takes full and differential backups (4
TB max storage servers), or snapshot backups (up to 16 TB max storage servers).
These backups allow you to restore a server to any point in time within your
configured backup retention period. The default backup retention period is seven
days, which you can optionally configure for up to 35 days. All backups are
encrypted by using AES 256-bit encryption. These backup files aren't user-exposed
and can't be exported. These backups can only be used for restore operations in
Azure Database for MySQL. You can use mysqldump to copy a database. For more
information, see Backup and restore in Azure Database for MySQL.
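Because the service-managed backups can't be exported, mysqldump is the tool for copying a database out of Azure Database for MySQL. The following sketch only composes the command for review; the server name is a placeholder and the user and schema names come from this guide's example, so adjust them to your environment before running the command interactively with `-p`.

```shell
# Hedged sketch: build a mysqldump command for copying the CMS schema.
# <server> is a placeholder; cmsadmin/cmsbl1 follow this guide's example.
SERVER="<server>.mysql.database.azure.com"
CMD="mysqldump --host=$SERVER --user=cmsadmin --single-transaction --databases cmsbl1 --result-file=cmsbl1.sql"
echo "$CMD"   # run interactively with -p appended to be prompted for the password
```

`--single-transaction` takes a consistent snapshot of InnoDB tables without locking them for the duration of the dump.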
For a database installed on an Azure virtual machine, you can use standard backup
tools or Azure Backup for supported databases. You can also use supported third-
party backup tools that provide an agent for backup and recovery of all SAP BOBI
platform components.
High availability
High availability refers to a set of technologies that can minimize IT disruptions by
providing business continuity of applications and services. It does so through redundant,
fault-tolerant, or failover-protected components inside the same datacenter. In our case,
the datacenters are within one Azure region. For more information, see High-availability
architecture and scenarios for SAP.
Based on the sizing result of the SAP BOBI platform, you need to design the landscape
and determine the distribution of BI components across Azure Virtual Machines and
subnets. The level of redundancy in the distributed architecture depends on the
recovery time objective (RTO) and recovery point objective (RPO) that you need for your
business. SAP BOBI platform includes different tiers, and components on each tier
should be designed to achieve redundancy. For example:
The following sections describe how to achieve high availability on each component of
the SAP BOBI platform.
To reduce the impact of downtime due to planned and unplanned events, it's a good
idea to follow the high availability architecture guidance.
For more information, see Manage the availability of Linux virtual machines.
Important
The concepts of Azure availability zones and Azure availability sets are
mutually exclusive. You can deploy a pair or multiple VMs into either a specific
availability zone or an availability set, but you can't do both.
If you're planning to deploy across availability zones, it's advised to use a
flexible scale set with FD=1 rather than a standard availability zone deployment.
For other deployments for the CMS database, see the high availability information in the
DBMS deployment guides for SAP Workload.
For SAP BOBI platform running on Linux, you can choose Azure Premium Files or Azure
NetApp Files for file shares, which are designed to be highly available and highly
durable. For more information, see Redundancy for Azure Files.
Note that this file share service isn't available in all regions. See Products available by
region to find up-to-date information. If the service isn't available in your region, you
can create an NFS server from which you can share the file system to the SAP BOBI
application. But you'll also need to consider its high availability.
For Application Gateway, high availability can be achieved based on the type of tier
selected during deployment.
v1 SKU supports high-availability scenarios when you've deployed two or more
instances. Azure distributes these instances across update and fault domains to
ensure that instances don't all fail at the same time. You achieve redundancy
within the zone.
v2 SKU automatically ensures that new instances are spread across fault
domains and update domains. If you choose zone redundancy, the newest
instances are also spread across availability zones to offer zonal failure
resiliency. For more details, see Autoscaling and Zone-redundant Application
Gateway v2.
Notice that the incoming traffic (HTTPS) is load-balanced by using Azure Application
Gateway v1/v2 SKU, which is highly available when deployed on two or more instances.
Multiple instances of the web server, management servers, and processing servers are
deployed in separate VMs to achieve redundancy. Azure NetApp Files has built-in
redundancy within the datacenter, so your Azure NetApp Files volumes for the file
repository server will be highly available. The CMS database is provisioned on Azure
Database for MySQL, which has inherent high availability. For more information, see
High availability in Azure Database for MySQL.
The preceding architecture provides insight into how a SAP BOBI deployment on Azure
can be done. But it doesn't cover all possible configuration options. You can tailor your
deployment based on your business requirements.
In several Azure regions, you can use availability zones. This means you can take
advantage of an independent power supply, cooling, and network, and deploy an
application across two or three availability zones. If you want to achieve high
availability across availability zones, you can deploy the SAP BOBI platform across
these zones, making sure that each component in the application is zone redundant.
Disaster recovery
This section explains the strategy for providing disaster recovery protection for a SAP BOBI
platform running on Linux. It complements the Disaster Recovery for SAP document,
which is the primary resource for the overall SAP disaster recovery approach.
For SAP BOBI, refer to SAP Note 2056228 , which describes the following methods to
implement a disaster recovery environment safely.
This guide focuses on the second option. It won't cover all possible configuration
options for disaster recovery, but does cover a solution that features native Azure
services in combination with a SAP BOBI platform configuration.
Load balancer
A load balancer is used to distribute traffic across web application servers of the SAP
BOBI platform. On Azure, you can either use Azure Load Balancer or Azure Application
Gateway for this purpose. To achieve disaster recovery for the load balancer services,
you need to implement another Azure Load Balancer or Azure Application Gateway on
the secondary region. To keep the same URL after a disaster recovery failover, you need
to change the entry in the DNS, pointing to the load-balancing service running on the
secondary region.
Azure NetApp Files provides NFS and SMB volumes, so you can use any file-based
copy tool to replicate data between Azure regions. For more information on how
to copy a volume in another region, see FAQs About Azure NetApp Files.
You can use Azure NetApp Files cross-region replication, currently in preview .
Only changed blocks are sent over the network in a compressed, efficient format.
This minimizes the amount of data required to replicate across the regions, saving
data transfer costs. It also shortens the replication time, so you can achieve a
smaller RPO. For more information, see Requirements and considerations for using
cross-region replication.
Azure Premium Files only supports locally redundant storage (LRS) and zone-redundant
storage (ZRS). For the disaster recovery strategy, you can use AzCopy or Azure
PowerShell to copy your files to another storage account in a different region. For
more information, see Disaster recovery and storage account failover.
Important
SMB Protocol for Azure Files is generally available, but NFS Protocol support
for Azure Files is currently in preview. For more information, see NFS 4.1
support for Azure Files is now in preview .
CMS database
The CMS and audit databases in the disaster recovery region must be a copy of the
databases running in the primary region. Depending on the database type, it's important
to copy the database to the disaster recovery region in line with the RTO and RPO that
your business requires.
Azure Database for MySQL provides multiple options to recover a database if there's a
disaster. Choose an appropriate option that works for your business.
Enable cross-region read replicas to enhance your business continuity and disaster
recovery planning. You can replicate from the source server to up to five replicas.
Read replicas are updated asynchronously by using MySQL's binary log replication
technology. Replicas are new servers that you manage similar to regular servers in
Azure Database for MySQL. For more information, see Read replicas in Azure
Database for MySQL.
Use the geo-restore feature to restore the server by using geo-redundant backups.
These backups are accessible even when the region on which your server is hosted
is offline. You can restore from these backups to any other region, and bring your
server back online.
The following table shows the recommendation for disaster recovery of each tier used in
this example.

| SAP BOBI platform tier | Recommendation |
| --- | --- |
| Azure NetApp Files | File-based copy tool to replicate data to a secondary region, or cross-region replication |
| Azure Database for MySQL | Cross-region read replicas, or restore backup from geo-redundant backups |
Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
Azure Storage types for SAP workload
Article • 07/13/2023
Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and prices. Some of
the storage types aren't usable, or are of limited use, for SAP scenarios, whereas several Azure storage types
are well suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure
storage types got certified for usage with SAP HANA. In this document, we go through the different types
of storage and describe their capability and usability with SAP workloads and SAP components.
A remark about the units used throughout this article: the public cloud vendors moved to using GiB
(gibibyte) or TiB (tebibyte) as size units, instead of gigabyte or terabyte. Therefore, all Azure
documentation and pricing use those units. Throughout the document, we're referencing the size
units of MiB, GiB, and TiB exclusively. You might need to plan with MB, GB, and TB. So, be aware of
some small differences in the calculations if you need to size for a 400 MiB/sec throughput, instead of a 250
MiB/sec throughput.
There are several more redundancy methods, which are all described in the article Azure Storage replication,
which applies to some of the different storage types Azure has to offer.
Note
When using Azure storage for storing database data and redo log files, LRS is the only supported
resiliency level at this point in time.
Also keep in mind that different Azure storage types influence the single VM availability SLAs as released in
SLA for Virtual Machines .
Note
New deployments of VMs that use Azure block storage for their disks (all Azure storage except
Azure NetApp Files and Azure Files) are required to use Azure managed disks for the base
VHD/OS disks and the data disks that store SAP database files. This applies independent of
whether you deploy the VMs through an availability set, across availability zones, or independent
of the sets and zones. Disks that are used for storing backups aren't necessarily required to be
managed disks.
The persisted base VHD of your VM, which holds the operating system and other software you install
on that disk. This disk/VHD is the root of your VM. Any changes made to it need to be persisted, so
that the next time you stop and restart the VM, all the changes made before still exist, especially in
cases where the VM gets deployed by Azure onto another host than the one it was running on originally.
Persisted data disks. These disks are VHDs you attach to store application data in. This application
data could be data and log/redo files of a database, backup files, or software installations. This means
any disk beyond your base VHD that holds the operating system.
File shares or shared disks that contain your global transport directory for NetWeaver or S/4HANA.
Content of those shares is either consumed by software running in multiple VMs or is used to build
high-availability failover cluster scenarios
The /sapmnt directory or common file shares for EDI processes or similar. Content of those shares is
either consumed by software running in multiple VMs or is used to build high-availability failover
cluster scenarios
In the next few sections, the different Azure storage types and their usability for the four SAP workload
scenarios are discussed. A general categorization of how the different Azure storage types should be used
is documented in the article What disk types are available in Azure?. The recommendations for using the
different Azure storage types for SAP workload aren't going to be majorly different.
For support restrictions on Azure storage types for SAP NetWeaver/application layer of S/4HANA, read the
SAP support note 2015553 . For SAP HANA certified and supported Azure storage types, read the article
SAP HANA Azure virtual machine storage configurations.
The sections describing the different Azure storage types will give you more background about the
restrictions and possibilities using the SAP supported storage.
| Usage scenario | Standard HDD | Standard SSD | Premium storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible | Not possible | Not possible |
| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended¹ | Recommended | Recommended | Recommended² | Not supported |
| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended² | Not supported |
| DBMS data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended¹ | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes

² Using ANF requires /hana/data and /hana/log to be on ANF

³ So far tested on SLES only
The characteristics you can expect from the different storage types are:

| Usage scenario | Standard HDD | Standard SSD | Premium storage | Premium SSD v2 | Ultra disk | Azure NetApp Files | Azure Premium Files |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | Disk type not supported | No³ | No |

¹ With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes

² Costs depend on provisioned IOPS and throughput

³ Creation of different ANF capacity pools doesn't guarantee deployment of capacity pools onto different
storage units
Important
Check out the Azure NetApp Files section of this document to find specifics around proximity
placement of NFS volumes and VMs when less than 1 millisecond latencies are required.
This type of storage targets DBMS workloads, storage traffic that requires low single-digit millisecond
latency, and SLAs on IOPS and throughput. The cost basis for Azure premium storage isn't the actual data
volume stored in such disks, but the size category of the disk, independent of the amount of data
that is stored within the disk. You also can create disks on premium storage that don't directly map
into the size categories shown in the article Premium SSD. Conclusions from this article are:
The storage is organized in ranges. For example, all disks in the range of 513 GiB to 1024 GiB capacity
share the same capabilities and the same monthly costs
The IOPS per GiB don't track linearly across the size categories. Smaller disks below 32 GiB have
higher IOPS rates per GiB. For disks beyond 32 GiB up to 1024 GiB, the IOPS rate per GiB is between 4-5
IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB goes below 1
The I/O throughput for this storage isn't linear with the size of the disk category. For smaller disks, like
the category between 65 GiB and 128 GiB capacity, the throughput is around 780 KB per GiB. Whereas
for the extreme large disks like a 32,767 GiB disk, the throughput is around 28 KB per GiB
The IOPS and throughput SLAs can't be changed without changing the capacity of the disk
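The IOPS-per-GiB claim above can be checked with quick arithmetic. The figures used here, the P30 (1024 GiB, 5,000 IOPS) and P80 (32,767 GiB, 20,000 IOPS) Premium SSD sizes, are published values that aren't stated in this document, so verify them against the current Azure pricing pages before sizing.

```shell
# Hedged arithmetic sketch: IOPS per GiB for a mid-size and the largest Premium SSD.
p30=$(awk 'BEGIN { printf "%.1f", 5000/1024 }')    # P30: ~4.9 IOPS/GiB (in the 4-5 range)
p80=$(awk 'BEGIN { printf "%.2f", 20000/32767 }')  # P80: ~0.61 IOPS/GiB (below 1)
echo "P30: $p30 IOPS/GiB, P80: $p80 IOPS/GiB"
```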
| Capability | Value | Notes |
| --- | --- | --- |
| Shares/shared disk | Not available | Needs Azure Premium Files or third party |
| Maximum IOPS per disk | 20,000, dependent on disk size | Also consider VM limits |
| Costs | Medium | - |
Azure premium storage doesn't fulfill SAP HANA storage latency KPIs with the common caching types
offered with Azure premium storage. In order to fulfill the storage latency KPIs for SAP HANA log writes,
you need to use Azure Write Accelerator caching as described in the article Enable Write Accelerator. Azure
Write Accelerator benefits all other DBMS systems for their transaction log writes and redo log writes.
Therefore, it's recommended to use it across all the SAP DBMS deployments. For SAP HANA, the usage of
Azure Write Accelerator for /hana/log with Azure premium storage is mandatory.
Summary: Azure premium storage is one of the Azure storage types recommended for SAP workload. This
recommendation applies to non-production and production systems. Azure premium storage is suited to
handle database workloads. The usage of Azure Write Accelerator improves write latency against
Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you
need to either overprovision storage capacity, or use functionality like Windows Storage
Spaces or logical volume managers in Linux to build stripe sets that give you the desired capacity on the
one side, but also the necessary IOPS or throughput, at the best cost efficiency.
Azure burst functionality for premium storage
For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The
exact way how disk bursting works is described in the article Disk bursting. When you read the article, you
understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the
nominal IOPS and throughput of the disks (for details on the nominal throughput see Managed Disk
pricing ). You're going to accrue the delta of IOPS and throughput between your current usage and the
nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
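The credit model described above can be sketched numerically: while actual I/O stays below the disk's nominal IOPS, the delta accrues as burst credit, and the bucket caps at 30 minutes of full-burst consumption. The figures below assume a P10 disk (500 nominal IOPS, 3,500 burst IOPS); these values aren't from this document, so check the Disk bursting article and pricing pages for current numbers.

```shell
# Hedged sketch of the burst credit model (P10 figures assumed, one idle hour).
nominal=500; burst=3500; used=100; idle_seconds=3600
accrued=$(( (nominal - used) * idle_seconds ))   # IOPS-seconds earned while under nominal
bucket=$(( (burst - nominal) * 30 * 60 ))        # bucket caps at 30 min of full burst
credits=$(( accrued < bucket ? accrued : bucket ))
full_burst_min=$(( credits / (burst - nominal) / 60 ))
echo "credits: $credits IOPS-seconds, enough for $full_burst_min min at full burst"
```

In this example, one idle hour earns 1,440,000 IOPS-seconds, well under the 5,400,000 cap, which funds about 8 minutes at full burst.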
The ideal candidates for planning in this burst functionality are likely the volumes or disks
that contain data files for the different DBMSs. The I/O workload expected against those volumes,
especially with small to mid-ranged systems, is expected to look like:
Low to moderate read workload since data ideally is cached in memory. Or like with SAP HANA should
be completely in memory
Bursts of write triggered by database checkpoints or savepoints that are issued regularly
Backup workload that reads in a continuous stream in cases where backups aren't executed via
storage snapshots
For SAP HANA, load of the data into memory after an instance restart
Especially on smaller DBMS systems where your workload is handling only a few hundred transactions per
second, such burst functionality can make sense as well for the disks or volumes that store the
transaction or redo log. The expected workload against such a disk or volume looks like:
Regular writes to the disk that are dependent on the workload and the nature of workload since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding indexes
Read bursts when performing transaction log or redo log backups
Submillisecond I/O latency for smaller read and write I/O sizes
SLAs for IOPS and throughput
Pay capacity by the provisioned GB
Provide a default set of IOPS and storage throughput per disk
Give the possibility to add more IOPS and throughput to each disk and pay separately for these extra
provisioned resources
Pass SAP HANA certification without the help of other functionality like Azure Write Accelerator or
other caches
This type of storage targets DBMS workloads and storage traffic that requires submillisecond latency and
SLAs on IOPS and throughput. Premium SSD v2 disks are delivered with a default set of 3,000 IOPS and
125 MBps throughput, and the possibility to add more IOPS and throughput to individual disks. The pricing
of the storage is structured in a way that adding more throughput or IOPS doesn't influence the price
majorly. Nevertheless, we leave it up to you to decide what the storage configuration for Premium SSD v2
should look like. For a base start, read SAP HANA Azure virtual machine Premium SSD v2 storage
configurations.
For the regions where this new block storage type is available and its current restrictions, read the
document Premium SSD v2.
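As an illustrative sketch, creating such a disk with extra provisioned IOPS and throughput via the Azure CLI could look like this; the resource group, disk name, zone, and the chosen values are placeholder assumptions:

```shell
# Sketch: create a Premium SSD v2 disk with IOPS and throughput provisioned
# on top of the default 3,000 IOPS / 125 MBps (names and values are placeholders).
az disk create \
    --resource-group rg-sap-hana \
    --name hana-data-disk-1 \
    --size-gb 512 \
    --sku PremiumV2_LRS \
    --zone 1 \
    --disk-iops-read-write 8000 \
    --disk-mbps-read-write 300
```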
Capability | Comment | Notes/Links
Shares/shared disk | Not available | Needs Azure Premium Files or Azure NetApp Files
Latency | Submillisecond | -
Maximum IOPS per disk | 80,000, dependent on disk size | Also consider VM limits
Disk bursting | No | -
Costs | Medium | -
In contrast to Azure premium storage, Azure Premium SSD v2 fulfills the SAP HANA storage latency KPIs. As a
result, you don't need to use Azure Write Accelerator caching as described in the article Enable Write
Accelerator.
Summary: Azure Premium SSD v2 is the block storage that offers the best price/performance ratio for SAP
workloads and is well suited to handle database workloads. Its submillisecond latency makes it ideal for
demanding DBMS workloads. However, it's a newer storage type that was released in November 2022, so
some limitations might still exist that are expected to go away over time.
As you create an ultra disk, you can define three dimensions:
The capacity of the disk. The range is from 4 GiB to 65,536 GiB
Provisioned IOPS for the disk. Different maximum values apply dependent on the capacity of the disk.
Read the article Ultra disk for more details
Provisioned storage bandwidth. Different maximum bandwidth applies dependent on the capacity of
the disk. Read the article Ultra disk for more details
The cost of a single disk is determined by these three dimensions, which you define for each particular disk
separately.
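The dependency of the provisionable IOPS on disk capacity can be sketched with a small helper. The 300 IOPS per GiB factor and the 160,000 per-disk cap used here are illustrative assumptions; check the Ultra disk article for the current limits:

```python
# Sketch: maximum provisionable IOPS for an ultra disk as a function of capacity.
# The per-GiB factor and the per-disk cap are assumptions for illustration only.
def max_ultra_iops(capacity_gib, iops_per_gib=300, per_disk_cap=160_000):
    return min(capacity_gib * iops_per_gib, per_disk_cap)

print(max_ultra_iops(4))     # smallest disk: at most 1,200 IOPS
print(max_ultra_iops(1024))  # larger disks hit the per-disk cap: 160,000
```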
Disk bursting | No | -
Note
The minimum provisioning size is a 4 TiB unit that is called a capacity pool. You then create volumes out
of this capacity pool, whereby the smallest volume you can build is 100 GiB. You can expand a capacity
pool in TiB steps. For pricing, check the article Azure NetApp Files Pricing
Note
So far, no DBMS workloads are supported on SMB volumes based on Azure NetApp Files.
As with Azure premium storage, a fixed or linear throughput size per GB can be a problem when you're
required to adhere to certain minimum throughput numbers, as is the case for SAP HANA. With ANF, this
problem can become more pronounced than with Azure premium disk. With Azure premium disk, you can
take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost
efficient and achieve higher throughput at lower capacity. This kind of striping doesn't work for NFS or SMB
shares hosted on ANF. This restriction has resulted in deployments of overcapacity like:
To achieve, for example, a throughput of 250 MiB/sec on an NFS volume hosted on ANF, you need to
deploy 1.95 TiB capacity of the Ultra service level.
To achieve 400 MiB/sec, you would need to deploy 3.125 TiB capacity. You may need this over-
provisioning of capacity to achieve the throughput you require of the volume. This over-provisioning
of capacity impacts the pricing of smaller HANA instances.
Using NFS on top of ANF for the SAP /sapmnt directory, you usually get far with the minimum
capacity of 100 GiB to 150 GiB that is enforced by Azure NetApp Files. However, customer experience
showed that the related throughput of 12.8 MiB/sec (using the Ultra service level) may not be enough
and may have a negative impact on the stability of the SAP system. In such cases, customers could
avoid issues by increasing the size of the /sapmnt volume so that more throughput is provided to that
volume.
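The linear throughput model behind these capacity numbers can be sketched as follows. The MiB/s-per-TiB factors reflect the three ANF service levels; the helper name is ours:

```python
# Azure NetApp Files throughput scales linearly with provisioned capacity.
# Approximate factors in MiB/s per TiB for the three service levels.
SERVICE_LEVEL_MIBS_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def anf_capacity_for_throughput(target_mibps, service_level="Ultra"):
    """Capacity (TiB) that must be provisioned to reach a target throughput."""
    return target_mibps / SERVICE_LEVEL_MIBS_PER_TIB[service_level]

print(anf_capacity_for_throughput(250))  # 1.953125 TiB, the ~1.95 TiB above
print(anf_capacity_for_throughput(400))  # 3.125 TiB
```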
Capability | Comment | Notes/Links
Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL
SAP sapmnt | Suitable | All systems; SMB (Windows only) or NFS (Linux only)
Shares/shared disk | Yes | SMB 3.0, NFS v3, and NFS v4.1
Important
Specifically for database deployments, you want to achieve low latencies for at least your redo logs.
Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log
writes of smaller sizes. To get to such latencies, see the possibilities below.
Important
Even for non-DBMS usage, you should use the preview functionality that allows you to create the NFS
share in the same Azure availability zone as you placed the VM(s) that should mount the NFS shares.
This functionality is documented in the article Manage availability zone volume placement for
Azure NetApp Files. The motivation for this type of availability zone alignment is the reduction of
risk surface that would result from the NFS shares being in yet another availability zone where you don't run VMs.
You go for the closest proximity between VM and NFS share that can be arranged by using
Application Volume Groups. The advantage of Application Volume Groups, besides allocating the best
proximity and with that creating the lowest latency, is that your different NFS shares for SAP HANA
deployments are distributed across different controllers in the Azure NetApp Files backend clusters.
The disadvantage of this method is that you need to go through a pinning process again, a process
that ends up restricting your VM deployment to a single datacenter instead of an availability zone as
in the first method introduced. This means less flexibility in changing VM sizes and VM families of the
VMs that have the NFS volumes mounted.
The current process of not using Application Volume Groups, which so far are available for SAP HANA
only. This process uses the same manual pinning process as Application Volume Groups. This method
has been used for the last three years. It has the same flexibility restrictions as the process with
Application Volume Groups.
As preferences for allocating NFS volumes based on ANF for database-specific usage, you should first
attempt to allocate the NFS volume in the same zone as your VM, especially for non-HANA databases. Only
if latency proves to be insufficient should you go through a manual pinning process. For smaller HANA
workloads or non-production HANA workloads, you should follow a zonal allocation method as well. Only in
cases where performance and latency aren't sufficient should you use Application Volume Groups.
Summary: Azure NetApp Files is a HANA-certified low latency storage that allows you to deploy NFS and
SMB volumes or shares. The storage comes with three different service levels that provide different
throughput and IOPS in a linear manner per GiB capacity of the volume. The ANF storage enables you to
deploy SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares
as needed for /sapmnt or the SAP global transport directory. ANF storage comes with functionality that is
available as native NetApp functionality.
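As an illustration, mounting an NFSv4.1 volume hosted on ANF from a Linux VM could look like the following sketch; the IP address, export path, and mount options are assumptions to adapt to your scenario:

```shell
# Sketch: mount an NFSv4.1 volume hosted on Azure NetApp Files.
# 10.0.0.4:/hana-shared is a placeholder for your ANF volume's mount target.
sudo mkdir -p /hana/shared
sudo mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp \
    10.0.0.4:/hana-shared /hana/shared
```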
Note
So far, no SAP DBMS workloads are supported on shared volumes based on Azure Premium Files.
Azure Premium Files starts with a larger amount of IOPS at the minimum share size of 100 GB compared
to Azure NetApp Files. This higher bar of IOPS can avoid capacity overprovisioning to achieve certain IOPS
and throughput values. For IOPS and storage throughput, read the section Azure file share scale targets in
Azure Files scalability and performance targets.
SAP sapmnt | Suitable | All systems; SMB (Windows only) or NFS (Linux only)
Resiliency | LRS and ZRS | No GRS available for Azure Premium Files
Latency | Low | -
HANA certified | No | -
Costs | Low | -
Summary: Azure Premium Files is a low latency storage that allows you to deploy NFS and SMB volumes or
shares. Azure Premium Files provides an excellent price/performance ratio for SAP application layer shares. It
also provides synchronous zonal replication for these shares. So far, we don't support this storage type for
SAP DBMS workload, though it can be used for /hana/shared volumes.
Capability | Comment | Notes/Links
Data disk | Restricted suitable | Some non-production systems with low IOPS and latency demands
Latency | High | Too high for SAP Global Transport directory, or production systems
IOPS SLA | No | -
Throughput SLA | No | -
HANA certified | No | -
Costs | Low | -
Summary: Azure standard SSD storage is the minimum recommendation for non-production VMs for the base
VHD, and for eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput
rates. This Azure storage type isn't supported anymore for hosting the SAP Global Transport Directory.
Latency | High | Too high for DBMS usage, SAP Global Transport directory, or sapmnt/saploc
IOPS SLA | No | -
Throughput SLA | No | -
HANA certified | No | -
Costs | Low | -
Summary: Standard HDD is an Azure storage type that should only be used to store SAP backups. It should
only be used as the base VHD for rather inactive systems, like retired systems used for looking up data here
and there. No active development, QA, or production VMs should be based on that storage, nor should
database files be hosted on that storage.
Storage type | Sizes for Linux | Sizes for Windows | Notes
Standard HDD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs
Standard SSD | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Likely hard to touch the storage limits of medium or large VMs
Premium Storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration
Premium SSD v2 | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration
Ultra disk storage | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Easy to hit IOPS or storage throughput VM limits with storage configuration
Azure NetApp Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth!
Azure Premium Files | Sizes for Linux VMs in Azure | Sizes for Windows VMs in Azure | Storage traffic is using network throughput bandwidth and not storage bandwidth!
The smaller the VM, the fewer disks you can attach. This restriction doesn't apply to ANF. Since you
mount NFS or SMB shares, you don't encounter a limit on the number of shared volumes to be attached
VMs have I/O throughput and IOPS limits that could easily be exceeded with premium storage disks
and Ultra disks
With ANF and Azure Premium Files, the traffic to the shared volumes consumes the VM's network
bandwidth and not storage bandwidth
With large NFS volumes in the double-digit TiB capacity space, the throughput accessing such a
volume out of a single VM is going to plateau based on limits of Linux for a single session interacting
with the shared volume.
As you up-size Azure VMs in the lifecycle of an SAP system, you should evaluate the IOPS and storage
throughput limits of the new and larger VM type. In some cases, it also could make sense to adjust the
storage configuration to the new capabilities of the Azure VM.
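This evaluation can be sketched as a simple check; the disk and VM throughput numbers below are illustrative:

```python
# Sketch: check that the summed provisioned disk throughput stays within
# the VM's storage throughput limit (illustrative numbers, see VM size docs).
def storage_fits_vm(disk_mbps, vm_limit_mbps):
    return sum(disk_mbps) <= vm_limit_mbps

# Example: 4 x P15 disks (125 MBps each) against two different VM limits.
print(storage_fits_vm([125, 125, 125, 125], 500))  # True
print(storage_fits_vm([125, 125, 125, 125], 400))  # False
```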
Striping or not striping
Creating a stripe set out of multiple Azure disks into one larger volume allows you to accumulate the IOPS
and throughput of the individual disks into one volume. It's used for Azure standard storage and Azure
premium storage only. Azure Ultra disk, where you can configure the throughput and IOPS independent of
the capacity of a disk, doesn't require the usage of stripe sets. Shared volumes based on NFS or SMB can't
be striped. Due to the non-linear nature of Azure premium storage throughput and IOPS, you can provision
smaller capacity with the same IOPS and throughput as larger single Azure premium storage disks. That is
the method to achieve higher throughput or IOPS at lower cost using Azure premium storage. For example:
Striping across two P15 premium storage disks gets you to a throughput of 250 MiB/sec. Such a
volume is going to have 512 GiB capacity. If you want to have a single disk that gives you 250 MiB/sec
of throughput, you would need to pick a P40 disk with 2 TiB capacity.
Striping four P10 premium storage disks gets you 400 MiB/sec with an overall capacity of 512 GiB. If
you would like to have a single disk with a minimum of 500 MiB/sec of throughput, you would need to
pick a P60 premium storage disk with 8 TiB. Because the cost of premium storage is nearly linear with
the capacity, you can sense the cost savings by using striping.
No in-VM storage redundancy (like RAID 1) should be configured, since Azure storage keeps the data
redundant already
The disks the stripe set is applied to need to be of the same size
With Premium SSD v2 and Ultra disk, the capacity, provisioned IOPS, and provisioned throughput
need to be the same
Striping across multiple smaller disks is the best way to achieve a good price/performance ratio using Azure
premium storage. It's understood that striping can have some extra deployment and management
overhead.
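The accumulation arithmetic from the example above can be sketched as follows, with per-disk values for the premium storage v1 SKUs mentioned in the text:

```python
# Per-disk (capacity GiB, throughput MiB/s) for some premium storage v1 SKUs.
PREMIUM_V1 = {"P10": (128, 100), "P15": (256, 125), "P40": (2048, 250)}

def stripe(sku, count):
    """Capacity and throughput accumulate across the disks of a stripe set."""
    capacity, mibps = PREMIUM_V1[sku]
    return count * capacity, count * mibps

print(stripe("P15", 2))  # (512, 250): same throughput as one 2 TiB P40
print(stripe("P10", 4))  # (512, 400)
```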
For specific stripe size recommendations, read the documentation for the different DBMS, like SAP HANA
Azure virtual machine storage configurations.
Next steps
Read the articles:
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA Azure virtual machine storage configurations
SAP HANA Azure virtual machine
storage configurations
Article • 03/19/2024
Azure provides different types of storage that are suitable for Azure VMs that are
running SAP HANA. The SAP HANA certified Azure storage types that can be
considered for SAP HANA deployments are:
To learn about these disk types, see the article Azure Storage types for SAP workload
and Select a disk type
Azure offers two deployment methods for VHDs on Azure Standard and premium
storage v1/v2. We expect you to take advantage of Azure managed disk for Azure
block storage deployments.
For a list of storage types and their SLAs in IOPS and storage throughput, review the
Azure documentation for managed disks .
Important
Independent of the Azure storage type chosen, the file system that is used on that
storage needs to be supported by SAP for the specific operating system and DBMS.
SAP support note #2972496 lists the supported file systems for different
operating systems and databases, including SAP HANA. This applies to all volumes
SAP HANA might access for reading and writing, for whatever task. When using
NFS on Azure for SAP HANA, additional restrictions on NFS versions apply, as
stated later in this article.
The minimum SAP HANA certified conditions for the different storage types are:
Based on experience gained with customers, we changed the support for combining
different storage types between /hana/data and /hana/log. It is supported to combine
the usage of the different Azure block storages that are certified for HANA AND NFS
shares based on Azure NetApp Files. For example, it's possible to put /hana/data onto
premium storage v1 or v2 and /hana/log can be placed on Ultra disk storage in order to
get the required low latency. If you use a volume based on ANF for /hana/data,
/hana/log volume can be placed on one of the HANA certified Azure block storage
types as well. Using NFS on top of ANF for one of the volumes (like /hana/data) and
Azure premium storage v1/v2 or Ultra disk for the other volume (like /hana/log) is
supported.
In the on-premises world, you rarely had to care about the I/O subsystems and their
capabilities. The reason was that the appliance vendor needed to make sure that the
minimum storage requirements are met for SAP HANA. As you build the Azure
infrastructure yourself, you should be aware of some of these SAP-issued requirements.
Some of the minimum throughput characteristics that SAP recommends are:
Low storage latency is critical for DBMS systems, even as DBMSs, like SAP
HANA, keep data in-memory. The critical path in storage is usually around the
transaction log writes of the DBMS systems. But also operations like writing savepoints
or loading data in-memory after crash recovery can be critical. Therefore, it's mandatory
to use Azure premium storage v1/v2, Ultra disk, or ANF for /hana/data and /hana/log
volumes.
Some guiding principles in selecting your storage configuration for HANA can be listed
like:
Decide on the type of storage based on Azure Storage types for SAP workload and
Select a disk type
Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on
a VM. Overall VM storage throughput is documented in the article Memory
optimized virtual machine sizes
When deciding on the storage configuration, try to stay below the overall
throughput of the VM with your /hana/data volume configuration. When writing
savepoints, SAP HANA can be aggressive issuing I/Os. It's easily possible to
push up to the throughput limits of your /hana/data volume when writing a savepoint.
If the disk(s) that build the /hana/data volume have a higher throughput than
your VM allows, you could run into situations where the throughput utilized by the
savepoint writing interferes with the throughput demands of the redo log writes. A
situation that can impact the application throughput
If you're considering using HANA System Replication, the storage used for
/hana/data on each replica must be the same, and the storage type used for /hana/log
on each replica must be the same. For example, using Azure premium storage v1 for
/hana/data with one VM and Azure Ultra disk for /hana/data in another VM
running a replica of the same HANA System Replication configuration isn't
supported
Important
Reading through the details, it's apparent that applying this functionality takes away
the complexities of volume manager based stripe sets. You also realize that the HANA data
volume partitioning isn't only working for Azure block storage, like Azure premium
storage v1/v2. You can use this functionality as well to stripe across NFS shares in case
these shares have IOPS or throughput limitations.
On Red Hat, leave the settings as established by the specific tune profiles for the
different SAP applications.
The stripe size for /hana/data was changed from the earlier recommendations calling
for 64 KB or 128 KB to 256 KB, based on customer experiences with more recent
Linux versions. The size of 256 KB provides slightly better performance. We also
changed the recommendation for stripe sizes of /hana/log from 32 KB to 64 KB in
order to get enough throughput with larger I/O sizes.
Note
You don't need to configure any redundancy level using RAID volumes, since Azure
block storage keeps three images of a VHD. The usage of a stripe set with Azure
premium disks is purely to configure volumes that provide sufficient IOPS and/or
I/O throughput.
Accumulating multiple Azure disks underneath a stripe set is accumulative from an IOPS
and storage throughput side. So, if you put a stripe set across 3 x P30 Azure
premium storage v1 disks, it should give you three times the IOPS and three times the
storage throughput of a single Azure premium storage v1 P30 disk.
Important
In case you're using LVM or mdadm as the volume manager to create stripe sets across
multiple Azure premium disks, the three SAP HANA file systems /data, /log and
/shared must not be put in a default or root volume group. It's highly
recommended to follow the Linux vendor's guidance, which is typically to create
individual volume groups for /data, /log and /shared.
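A minimal sketch of such a layout with LVM could look like the following. Device names, disk counts, and the XFS file system choice are assumptions for illustration; the stripe sizes follow the 256 KB (/hana/data) and 64 KB (/hana/log) recommendations given earlier in this article:

```shell
# Sketch: separate volume groups for /hana/data and /hana/log, each striped
# across its own set of attached disks (device names are placeholders).
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate vg_hana_data /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo lvcreate -i 4 -I 256 -l 100%FREE -n lv_hana_data vg_hana_data  # 256 KB stripes

sudo pvcreate /dev/sdg /dev/sdh /dev/sdi
sudo vgcreate vg_hana_log /dev/sdg /dev/sdh /dev/sdi
sudo lvcreate -i 3 -I 64 -l 100%FREE -n lv_hana_log vg_hana_log     # 64 KB stripes

sudo mkfs.xfs /dev/vg_hana_data/lv_hana_data
sudo mkfs.xfs /dev/vg_hana_log/lv_hana_log
```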
If the HANA system is in an HA configuration, slow responses from the shared file
system, that is, /hana/shared, could cause cluster resource timeouts. These timeouts may
lead to unnecessary failovers, because the HANA resource agents might incorrectly
assume that the database is not available.
The SAP guidelines for recommended /hana/shared sizes look like:
Next steps
For more information, see:
This document is about HANA storage configurations for Azure premium storage, or premium
SSD, as it was introduced years back as low latency storage for DBMS and other applications that
need low latency storage. For general considerations around stripe sizes when using LVM, HANA
data volume partitioning, or other considerations that are independent of the particular storage
type, check these two documents:
Important
The suggestions for the storage configurations in this document are meant as directions to
start with. Running workload and analyzing storage utilization patterns, you might realize
that you aren't utilizing all the storage bandwidth or IOPS provided. You might then consider
downsizing on storage. Or, on the contrary, your workload might need more storage
throughput than suggested with these configurations. As a result, you might need to
deploy more capacity, IOPS, or throughput. In the field of tension between required storage
capacity, storage latency, storage throughput and IOPS, and the least expensive
configuration, Azure offers enough different storage types with different capabilities and
different price points to find and adjust to the right compromise for you and your HANA
workload.
Important
When using Azure premium storage, the usage of Azure Write Accelerator for the
/hana/log volume is mandatory. Write Accelerator is available for premium storage and M-
Series and Mv2-Series VMs only. Write Accelerator doesn't work in combination with
other Azure VM families, like Esv3 or Edsv4.
The caching recommendations for Azure premium disks below assume the following I/O
characteristics for SAP HANA:
There's hardly any read workload against the HANA data files. Exceptions are large-sized
I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of
larger read I/Os against data files can be HANA database backups. As a result, read caching
mostly doesn't make sense, since in most cases, all data file volumes need to be read
completely.
Writing against the data files is experienced in bursts driven by HANA savepoints and
HANA crash recovery. Writing savepoints is asynchronous and isn't holding up any user
transactions. Writing data during crash recovery is performance critical in order to get the
system responding fast again. However, crash recovery should be a rather exceptional
situation
There are hardly any reads from the HANA redo files. Exceptions are large I/Os when
performing transaction log backups, crash recovery, or in the restart phase of a HANA
instance.
The main load against the SAP HANA redo log file is writes. Dependent on the nature of
the workload, you can have I/Os as small as 4 KB or, in other cases, I/O sizes of 1 MB or more.
Write latency against the SAP HANA redo log is performance critical.
All writes need to be persisted on disk in a reliable fashion
Recommendation: As a result of these I/O patterns observed with SAP HANA, the caching for
the different volumes using Azure premium storage should be set like:
The ideal cases where this burst functionality can be planned in is likely going to be the volumes
or disks that contain data files for the different DBMS. The I/O workload expected against those
volumes, especially with small to mid-ranged systems is expected to look like:
Low to moderate read workload since data ideally is cached in memory, or like with SAP
HANA should be completely in memory
Bursts of writes triggered by database checkpoints or savepoints that are issued on a
regular basis
Backup workload that reads in a continuous stream in cases where backups aren't
executed via storage snapshots
For SAP HANA, load of the data into memory after an instance restart
Especially on smaller DBMS systems where your workload handles only a few hundred
transactions per second, such burst functionality can also make sense for the disks or
volumes that store the transaction or redo log. The expected workload against such a disk or
volume looks like:
Regular writes to the disk that depend on the workload and its nature, since every
commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or rebuilding
indexes
Read bursts when performing transaction log or redo log backups
Important
SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write
Accelerator for the /hana/log volume. As a result, production scenario SAP HANA
deployments on Azure M-Series virtual machines are expected to be configured with Azure
Write Accelerator for the /hana/log volume.
Note
In scenarios that involve Azure premium storage, we are implementing burst capabilities
into the configuration. As you're using storage test tools of whatever shape or form, keep
the way Azure premium disk bursting works in mind. Running the storage tests delivered
through the SAP HWCCT or HCMT tool, we aren't expecting that all tests will pass the
criteria, since some of the tests will exceed the bursting credits you can accumulate,
especially when all the tests run sequentially without a break.
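The behavior those tests run into can be illustrated with a simplified credit-bucket model. The numbers and the accounting below are illustrative assumptions, not the exact Azure implementation:

```python
# Sketch of a burst-credit bucket: running below the provisioned rate banks
# credits (up to a cap); bursting above it spends them, then throttles.
def serve(provisioned, burst, demands, credits=0.0, credit_cap=1000.0):
    served = []
    for demand in demands:
        if demand <= provisioned:
            credits = min(credit_cap, credits + (provisioned - demand))
            served.append(demand)
        else:
            extra = min(demand - provisioned, min(credits, burst - provisioned))
            credits -= extra
            served.append(provisioned + extra)
    return served

# Two quiet steps bank 100 credits; the second burst step runs out of credits.
print(serve(100, 170, [50, 50, 170, 170]))  # [50, 50, 170, 130]
```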
Note
With M32ts and M32ls VMs, it can happen that disk throughput could be lower than
expected when using HCMT/HWCCT disk tests, even with disk bursting or with sufficiently
provisioned I/O throughput of the underlying disks. The root cause of the observed behavior
was that the HCMT/HWCCT storage test files were completely cached in the read cache of
the Premium storage data disks. This cache is located on the compute host that hosts the
virtual machine and can cache the test files of HCMT/HWCCT completely. In such a case, the
quotas listed in the column Max cached and temp storage throughput: IOPS/MBps (cache
size in GiB) in the article M-series are relevant. Specifically for M32ts and M32ls, the
throughput quota against the read cache is only 400 MB/sec. As a result of the test files
being completely cached, it's possible that despite disk bursting or higher provisioned I/O
throughput, the tests can fall slightly short of the 400 MB/sec maximum throughput. As an
alternative, you can test without read cache enabled on the Azure Premium storage data
disks.
Note
For production scenarios, check whether a certain VM type is supported for SAP HANA by
SAP in the SAP documentation for IAAS.
VM SKU | RAM | Max. VM I/O Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS
M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000
M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000
M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000
M32(d)ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000
M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000
M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000
M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
¹ VM type not available by default. Please contact your Microsoft account team
VM SKU | RAM | Max. VM I/O Throughput | /hana/log volume | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS
M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
M32(d)ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M416s_8_v2 | 7,600 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
M832ixs¹ | 14,902 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
M832ixs_v2¹ | 23,088 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000
¹ VM type not available by default. Please contact your Microsoft account team
¹ VM type not available by default. Please contact your Microsoft account team
² Review carefully the considerations for sizing /hana/shared
Check whether the storage throughput for the different suggested volumes meets the workload
that you want to run. If the workload requires higher throughput for /hana/data and /hana/log,
you need to increase the number of Azure premium storage VHDs. Sizing a volume with more
VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual
machine type.
Azure Write Accelerator only works with Azure managed disks . So at least the Azure premium
storage disks forming the /hana/log volume need to be deployed as managed disks. More
detailed instructions and restrictions of Azure Write Accelerator can be found in the article Write
Accelerator.
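As an illustrative sketch, enabling Write Accelerator on the managed disks behind /hana/log with the Azure CLI could look like this; resource group, VM name, and the LUNs of the log disks are placeholders:

```shell
# Sketch: enable Write Accelerator on the data disks at LUN 2 and 3 of an
# M-series VM (the disks forming /hana/log). Names are placeholders.
az vm update \
    --resource-group rg-sap-hana \
    --name hana-m64s-vm \
    --write-accelerator 2=true 3=true
```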
You may want to use Azure Ultra disk storage instead of Azure premium storage for the
/hana/log volume only, to be compliant with the SAP HANA certification KPIs when using E-series
VMs. Though, many customers use premium storage SSD disks for the /hana/log volume
for non-production purposes, or even for smaller production workloads, since the write latency
experienced with premium storage for the critical redo log writes meets the workload
requirements. The configurations for the /hana/data volume on Azure premium storage could
look like:
VM SKU | RAM | Max. VM I/O Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS
E20ds_v4 | 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E20(d)s_v5 | 160 GiB | 750 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E32ds_v5 | 256 GiB | 865 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500
E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E48ds_v5 | 384 GiB | 1,315 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E64s_v3 | 432 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E64(d)s_v5 | 512 GiB | 1,735 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
E96(d)s_v5 | 672 GiB | 2,600 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500
For the other volumes, including /hana/log on Ultra disk, the configuration could look like:
¹ Review carefully the considerations for sizing /hana/shared
VM SKU | RAM | Max. VM I/O Throughput | /hana/data and /hana/log striped with LVM or MDADM | /hana/shared³ | /root volume | /usr/sap | Comments
¹ Azure Write Accelerator can't be used with the Ev4 and Ev5 VM families. As a result of using
Azure premium storage, the I/O latency won't be less than 1 ms
² The VM family supports Azure Write Accelerator, but there's a potential that the IOPS limit of
Write Accelerator could limit the disk configuration's IOPS capabilities
When combining the data and log volume for SAP HANA, the disks building the striped volume
shouldn't have read cache or read/write cache enabled.
There are VM types listed that aren't certified by SAP and as such aren't listed in the so-called
SAP HANA hardware directory. Feedback from customers was that those non-listed VM types
were used successfully for some non-production tasks.
Next steps
For more information, see:
This document is about HANA storage configurations for Azure Premium SSD v2. Azure Premium SSD v2 is a
newer storage type that was developed to provide more flexible block storage with submillisecond latency for
general purpose and DBMS workloads. Premium SSD v2 simplifies the way you build storage architectures and
lets you tailor and adapt the storage capabilities to your workload. Premium SSD v2 allows you to configure
and pay for capacity, IOPS, and throughput independent of each other.
For general considerations around stripe sizes when using LVM, HANA data volume partitioning or other
considerations that are independent of the particular storage type, check these two documents:
Important
The suggestions for the storage configurations in this document are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you're not utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or, on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. To balance the required storage capacity, latency, throughput, and IOPS against the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
With Premium SSD v2, you pay for the exact deployed capacity, unlike premium disk and Ultra disk, where brackets of sizes determine the costs of capacity.
Every Premium SSD v2 storage disk comes with 3,000 IOPS and 125 MBps of throughput that is included in the capacity pricing.
Extra IOPS and throughput on top of the defaults that come with each disk can be provisioned at any point in time and are charged separately.
Changes to the provisioned IOPS and throughput can be executed once every 6 hours.
The latency of Premium SSD v2 is lower than premium storage, but higher than Ultra disk. It is submillisecond, so that it passes the SAP HANA KPIs without the help of any other functionality, like Azure Write Accelerator.
Like with Ultra disk, you can use Premium SSD v2 for /hana/data and /hana/log volumes without the need of any accelerators or other caches.
Like Ultra disk, Premium SSD v2 doesn't offer caching options as premium storage does.
With Premium SSD v2, the same storage configuration applies to the HANA-certified Ev4, Ev5, and M-series VMs that offer the same memory.
Unlike premium storage, there's no disk bursting for Premium SSD v2.
Not having Azure Write Accelerator support or support by other caches makes the configuration of Premium SSD v2 for the different VM families easier and more unified, and avoids variations that need to be considered in deployment automation. Not having bursting capabilities makes the throughput and IOPS delivered more deterministic and reliable. Since Premium SSD v2 is a newer storage type, there are still some restrictions related to its features and capabilities. To read up on these limitations and the differences between the different storage types, start with the document Azure managed disk types.
Note
The configurations suggested below keep the HANA minimum KPIs, as listed in SAP HANA Azure virtual machine storage configurations, in mind. Our tests so far gave no indications that, with the values listed, SAP HCMT tests would fail in throughput or latency. That said, not all possible variations and combinations of stripe sets stretched across multiple disks or different stripe sizes were tested. Tests conducted with striped volumes across multiple disks were done with the stripe sizes documented in SAP HANA Azure virtual machine storage configurations.
Note
For production scenarios, check whether a certain VM type is supported for SAP HANA by SAP in the SAP documentation for IaaS.
When you look up the price list for Azure managed disks, it becomes apparent that the cost scheme introduced with Premium SSD v2 gives you two general paths to pursue:
You try to simplify your storage architecture by using a single disk for /hana/data and /hana/log and pay for more IOPS and throughput as needed to achieve the levels we recommend below. With the awareness that a single disk has a throughput level of 1,200 MBps and 80,000 IOPS.
You want to benefit from the 3,000 IOPS and 125 MBps that come for free with each disk. To do so, you would build multiple smaller disks that sum up to the capacity you need and then build a striped volume with a logical volume manager across these disks. Striping across multiple disks gives you the possibility to reduce the IOPS and throughput cost factors, but results in more effort in automating deployments and operating such solutions.
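The two paths above can be sketched as a quick feasibility check. The single-disk limits of 80,000 IOPS and 1,200 MBps and the example targets are taken from this article; the helper function itself is a hypothetical illustration, not an official sizing tool:

```python
# Sketch: decide between the single-disk and striped multi-disk approach for a
# Premium SSD v2 volume. Limits per this article: a single disk tops out at
# 80,000 IOPS and 1,200 MBps of throughput.

DISK_MAX_IOPS = 80_000
DISK_MAX_MBPS = 1_200

def single_disk_feasible(target_iops: int, target_mbps: int) -> bool:
    """True if one Premium SSD v2 disk can be provisioned to the target."""
    return target_iops <= DISK_MAX_IOPS and target_mbps <= DISK_MAX_MBPS

# /hana/data on M128s (12,000 IOPS, 800 MBps) fits on one disk;
# /hana/data on M832ixs (40,000 IOPS, 2,000 MBps) exceeds the per-disk
# throughput cap, so a striped volume across multiple disks is needed.
print(single_disk_feasible(12_000, 800))
print(single_disk_feasible(40_000, 2_000))
```

Once the single-disk cap is exceeded, the striped multi-disk path is the only option, independent of cost considerations.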
Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200 MBps throughput. There might be a point where you need to stretch /hana/data across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time. And that HANA savepoints are critical and demand high throughput for the /hana/data volume.
Important
You can define the sector size of Azure Premium SSD v2 as 512 bytes or 4,096 bytes. The default sector size is 4,096 bytes. Tests conducted with HCMT didn't reveal any significant differences in performance and throughput between the different sector sizes. This sector size is different from the stripe sizes that you need to define when using a logical volume manager.
Recommendation: The recommended starting configurations with Azure Premium SSD v2 for production scenarios look like:
VM SKU | RAM | Max. VM I/O Throughput | Max VM IOPS | /hana/data capacity | /hana/data throughput | /hana/data IOPS
E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 GB | 425 MBps | 3,000
E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 192 GB | 425 MBps | 3,000
E32ds_v4 | 256 GiB | 769 MBps | 51,200 | 304 GB | 425 MBps | 3,000
E32ds_v5 | 256 GiB | 865 MBps | 51,200 | 304 GB | 425 MBps | 3,000
E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 GB | 425 MBps | 3,000
E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 464 GB | 425 MBps | 3,000
E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 608 GB | 425 MBps | 3,000
E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 608 GB | 425 MBps | 3,000
E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 800 GB | 425 MBps | 3,000
M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000
M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000
M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000
M32(d)ms_v2 | 875 GiB | 500 MBps | 30,000 | 1,056 GB | 425 MBps | 3,000
M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 1,232 GB | 600 MBps | 5,000
M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1,232 GB | 600 MBps | 5,000
M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000
M832ixs1 | 14,902 GiB | larger than 2,000 MBps | 80,000 | 19,200 GB | 2,000 MBps2 | 40,000
M832ixs_v21 | 23,088 GiB | larger than 2,000 MBps | 80,000 | 28,400 GB | 2,000 MBps2 | 60,000
1 VM type not available by default. Please contact your Microsoft account team.
2 The maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more throughput and IOPS.
VM SKU | RAM | Max. VM I/O Throughput | Max VM IOPS | /hana/log capacity | /hana/log throughput | /hana/log IOPS | /hana/shared2 capacity (using default IOPS and throughput)
E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB
E32(d)s_v5 | 256 GiB | 865 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB
E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB
E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB
E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 504 GB
E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 256 GB | 275 MBps | 3,000 | 512 GB
E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 512 GB | 275 MBps | 3,000 | 672 GB
M32ls | 256 GiB | 500 MBps | 20,000 | 128 GB | 275 MBps | 3,000 | 256 GB
M64ls | 512 GiB | 1,000 MBps | 40,000 | 256 GB | 275 MBps | 3,000 | 512 GB
M32(d)ms_v2 | 875 GiB | 500 MBps | 20,000 | 512 GB | 275 MBps | 3,000 | 875 GB
M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB
M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB
M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB
M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB
M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 512 GB | 350 MBps | 4,500 | 1,024 GB
M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB
M416s_8_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB
M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB
M832ixs1 | 14,902 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB
M832ixs_v21 | 23,088 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB
1 VM type not available by default. Please contact your Microsoft account team.
2 Review carefully the considerations for sizing /hana/shared
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher throughput for /hana/data and /hana/log, you need to increase the IOPS and/or throughput on the individual disks you're using.
A few examples of how combining multiple Premium SSD v2 disks with a stripe set could impact the requirement to provision more IOPS or throughput for /hana/data are displayed in this table:
VM SKU | RAM | Number of disks | Individual disk capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS provisioned for volume | Proposed throughput | Default throughput provisioned | Extra throughput provisioned
E32(d)s_v5 | 256 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps
E32(d)s_v5 | 256 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps
E96(d)s_v5 | 672 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps
E96(d)s_v5 | 672 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps
M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 2,464 GB | 12,000 | 3,000 | 9,000 | 800 MBps | 125 MBps | 675 MBps
M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 1,232 GB | 12,000 | 6,000 | 6,000 | 800 MBps | 250 MBps | 550 MBps
M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 4 | 616 GB | 12,000 | 12,000 | 0 | 800 MBps | 500 MBps | 300 MBps
M416ms_v2 | 11,400 GiB | 1 | 13,680 GB | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps
M416ms_v2 | 11,400 GiB | 2 | 6,840 GB | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps
M416ms_v2 | 11,400 GiB | 4 | 3,420 GB | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps
M832ixs1 | 14,902 GiB | 2 | 7,451 GB | 40,000 | 6,000 | 34,000 | 2,000 MBps | 250 MBps | 1,750 MBps
M832ixs1 | 14,902 GiB | 4 | 3,726 GB | 40,000 | 12,000 | 28,000 | 2,000 MBps | 500 MBps | 1,500 MBps
M832ixs1 | 14,902 GiB | 8 | 1,863 GB | 40,000 | 24,000 | 16,000 | 2,000 MBps | 1,000 MBps | 1,000 MBps
1 VM type not available by default. Please contact your Microsoft account team.
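The arithmetic behind the table above can be sketched as follows, assuming the per-disk baseline of 3,000 IOPS and 125 MBps stated earlier in this article; the function name is a hypothetical illustration and the example values mirror the M416ms_v2 rows:

```python
# Sketch: split a /hana/data target across N Premium SSD v2 disks and compute
# what must be provisioned beyond the free baseline of each disk.

BASE_IOPS = 3_000  # IOPS included with every Premium SSD v2 disk
BASE_MBPS = 125    # MBps included with every Premium SSD v2 disk

def stripe_row(capacity_gb: int, target_iops: int, target_mbps: int,
               disks: int) -> tuple[int, int, int]:
    """Return (per-disk capacity GB, extra IOPS to buy, extra MBps to buy)."""
    per_disk_gb = capacity_gb // disks
    extra_iops = max(0, target_iops - BASE_IOPS * disks)
    extra_mbps = max(0, target_mbps - BASE_MBPS * disks)
    return per_disk_gb, extra_iops, extra_mbps

# M416ms_v2, /hana/data: 13,680 GB total, 25,000 IOPS, 1,200 MBps proposed
for n in (1, 2, 4):
    print(n, stripe_row(13_680, 25_000, 1_200, n))
```

More disks mean more free baseline IOPS and throughput, so the extra amounts you pay for shrink, at the cost of managing a striped logical volume.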
For /hana/log, a similar approach of using two disks could look like:
VM SKU | RAM | Number of disks | Individual disk capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS provisioned for volume | Proposed throughput | Default throughput provisioned | Extra throughput provisioned
E32(d)s_v5 | 256 GiB | 1 | 128 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps
E96(d)s_v5 | 672 GiB | 1 | 512 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps
E96(d)s_v5 | 672 GiB | 2 | 256 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps
M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 512 GB | 4,000 | 3,000 | 1,000 | 300 MBps | 125 MBps | 175 MBps
M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 256 GB | 4,000 | 6,000 | 0 | 300 MBps | 250 MBps | 50 MBps
M416ms_v2 | 11,400 GiB | 1 | 512 GB | 5,000 | 3,000 | 2,000 | 400 MBps | 125 MBps | 275 MBps
M416ms_v2 | 11,400 GiB | 2 | 256 GB | 5,000 | 6,000 | 0 | 400 MBps | 250 MBps | 150 MBps
M832ixs1 | 14,902 GiB | 1 | 512 GB | 9,000 | 3,000 | 6,000 | 600 MBps | 125 MBps | 475 MBps
M832ixs1 | 14,902 GiB | 2 | 256 GB | 9,000 | 6,000 | 3,000 | 600 MBps | 250 MBps | 350 MBps
1 VM type not available by default. Please contact your Microsoft account team.
These tables, combined with the prices of IOPS and throughput, should give you an idea of how striping across multiple Premium SSD v2 disks could reduce the costs for the particular storage configuration you're looking at. Based on these calculations, you can decide whether to move ahead with a single disk approach for /hana/data and/or /hana/log.
Next steps
For more information, see:
This document is about HANA storage configurations for Azure Ultra Disk storage, which was introduced as ultra-low-latency storage for DBMS and other applications that need it. For general considerations around stripe sizes when using LVM, HANA data volume partitioning, or other considerations that are independent of the particular storage type, check these two documents:
Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput range, instead of using logical volume managers like LVM or MDADM on top of Azure premium storage to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the performance-critical /hana/data and /hana/log volumes and cover the other volumes with Azure premium storage.
Another advantage of Ultra disk can be the better read latency in comparison to premium storage. The faster read latency can have advantages when you want to reduce HANA startup times and the subsequent load of the data into memory. Advantages of Ultra disk storage can also be felt when HANA is writing savepoints.
Note
Ultra disk might not be present in all Azure regions. For detailed information on where Ultra disk is available and which VM families are supported, check the article What disk types are available in Azure?.
Important
You can define the sector size of Ultra disk as 512 bytes or 4,096 bytes. The default sector size is 4,096 bytes. Tests conducted with HCMT didn't reveal any significant differences in performance and throughput between the different sector sizes. This sector size is different from the stripe sizes that you need to define when using a logical volume manager.
Production recommended storage solution with pure
Ultra disk configuration
In this configuration, you keep the /hana/data and /hana/log volumes separate. The suggested values are derived from the KPIs that SAP uses to certify VM types for SAP HANA and from storage configurations recommended in the SAP TDI Storage Whitepaper.
The recommendations often exceed the SAP minimum requirements as stated earlier in this article. The listed recommendations are a compromise between the size recommendations by SAP and the maximum storage throughput the different VM types provide.
Note
Azure Ultra disk enforces a minimum of 2 IOPS per gigabyte of disk capacity.
VM SKU | RAM | Max. VM I/O Throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS
E20ds_v4 | 160 GiB | 480 MBps | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MBps | 1,800
E32ds_v4 | 256 GiB | 768 MBps | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800
E48ds_v4 | 384 GiB | 1,152 MBps | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800
E64ds_v4 | 504 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800
E64s_v3 | 432 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800
M32ts | 192 GiB | 500 MBps | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800
M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800
M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800
M32(d)ms_v2 | 875 GiB | 500 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500
M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500
M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500
M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4,800 GB | 750 MBps | 9,600 | 512 GB | 250 MBps | 2,500
M208s_v2 | 2,850 GiB | 1,000 MBps | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500
M208ms_v2 | 5,700 GiB | 1,000 MBps | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500
M416s_v2 | 5,700 GiB | 2,000 MBps | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000
M416s_8_v2 | 7,600 GiB | 2,000 MBps | 9,500 GB | 1,250 MBps | 20,000 | 512 GB | 400 MBps | 4,000
M416ms_v2 | 11,400 GiB | 2,000 MBps | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000
M832ixs1 | 14,902 GiB | larger than 2,000 MBps | 19,200 GB | 2,000 MBps2 | 40,000 | 512 GB | 600 MBps | 9,000
M832ixs_v21 | 23,088 GiB | larger than 2,000 MBps | 28,400 GB | 2,000 MBps2 | 60,000 | 512 GB | 600 MBps | 9,000
1 VM type not available by default. Please contact your Microsoft account team.
2 The maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more throughput and IOPS.
The values listed are intended to be a starting point and need to be evaluated against the real demands. The advantage of Azure Ultra disk is that the values for IOPS and throughput can be adapted without the need to shut down the VM or halt the workload applied to the system.
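The 2 IOPS per gigabyte floor from the note above matters when you adapt provisioned IOPS downward online; a small sketch, with hypothetical helper names, illustrates the constraint:

```python
# Sketch of the Ultra disk floor of 2 IOPS per GiB of capacity. When lowering
# provisioned IOPS online, you can't go below this minimum for a given size.

def ultra_min_iops(capacity_gib: int) -> int:
    # Azure Ultra disk enforces at least 2 IOPS per GiB of capacity
    return 2 * capacity_gib

def clamp_provisioned_iops(requested_iops: int, capacity_gib: int) -> int:
    # when downsizing, never request fewer IOPS than the enforced minimum
    return max(requested_iops, ultra_min_iops(capacity_gib))

print(ultra_min_iops(512))               # a 512 GiB /hana/log disk needs >= 1,024 IOPS
print(clamp_provisioned_iops(500, 512))  # a request below the floor is raised
```

In practice the proposed IOPS values in the table above are well over this floor; it only becomes relevant when aggressively downsizing after workload analysis.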
Note
So far, storage snapshots with Ultra disk storage aren't available. This blocks the usage of VM snapshots with Azure Backup Services.
Next steps
For more information, see:
Azure NetApp Files provides native NFS shares that can be used for /hana/shared,
/hana/data, and /hana/log volumes. Using ANF-based NFS shares for the /hana/data
and /hana/log volumes requires the usage of the v4.1 NFS protocol. The NFS protocol
v3 isn't supported for the usage of /hana/data and /hana/log volumes when basing the
shares on ANF.
Important considerations
When considering Azure NetApp Files for SAP NetWeaver and SAP HANA, be aware of the following important considerations:
ANF-based NFS shares and the virtual machines that mount those shares must be in the same Azure virtual network or in peered virtual networks in the same region.
The selected virtual network must have a subnet delegated to Azure NetApp Files. For SAP workload, it's highly recommended to configure a /25 range for the subnet delegated to ANF.
It's important to have the virtual machines deployed in sufficient proximity to the Azure NetApp storage for lower latency as, for example, demanded by SAP HANA for redo log writes.
Azure NetApp Files now has functionality to deploy NFS volumes into specific Azure Availability Zones. Such zonal proximity is going to be sufficient in the majority of cases to achieve a latency of less than 1 millisecond. The functionality is in public preview and described in the article Manage availability zone volume placement for Azure NetApp Files. This functionality doesn't require any interactive process with Microsoft to achieve proximity between your VM and the NFS volumes you allocate.
To achieve the most optimal proximity, the functionality of Application Volume Groups is available. This functionality looks not only for the closest proximity, but also for the most optimal placement of the NFS volumes, so that HANA data and redo log volumes are handled by different controllers. The disadvantage is that this method needs some interactive process with Microsoft to pin your VMs.
Make sure the latency from the database server to the ANF volume is measured
and below 1 millisecond
The throughput of an Azure NetApp volume is a function of the volume quota and service level, as documented in Service levels for Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements. Alternatively, consider using a manual QoS capacity pool, where volume capacity and throughput can be configured and scaled independently (SAP HANA specific examples are in this document).
Azure NetApp Files offers an export policy: you can control the allowed clients and the access type (Read & Write, Read Only, and so on).
The User ID for sidadm and the Group ID for sapsys on the virtual machines must
match the configuration in Azure NetApp Files.
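The quota/service-level relationship mentioned above can be sketched numerically. The throughput factors assumed here, 16, 64, and 128 MiB/s per TiB of quota for the Standard, Premium, and Ultra service levels, should be verified against the current Azure NetApp Files service-level documentation before any sizing decision:

```python
# Sketch: ANF volume throughput as a function of volume quota and service
# level. The per-TiB factors below are assumptions to be verified against the
# current Azure NetApp Files documentation.

FACTORS_MIBS_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def anf_throughput_mibs(quota_tib: float, level: str) -> float:
    """Estimated volume throughput in MiB/s for a given quota and level."""
    return quota_tib * FACTORS_MIBS_PER_TIB[level]

# A 4 TiB Premium volume yields roughly 256 MiB/s. To reach a 400 MiB/s
# target on the Premium level, the quota must be sized accordingly:
print(anf_throughput_mibs(4, "Premium"))
print(400 / FACTORS_MIBS_PER_TIB["Premium"])  # TiB of quota needed for 400 MiB/s
```

This is why, with auto QoS, volumes are often sized larger than their capacity requirement: the quota drives the throughput.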
Important
For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity.
Important
If there's a mismatch between the User ID for sidadm and the Group ID for sapsys between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes mounted to the VM would be displayed as nobody . Make sure to specify the correct User ID for sidadm and the Group ID for sapsys.
Mounting of multiple ANF-hosted NFS volumes with different service levels in one VM
The maximum write throughput for a volume and a single Linux session is between 1.2 and 1.4 GB/s. Having multiple sessions against one ANF-hosted NFS volume can increase the throughput.
For Linux OS releases that support nconnect as a mount option and some important
configuration considerations of nconnect, especially with different NFS server endpoints,
read the document Linux NFS mount options best practices for Azure NetApp Files.
It's important to understand the performance relationship between volume size and service level, and that there are physical limits for a storage endpoint of the service. Each storage endpoint is dynamically injected into the Azure NetApp Files delegated subnet upon volume creation and receives an IP address. Azure NetApp Files volumes can, depending on available capacity and deployment logic, share a storage endpoint.
The table below demonstrates that it could make sense to create a large "Standard" volume to store backups, and that it doesn't make sense to create an "Ultra" volume larger than 12 TB, because the maximal physical bandwidth capacity of a single volume would be exceeded.
If you require more write throughput for your /hana/data volume than a single Linux session can provide, you could also use SAP HANA data volume partitioning as an alternative. SAP HANA data volume partitioning stripes the I/O activity during data reload or HANA savepoints across multiple HANA data files that are located on multiple NFS shares. For more details on HANA data volume partitioning, read these articles:
1 Write or single-session read throughput limits (in case the NFS mount option nconnect isn't used)
It's important to understand that the data is written to the same SSDs in the storage backend. The performance quota from the capacity pool was created to be able to manage the environment. The storage KPIs are equal for all HANA database sizes. In almost all cases, this assumption doesn't reflect the reality and the customer expectation. The size of HANA systems doesn't necessarily mean that a small system requires low storage throughput and a large system requires high storage throughput. But generally, we can expect higher throughput requirements for larger HANA database instances. As a result of SAP's sizing rules for the underlying hardware, such larger HANA instances also provide more CPU resources and higher parallelism in tasks like loading data after an instance restart. As a result, the volume sizes should be adapted to the customer expectations and requirements, and not be driven only by pure capacity requirements.
As you design the infrastructure for SAP in Azure, you should be aware of some minimum storage throughput requirements (for production systems) by SAP. These requirements translate into minimum throughput characteristics of:
Volume type and I/O type | Minimum KPI demanded by SAP | Premium service level | Ultra service level
Since all three KPIs are demanded, the /hana/data volume needs to be sized toward the larger capacity to fulfill the minimum read requirements. If you're using manual QoS capacity pools, the size and throughput of the volumes can be defined independently. Since both capacity and throughput are taken from the same capacity pool, the pool's service level and size must be large enough to deliver the total performance (see example here).
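The pool sizing constraint described above can be sketched as follows, assuming an Ultra service level factor of 128 MiB/s per TiB (verify against the ANF documentation); in practice the pool size would still be rounded up to whole TiB:

```python
# Sketch: with a manual QoS capacity pool, volume size and throughput are set
# independently, but the pool must be large enough that the sum of all volume
# throughputs fits within (pool size in TiB) x (service-level factor).
# The Ultra factor of 128 MiB/s per TiB is an assumption to verify.

ULTRA_MIBS_PER_TIB = 128

def min_pool_tib(volume_throughputs_mibs: list[float]) -> float:
    """Smallest Ultra pool (in TiB) that can deliver the summed throughput."""
    return sum(volume_throughputs_mibs) / ULTRA_MIBS_PER_TIB

# Example: data 400 MiB/s + log 250 MiB/s + shared 64 MiB/s on one Ultra pool
print(min_pool_tib([400, 250, 64]))
```

Even if the volumes themselves are small in capacity, the pool must be sized for the aggregate throughput, which is exactly the point of the "pool's service level and size must be large enough" statement above.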
For HANA systems that don't require high bandwidth, the ANF volume throughput can be lowered either by a smaller volume size or, when using manual QoS, by adjusting the throughput directly. And if a HANA system requires more throughput, the volume can be adapted by resizing the capacity online. No KPIs are defined for backup volumes. However, the backup volume throughput is essential for a well-performing environment. Log and data volume performance must be designed to the customer expectations.
Important
Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-1.4 GB/sec of bandwidth utilized by a consumer in a single session. This has to do with the underlying architecture of the ANF offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article Performance benchmark test results for Azure NetApp Files were conducted against one shared NFS volume with multiple client VMs, and as a result with multiple sessions. That scenario is different from the scenario we measure in SAP, where we measure throughput from a single VM against an NFS volume hosted on ANF.
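The single-session plateau can be turned into a rough estimate of how many NFS volumes (and with them, HANA data volume partitions) a given VM would need to exploit its full storage bandwidth. The 1,200 MBps session limit below is the conservative end of the range stated above, and the helper is a hypothetical illustration:

```python
# Sketch: estimate how many ANF-hosted NFS volumes are needed so that a VM's
# storage throughput isn't capped by the single-session NFS limit.

import math

SESSION_LIMIT_MBPS = 1_200  # conservative end of the 1.2-1.4 GB/s plateau

def partitions_needed(vm_throughput_mbps: float) -> int:
    """Number of NFS volumes/partitions to spread /hana/data across."""
    return max(1, math.ceil(vm_throughput_mbps / SESSION_LIMIT_MBPS))

print(partitions_needed(1_000))  # e.g. M64s: one volume is enough
print(partitions_needed(4_000))  # e.g. M176(d)s_4_v3: spread across 4 volumes
```

This is a back-of-the-envelope check only; mount options such as nconnect, discussed earlier, can also raise the effective per-volume throughput.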
To meet the SAP minimum throughput requirements for data and log, and according to
the guidelines for /hana/shared, the recommended sizes would look like:
The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operation processes. For backups, you could consolidate many volumes for different SAP HANA instances into one (or two) larger volumes, which could have a lower service level of ANF.
Note
The Azure NetApp Files sizing recommendations stated in this document target the minimum requirements SAP expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be enough. Use these recommendations as a starting point and adapt them based on the requirements of your specific workload.
Therefore, you could consider deploying similar throughput for the ANF volumes as listed for Ultra disk storage already. Also consider the sizes listed for the volumes of the different VM SKUs as done in the Ultra disk tables already.
Tip
You can resize Azure NetApp Files volumes dynamically, without the need to unmount the volumes, stop the virtual machines, or stop SAP HANA. That allows you to adapt volume capacity, and with it throughput, to the demands of your workload.
For systems using High Availability (HA) with Pacemaker and Azure Load Balancer, the following settings need to be implemented in the file /etc/sysctl.d/91-NetApp-HANA.conf:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
Systems running without Pacemaker and Azure Load Balancer should implement these settings in /etc/sysctl.d/91-NetApp-HANA.conf:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
Using the form, you need to request a pinning of the empty AvSet to compute hardware to ensure that VMs aren't going to move.
Assign a PPG to the Availability Set and start a VM assigned to this Availability Set
Use Azure NetApp Files application volume group for SAP HANA functionality to
deploy your HANA volumes
The proximity placement group configuration to use AVGs in an optimal way would look like:
The diagram shows that you're going to use an Azure proximity placement group for the DBMS layer, so that it can be used together with AVGs. It's best to include only the VMs that run the HANA instances in the proximity placement group. The proximity placement group is necessary, even if only one VM with a single HANA instance is used, for the AVG to identify the closest proximity of the ANF hardware, and to allocate the NFS volumes on ANF as close as possible to the VM(s) that are using them. This method generates the most optimal results as it relates to low latency, not only by getting the NFS volumes and VMs as close together as possible, but also by taking into account the placement of the data and redo log volumes across different controllers on the NetApp backend. Though, the disadvantage is that your VM deployment is pinned down to one datacenter. With that, you're losing flexibility in changing VM types and families. As a result, you should limit this method to the systems that absolutely require such low storage latency. For all other systems, you should attempt the deployment with a traditional zonal deployment of the VM and ANF. In most cases, this is sufficient in terms of low latency. This also ensures easy maintenance and administration of the VM and ANF.
Availability
ANF system updates and upgrades are applied without impacting the customer
environment. The defined SLA is 99.99% .
The same applies to the volume that you write full HANA database backups to.
Backup
Besides streaming backups and the Azure Backup service backing up SAP HANA databases, as described in the article Backup guide for SAP HANA on Azure Virtual Machines, Azure NetApp Files opens the possibility to perform storage-based snapshot backups.
Warning
Missing the last step or failing to perform it has a severe impact on SAP HANA's memory demand and can lead to a halt of SAP HANA.
This snapshot backup procedure can be managed in various ways, using various tools. One example is the Python script "ntaphana_azure.py" available on GitHub at https://github.com/netapp/ntaphana . This is sample code, provided "as-is" without any maintenance or support.
Caution
A snapshot in itself isn't a protected backup, since it's located on the same physical storage as the volume you just took a snapshot of. It's mandatory to "protect" at least one snapshot per day to a different location. This can be done in the same environment, in a remote Azure region, or on Azure Blob storage.
The most advanced feature is the SYNC option. If you use the SYNC option, azcopy keeps the source and the destination directory synchronized. The usage of the parameter --delete-destination is important. Without this parameter, azcopy doesn't delete files at the destination site, and the space utilization on the destination side would grow. Create a Block Blob container in your Azure storage account. Then create the SAS key for the blob container and synchronize the snapshot folder to the Azure Blob container.
For example, if a daily snapshot should be synchronized to the Azure blob container to protect the data, and only that one snapshot should be kept, the command below can be used.
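A hedged sketch of such an azcopy sync invocation follows; the storage account, container name, SAS token, and snapshot path are hypothetical placeholders that you must replace with your own values, and the command is built here in Python only to make the parameters explicit:

```python
# Sketch: build the azcopy sync command described above. The paths and URL are
# hypothetical placeholders. --delete-destination=true removes blobs that no
# longer exist in the source, so only the synchronized snapshot is kept.

import subprocess

snapshot_dir = "/hana/data/SID/mnt00001/.snapshot/daily"  # hypothetical path
container_url = (
    "https://<storage-account>.blob.core.windows.net/hana-snap?<SAS-token>"
)  # hypothetical container URL with SAS key

cmd = [
    "azcopy", "sync",
    snapshot_dir,
    container_url,
    "--delete-destination=true",  # prune files removed from the source side
]

print(" ".join(cmd))
# To execute for real: subprocess.run(cmd, check=True)
```

Run this from the VM that mounts the ANF volume, after the daily snapshot has been created, for example from a cron job or a scheduler of your choice.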
Next steps
Read the article:
This document is about Azure Premium Files file shares used for SAP workload. Both
NFS volumes and SMB file shares are covered. For considerations around Azure NetApp
Files for SMB or NFS volumes, see the following two documents:
Important
The storage configurations suggested in this document are meant as starting
points. After running your workload and analyzing storage utilization patterns,
you might find that you aren't using all of the storage bandwidth or IOPS
provided; you could then consider downsizing the storage. Or, on the contrary,
your workload might need more storage throughput than these configurations
suggest; as a result, you might need to deploy more capacity to increase IOPS or
throughput. Balancing the required storage capacity, latency, throughput, and
IOPS against the least expensive configuration, Azure offers enough different
storage types with different capabilities and different price points to find and
adjust to the right compromise for you and your SAP workload.
For SAP workloads, the supported uses of Azure Files shares are:
Note
No SAP DBMS workloads are supported on Azure Premium Files volumes, NFS or
SMB. For support restrictions on Azure storage types for the SAP
NetWeaver/application layer of S/4HANA, read SAP support note 2015553.
Important considerations for Azure Premium
Files shares with SAP
When you plan your deployment with Azure Files, consider the following important
points. The term share in this section applies to both SMB share and NFS volume.
The minimum share size is 100 gibibytes (GiB). With Azure Premium Files, you pay
for the capacity of the provisioned shares.
Size your file shares not only based on capacity requirements, but also on IOPS
and throughput requirements. For details, see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues with NFS
on Azure Files, consult Troubleshoot Azure file share performance.
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces.
Don't use the saptrans share for any other activity, such as interfaces.
If your SAP system has a heavy load of batch jobs, you might have millions of job
logs. If the SAP batch job logs are stored in the file system, pay special attention to
the sizing of the sapmnt share. Reorganize the job log files regularly as per SAP
note 16083 . As of SAP_BASIS 7.52, the default behavior for the batch job logs is
to be stored in the database. For details, see SAP note 2360818 | Job log in the
database .
Avoid consolidating the shares for too many SAP systems in a single storage
account. Storage accounts also have scalability and performance targets; be
careful not to exceed those limits.
In general, don't consolidate the shares for more than five SAP systems in a single
storage account. This guideline helps you avoid exceeding the storage account
limits and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for non-production and production
SAP systems in the same storage account.
Use a private endpoint with Azure Files. In the unlikely event of a zonal failure, your
NFS sessions automatically redirect to a healthy zone. You don't have to remount
the NFS shares on your VMs. Use of Private Link can result in extra charges for the
data processed; see details about Private Link pricing.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Premium Files doesn't currently support automatic cross-region replication
for disaster recovery scenarios. See guidelines on DR for SAP applications for
available options.
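To make the share guidance above concrete: an NFS volume on Azure Premium Files is mounted with NFS protocol version 4.1. The following /etc/fstab line is a config-fragment sketch; the storage account name sapnfsacct, the share name sapmnt-sid, and the mount point are hypothetical:

```
# /etc/fstab (config fragment) - hypothetical account and share names
sapnfsacct.file.core.windows.net:/sapnfsacct/sapmnt-sid  /sapmnt/SID  nfs  vers=4,minorversion=1,sec=sys,nofail  0  0
```

The nofail option keeps the VM bootable if the share is temporarily unreachable; check the Azure Files documentation for the mount options recommended for your distribution.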
Carefully consider consolidating multiple activities into one file share, or multiple
file shares into one storage account. Distributing these shares onto separate storage
accounts improves throughput and resiliency, and simplifies performance analysis. If
many SAP SIDs and shares are consolidated onto a single Azure Files storage account,
and the storage account's performance is poor because the throughput limits are hit,
it can become difficult to identify which SID or volume is causing the problem.
Next steps
For more information, see:
Data archiving has always been a critical decision-making item and is heavily used by
many companies to organize their legacy data for cost benefits, balancing the need to
comply with regulations and retain data for a certain period against the cost of
storing the data. Customers planning to migrate to S/4HANA or a HANA-based solution,
or to reduce their existing data storage footprint, can leverage the various data
tiering options supported on Azure.
This article describes options on Azure with emphasis on classifying the data usage
pattern.
Overview
SAP HANA is an in-memory database and is supported on SAP certified servers. Azure
provides more than 100 solutions certified to run SAP HANA. In-memory capabilities
of SAP HANA allow customers to execute business transactions at an incredible speed.
But do you need fast access to all data, at any given point in time? Food for thought.
Most organizations choose to offload less-accessed SAP data to a HANA storage tier, or
archive legacy data to an extended solution, to get maximum performance out of their
investment. This tiering of data helps balance the SAP HANA footprint and effectively
reduces cost and complexity throughout.
Customers can refer to the table below for data tier characteristics and choose to move
data to the temperature tier as per desired usage.
"One size fits all" approach does not work here. Post data characterization is done, the
next step is to map SAP solution to the data tiering solution that is supported by SAP on
Azure.
| SAP solution | Azure deployment | Warm data tiering options | Cold data tiering options |
| --- | --- | --- | --- |
| Native SAP HANA | Certified HANA VMs | SAP HANA Dynamic Tiering, HANA extension node, NSE | DLM with Data Intelligence, DLM with Hadoop |
| SAP BW/4HANA | Certified VMs | SAP NSE, HANA extension node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |
| SAP BW on HANA | Certified VMs | SAP NSE, HANA extension node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |
2462641 - Is HANA Dynamic Tiering supported for Business Suite on HANA, or other
SAP applications ( S/4, BW ) ? - SAP ONE Support Launchpad
2140959 - SAP HANA Dynamic Tiering - Additional Information - SAP ONE Support
Launchpad
2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support
Launchpad
2816823 - Use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP
Business Suite powered by SAP HANA - SAP ONE Support Launchpad
Configuration
Overview
The capacity of an SAP HANA database with NSE is the amount of hot data in memory plus
the warm data stored on disk. NSE allocates a buffer cache in HANA main memory that is
sized separately from SAP HANA hot and working memory. Per SAP documentation, the
buffer cache is enabled by default and is sized by default at 10% of HANA memory.
Be aware that NSE is not a replacement for data archiving, as it doesn't reduce the
HANA disk size. Unlike data archiving, though, activation of NSE can be reversed.
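As a sketch of how that default can be adjusted: the buffer cache limit is a HANA ini parameter (buffer_cache_cs / max_size_rel, per SAP Note 2799997; verify the names for your HANA release). The snippet only prints an hdbsql invocation; the database name, user, password, and instance number are placeholders:

```shell
# Cap the NSE buffer cache at 10% of HANA memory (max_size_rel is a
# percentage). Database, user, and instance number are placeholders.
sql="ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('buffer_cache_cs','max_size_rel') = '10' WITH RECONFIGURE"
printf 'hdbsql -i 00 -d HDB -u SYSTEM -p <password> "%s"\n' "$sql"
```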
2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support
Launchpad
2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA
and SAP Business Suite powered by SAP HANA - SAP ONE Support Launchpad
NSE is supported for scale-up and scale-out systems. Availability for scale-out systems
starts with SAP HANA 2.0 SPS 04. Refer to SAP Note 2927591 to understand the
functional restrictions.
2927591 - SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions - SAP
ONE Support Launchpad
SAP HANA NSE disaster recovery on Azure can be achieved using a variety of methods,
including:
HANA System Replication: HANA System Replication allows you to create a copy of
your SAP HANA NSE system in another Azure zone or region of choice. This copy is
continuously kept in sync with your production SAP HANA NSE system. In the event of
a disaster, failover can be triggered to the disaster recovery SAP HANA NSE
system.
Backup and restore: You can also use backup and restore to protect your SAP
HANA NSE system from disaster. You can back up your SAP HANA NSE system to
Azure Backup, and then restore it to a new SAP HANA NSE system in the event of a
disaster. Native Azure backup capabilities can be leveraged here.
Azure Site Recovery: Azure Site Recovery is a disaster recovery service that can be
used to replicate the VMs running your SAP HANA NSE system to another Azure region.
Azure Site Recovery provides several features that make it a good choice for SAP
HANA NSE disaster recovery, such as:
Point-in-time restore, which allows you to restore your SAP HANA NSE system
to a specific point in time.
Automated failover and failback, which can help you to quickly recover your SAP
HANA NSE system in the event of a disaster.
The best method for SAP HANA NSE disaster recovery on Azure will depend on your
specific needs and requirements.
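For the HANA System Replication option listed above, the secondary site is registered with hdbnsutil. This sketch only prints the command to run on the secondary; the site name, remote host, and instance number are placeholders, and async/logreplay is one common choice for cross-region distances:

```shell
# Register the DR system as a system replication secondary. Asynchronous
# replication suits cross-region latency; logreplay keeps the secondary
# ready to take over. All names below are placeholders.
printf '%s\n' "hdbnsutil -sr_register --remoteHost=hana-primary --remoteInstance=00 --replicationMode=async --operationMode=logreplay --name=DR-SITE"
```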
Restore SAP HANA database instances on Azure VMs - Azure Backup | Microsoft Learn
HANA extension nodes are supported for BW on HANA, BW/4HANA, and SAP HANA
native applications. For SAP BW on HANA, you need SAP HANA 1.0 SPS 12 as the
minimum HANA release and BW 7.4 SP12 as the minimum BW release. For SAP HANA
native applications, you need SAP HANA 2.0 SPS 03 as the minimum HANA release.
The extension node setup is based on the HANA scale-out offering. Customers with a
scale-up architecture need to extend to a scale-out deployment. Apart from the HANA
standard license, no additional license is required. An extension node can't share the
same OS, network, and disks with a HANA standard node.
Networking Configuration
Configure the networking settings for the Azure VMs to ensure proper communication
between the SAP HANA primary node and the extension nodes. This includes
configuring Azure virtual network (VNet) settings, subnets, and network security groups
(NSGs) to allow the necessary network traffic.
Implement a robust backup and recovery strategy to protect your SAP HANA data.
Azure offers various backup options, including Azure Backup or SAP HANA-specific
backup tools. Configure regular backups of both the primary and extension nodes to
ensure data integrity and availability.
Data tiering and extension nodes for SAP HANA on Azure (Large Instances) - Azure
Virtual Machines | Microsoft Learn
Let's explore three common scenarios for SAP HANA data tiering using Azure services.
SAP Data Intelligence enables the integration of SAP HANA with Azure Data Lake
Storage. Cold data can be seamlessly moved from the in-memory tier to ADLS,
leveraging its cost-effective storage capabilities. SAP Data Intelligence facilitates the
orchestration of data pipelines, allowing for transparent access and query execution on
data residing in ADLS.
You can leverage the capabilities and services offered by Azure in conjunction with SAP
Data Intelligence. Here are a few integration options:
Azure Data Lake Storage integration
SAP Data Intelligence supports integration with Azure Data Lake Storage, which is a
scalable and secure data storage solution in Azure. You can configure connections in
SAP Data Intelligence to access and process data stored in Azure Data Lake Storage. This
allows you to leverage the power of SAP Data Intelligence for data ingestion, data
transformation, and advanced analytics on data residing in Azure.
SAP Data Intelligence provides a wide range of connectors and transformations that
facilitate data movement and transformation tasks. You can configure SAP Data
Intelligence pipelines to extract cold data from SAP HANA, transform it if necessary, and
load it into Azure Blob Storage. This ensures seamless data transfer and enables further
processing or analysis on the tiered data.
SAP HANA provides query federation capabilities that seamlessly combine data from
different storage tiers. With SAP HANA Smart Data Access (SDA) and SAP Data
Intelligence, you can federate queries to access data stored in SAP HANA and Azure
Blob Storage as if it were in a single location. This transparent data access allows users
and applications to retrieve and analyze data from both tiers without the need for
manual data movement or complex integration.
Azure Synapse Analytics is a cloud-based analytics service that combines big data and
data warehousing capabilities. You can integrate SAP Data Intelligence with Azure
Synapse Analytics to perform advanced analytics and data processing on large volumes
of data. SAP Data Intelligence can connect to Azure Synapse Analytics to execute data
pipelines, transformations, and machine learning tasks leveraging the power of Azure
Synapse Analytics.
SAP Data Intelligence can also integrate with other Azure services like Azure Blob
Storage, Azure SQL Database, Azure Event Hubs, and more. This allows you to leverage
the capabilities of these Azure services within your data workflows and processing tasks
in SAP Data Intelligence.
You can provision virtual machines (VMs) in Azure and install SAP IQ on those VMs.
Azure Blob Storage is a scalable and cost-effective cloud storage service provided by
Microsoft Azure. With SAP HANA Data Tiering, organizations can integrate SAP IQ with
Azure Blob Storage to store the data that has been tiered off from SAP HANA.
SAP HANA Data Tiering enables organizations to define policies and rules to
automatically move cold data from SAP HANA to SAP IQ in Azure Blob Storage. This
data movement can be performed based on data aging criteria or business rules. Once
the data is in SAP IQ, it can be efficiently compressed and stored, optimizing storage
utilization.
SAP HANA provides query federation capabilities, allowing queries to seamlessly access
and combine data from SAP HANA and SAP IQ as if it were in a single location. This
transparent data access ensures that users and applications can retrieve and analyze
data from both tiers without the need for manual data movement or complex
integration.
It's important to note that the specific steps and configurations may vary based on your
requirements, SAP IQ version, and Azure deployment options. Therefore, referring to the
official documentation and consulting with SAP and Azure experts is highly
recommended for a successful deployment of SAP IQ on Azure with data tiering.
Azure Virtual Machines is the solution for organizations that need compute, storage,
and network resources, in minimal time, and without lengthy procurement cycles. You
can use Azure Virtual Machines to deploy classic applications such as SAP NetWeaver-
based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability without
additional on-premises resources. Azure Virtual Machines supports cross-premises
connectivity, so you can integrate Azure Virtual Machines into your organization's on-
premises domains, private clouds, and SAP system landscape.
Infrastructure preparation.
SAP installation steps for deploying high-availability SAP systems in Azure by using
the Azure Resource Manager deployment model.
Important
In these articles, you learn how to help protect single point of failure (SPOF)
components, such as SAP Central Services (ASCS/SCS) and database management
systems (DBMS). You also learn about redundant components in Azure, such as SAP
application server.
Azure Virtual Machines high availability architecture and scenarios for SAP
NetWeaver
Prepare Azure infrastructure for SAP high availability by using a SUSE Linux
Enterprise Server cluster framework for SAP ASCS/SCS instances
Prepare Azure infrastructure for SAP high availability by using a SUSE Linux
Enterprise Server cluster framework for SAP ASCS/SCS instances with Azure
NetApp files
Install SAP NetWeaver high availability by using a Windows failover cluster and
shared disk for SAP ASCS/SCS instances
Install SAP NetWeaver high availability by using a Windows failover cluster and
file share for SAP ASCS/SCS instances
Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server
cluster framework for SAP ASCS/SCS instances
Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server
cluster framework for SAP ASCS/SCS instances with Azure NetApp Files
Terminology definitions
High availability: Refers to a set of technologies that minimize IT disruptions by
providing business continuity of IT services through redundant, fault-tolerant, or
failover-protected components inside the same data center. In our case, the data center
resides within one Azure region.
Disaster recovery: Also refers to the minimizing of IT services disruption and their
recovery, but across various data centers that might be hundreds of miles away from
one another. In our case, the data centers might reside in various Azure regions within
the same geopolitical region or in locations as established by you as a customer.
For example, high availability can include compute (VMs), network, or storage and
its benefits for increasing the availability of SAP applications.
If you decide not to use functionalities such as Windows Server Failover Clustering
(WSFC) or Pacemaker on Linux, Azure VM restart is utilized. It restores functionality
in the SAP systems if there is planned or unplanned downtime of the Azure
physical server infrastructure and the overall underlying Azure platform.
To achieve full SAP system high availability, you must protect all critical SAP system
components. For example:
Redundant SAP application servers.
Unique components. An example might be a single point of failure (SPOF)
component, such as an SAP ASCS/SCS instance or a database management
system (DBMS).
SAP high availability in Azure differs from SAP high availability in an on-premises
physical or virtual environment.
The basis for the calculation is 30 days per month, or 43,200 minutes. For example, a
0.05% downtime corresponds to 21.6 minutes. The overall availability of several
dependent services is calculated by multiplying the availability figures of the
individual services.
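As an illustrative calculation (the 99.9% figures are example SLAs, not values quoted in this article), the availability of a VM that depends on its storage is the product of the two SLAs:

```latex
% Overall availability of serially dependent services:
\[
A_{\mathrm{system}} = A_{\mathrm{VM}} \times A_{\mathrm{storage}}
                    = 0.999 \times 0.999 \approx 0.998
\]
% That is about 99.8% availability, or roughly
% 43200 x 0.002 = 86.4 minutes of potential downtime per month.
```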
When two or more VMs are part of the same availability set, each virtual machine in the
availability set is assigned an update domain and a fault domain by the underlying Azure
platform.
Update domains guarantee that multiple VMs aren't rebooted at the same time
during the planned maintenance of an Azure infrastructure. Only one VM is
rebooted at a time.
Fault domains guarantee that VMs are deployed on hardware components that
don't share a common power source and network switch. When servers, a network
switch, or a power source undergo an unplanned downtime, only one VM is
affected.
For more information, see manage the availability of virtual machines in Azure using
availability set.
When using Availability Zones, there are some things to consider:
You can't deploy Azure availability sets within an Availability Zone. The only way
to combine availability sets and Availability Zones is through proximity placement
groups. For more information, see the article Combine availability sets and
availability zones with proximity placement groups.
You can't use the Basic Load Balancer to create failover cluster solutions based on
Windows Failover Cluster Services or Linux Pacemaker. Instead you need to use the
Azure Standard Load Balancer SKU.
Azure Availability Zones don't guarantee a certain distance between the different
zones within one region.
The network latency between Azure Availability Zones can differ from Azure region
to Azure region. There are cases where you, as a customer, can reasonably run the
SAP application layer deployed across different zones, because the network latency
from one zone to the active DBMS VM is still acceptable from a business process
impact point of view. In other customer scenarios, the latency between the active
DBMS VM in one zone and an SAP application instance in a VM in another zone can be
too intrusive and not acceptable for the SAP business processes. As a result, the
deployment architectures need to differ: an active/active architecture for the
application, or an active/passive architecture if latency is too high.
Using Azure managed disks is mandatory for deploying into Azure Availability
Zones.
A virtual machine scale set with flexible orchestration offers the flexibility to
create the scale set within a region or span it across availability zones. When you
create a flexible scale set within a region with platformFaultDomainCount > 1 (FD>1),
the VMs deployed in the scale set are distributed across the specified number of
fault domains in the same region. On the other hand, creating the flexible scale set
across availability zones with platformFaultDomainCount = 1 (FD=1) distributes the
VMs across the different zones, and the scale set also distributes VMs across
different fault domains within each zone on a best-effort basis. For SAP workloads,
only the flexible scale set with FD=1 is supported.
The advantage of using flexible scale sets with FD=1 for cross-zonal deployment,
instead of a traditional availability zone deployment, is that the VMs deployed with
the scale set are distributed across different fault domains within each zone in a
best-effort manner. To avoid the limitations associated with using proximity
placement groups to ensure VM availability across Azure datacenters or under each
network spine, we advise deploying SAP workloads across availability zones using a
flexible scale set with FD=1. This deployment strategy ensures that VMs deployed in
each zone aren't restricted to a single datacenter or network spine, and that all SAP
system components, such as databases, ASCS/ERS, and the application tier, are scoped
at the zonal level.
So, for new SAP workload deployments across availability zones, we advise using a
flexible scale set with FD=1. For more information, see the virtual machine scale set
for SAP workload document.
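As a sketch (the resource group, scale set name, VM SKU, and image are placeholders; verify the flags against the current az CLI reference), a zone-spanning flexible scale set with FD=1 could be created as follows. The snippet prints the command rather than executing it:

```shell
# Flexible orchestration spanning three zones with one fault domain per
# zone (FD=1), as recommended for SAP workloads. Names are placeholders.
printf '%s\n' "az vmss create --resource-group sap-rg --name sap-app-vmss --orchestration-mode Flexible --platform-fault-domain-count 1 --zones 1 2 3 --vm-sku Standard_E16ds_v5 --image <image-urn>"
```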
Because Azure Storage keeps three images of the data by default, the use of RAID 5 or
RAID 1 across multiple Azure disks is unnecessary.
We recommend that you use managed disks because they simplify the deployment and
management of your virtual machines.
| | Deployment across availability zones | Deployment in a single zone | Deployment in a region without zones |
| --- | --- | --- | --- |
| Recommended option for a high-availability SAP system | Flexible scale set with FD=1 | Availability sets with proximity placement groups | Availability sets |
| Alternative option | Availability sets and Availability Zones with proximity placement groups | Flexible scale set with FD=1 (select only one zone) | Flexible scale set with FD=1 (no zones are defined) |
| Compute to storage fault domain alignment | No | No | Yes |
Deployment across different zones in a region: For the highest availability, SAP
systems should be deployed across different zones in a region. This ensures that if
one zone is unavailable, the SAP system continues to be available in another zone.
If you're deploying new SAP workloads across availability zones, it's advised to use
the flexible virtual machine scale set with FD=1 deployment option. It allows you to
deploy multiple VMs across different zones in a region without worrying about
capacity constraints or placement groups. The scale set framework makes sure that
the VMs deployed with the scale set are distributed across different fault domains
within each zone in a best-effort manner. All highly available SAP components, like
SAP ASCS/ERS and SAP databases, are distributed across different zones, whereas
multiple application servers in each zone are distributed across different fault
domains on a best-effort basis.
Deployment in a single zone of a region: To deploy your high-availability SAP
system regionally in a location with multiple availability zones, and if it's essential
for all components of the system to be in a single zone, then it's advised to use
Availability Sets with Proximity Placement Groups deployment option. This
approach allows you to group all SAP system components in a single availability
zone, ensuring that the virtual machines within the availability set are spread
across different fault and update domains. While this deployment aligns compute
to storage fault domains, proximity isn't guaranteed. However, as this deployment
option is regional, it doesn't support Azure Site Recovery for zone-to-zone disaster
recovery. Moreover, this option restricts the entire SAP deployment to one data
center, which may lead to capacity limitations if you need to change the SKU size
or scale-out application instances.
Deployment in a region with no zones: If you're deploying your SAP system in a
region that doesn't have any zones, it's advised to use Availability sets. This option
provides redundancy and fault tolerance by placing VMs in different fault domains
and update domains.
Important
It should be noted that the deployment options for Azure regions are only
suggestions. The most suitable deployment strategy for your SAP system will
depend on your particular requirements and environment.
For more information about the approach, see Utilize Azure infrastructure VM restart to
achieve higher availability of the SAP system.
The next sections discuss how to achieve high availability for all three critical SAP system
components.
Depending on the deployment type (flexible scale set with FD=1, availability zone or
availability set), you must distribute your SAP application server instances accordingly to
achieve redundancy.
Unmanaged disks only: When you use unmanaged disks with an availability set, it's
important to recognize that the Azure storage account becomes a single point of
failure. Therefore, it's imperative to have a minimum of two Azure storage accounts,
across which at least two virtual machines are distributed. In an ideal setup, the
disks of each virtual machine that is running an SAP dialog instance would be
deployed in a different storage account.
Important
We strongly recommend that you use Azure managed disks for your SAP high-
availability installations. Because managed disks automatically align with the
availability set of the virtual machine they are attached to, they increase the
availability of your virtual machine and the services that are running on it.
You can use a WSFC solution to protect the SAP ASCS/SCS instance. Based on the type
of cluster share configuration (file share or shared disk), you can refer to appropriate
solution based on your storage type.
Windows
Multi-SID is supported with WSFC, using file share and shared disk. For more
information about multi-SID high-availability architecture on Windows, see:
File share: SAP ASCS/SCS instance multi-SID high availability for Windows Server
Failover Clustering and file share.
Shared disk: SAP ASCS/SCS instance multi-SID high availability for Windows Server
Failover Clustering and shared disk.
Linux
Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited
to five SAP SIDs on the same cluster. For more information about multi-SID high-
availability architecture on Linux, see:
SUSE Linux Enterprise Server (SLES): HA for SAP NW on Azure VMs on SLES for SAP
applications multi-SID guide.
Red Hat Linux Enterprise (RHEL): HA for SAP NW on Azure VMs on RHEL for SAP
applications multi-SID guide.
Database DR recommendation
If you decide not to use functionalities such as Windows Server Failover Clustering
(WSFC) or Pacemaker on Linux (currently supported only for SUSE Linux Enterprise
Server [SLES] 12 and later), Azure VM restart is utilized. It protects SAP systems against
planned and unplanned downtime of the Azure physical server infrastructure and overall
underlying Azure platform.
Note
Azure VM restart primarily protects VMs and not applications. Although VM restart
doesn't offer high availability for SAP applications, it does offer a certain level of
infrastructure availability. It also indirectly offers “higher availability” of SAP
systems. There is also no SLA for the time it takes to restart a VM after a planned or
unplanned host outage, which makes this method of high availability unsuitable for
the critical components of an SAP system. Examples of critical components might
be an ASCS/SCS instance or a database management system (DBMS).
Another important infrastructure element for high availability is storage. For example,
the Azure Storage SLA is 99.9% availability. If you deploy all VMs and their disks in a
single Azure storage account, potential Azure Storage unavailability will cause the
unavailability of all VMs that are placed in that storage account and all SAP components
that are running inside of the VMs.
Instead of putting all VMs into a single Azure storage account, you can use dedicated
storage accounts for each VM. By using multiple independent Azure storage accounts,
you increase overall VM and SAP application availability.
Azure managed disks are automatically placed in the fault domain of the virtual machine
they are attached to. If you place two virtual machines in an availability set and use
managed disks, the platform takes care of distributing the managed disks into different
fault domains as well. If you plan to use a premium storage account, we highly
recommend using managed disks.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high
availability and storage accounts might look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure high
availability and managed disks might look like this:
For critical SAP components, you have achieved the following so far:
You can ensure this configuration by using Azure availability sets. For more
information, see the Azure availability sets section.
Each SAP application server instance is placed in its own Azure storage account.
The potential unavailability of one Azure storage account will cause the
unavailability of only one VM with its SAP application server instance. However, be
aware that there is a limit on the number of Azure storage accounts within one
Azure subscription. To ensure automatic start of an ASCS/SCS instance after the
VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile.
For more information, see High availability for SAP application servers.
Even if you use managed disks, the disks are stored in an Azure storage account
and might be unavailable in the event of a storage outage.
In this scenario, utilize Azure VM restart to protect the VM with the installed SAP
ASCS/SCS instance. In the case of planned or unplanned downtime of Azure
servers, VMs are restarted on another available server. As mentioned earlier, Azure
VM restart primarily protects VMs and not applications, in this case the ASCS/SCS
instance. Through the VM restart, you indirectly reach “higher availability” of the
SAP ASCS/SCS instance.
To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the
Autostart parameter in the ASCS/SCS instance start profile. This setting means that
the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM
will determine the availability of the whole SAP landscape.
As in the preceding SAP ASCS/SCS instance use case, you utilize Azure VM restart
to protect the VM with installed DBMS software, and you achieve “higher
availability” of DBMS software through VM restart.
A DBMS that's running in a single VM is also a SPOF, and it is the determinative
factor for the availability of the whole SAP landscape.
Assuming a typical Azure scenario of one SAP application server instance in a VM and a
single VM eventually getting restarted, Autostart is not critical. But you can enable it by
adding the following parameter into the start profile of the SAP Advanced Business
Application Programming (ABAP) or Java instance:
Autostart = 1
Note
For more information about Autostart for SAP instances, see the following articles:
Next steps
For information about full SAP NetWeaver application-aware high availability, see SAP
application high availability on Azure IaaS.
SAP workload configurations with Azure
Availability Zones
Article • 06/01/2023
In a typical SAP NetWeaver or S/4HANA architecture, you need to protect three
different layers:
SAP application layer, which can be one to a few dozen VMs. You want to minimize
the chance of VMs getting deployed on the same host server. You also want those
VMs in acceptable proximity to the DBMS layer to keep network latency in an
acceptable window.
SAP ASCS/SCS layer, which represents a single point of failure in the SAP
NetWeaver and S/4HANA architecture. You usually look at two VMs that you want
to cover with a failover framework. Therefore, these VMs should be allocated in
different infrastructure fault domains.
SAP DBMS layer, which represents a single point of failure as well. In the usual
cases, it consists of two VMs that are covered by a failover framework.
Therefore, these VMs should be allocated in different infrastructure fault domains.
Exceptions are SAP HANA scale-out deployments, where more than two VMs can
be used.
The major differences between deploying your critical VMs through availability sets or
Availability Zones are:
Deploying with an availability set lines up the VMs within the set in a single
zone or datacenter (whichever applies for the specific region). As a result, the
deployment through the availability set isn't protected against power, cooling, or
networking issues that affect the datacenter(s) of the zone as a whole. On the plus
side, the VMs are aligned with update and fault domains within that zone or
datacenter. Specifically for the SAP ASCS or DBMS layer, where we protect two VMs
per availability set, the alignment with fault domains prevents both VMs from
ending up on the same host hardware.
Deploying VMs through Azure Availability Zones and choosing different zones
(a maximum of three is possible) deploys the VMs across different physical
locations, and with that adds protection against power, cooling, or networking
issues that affect the datacenter(s) of a zone as a whole. However, if you deploy
more than one VM of the same VM family into the same Availability Zone, there's
no protection against those VMs ending up on the same host or in the same fault
domain. As a result, deploying through Availability Zones is ideal for the SAP
ASCS and DBMS layers, where we usually look at two VMs each. For the SAP
application layer, which can consist of drastically more than two VMs, you might
need to fall back to a different deployment model (described later in this article).
Your motivation for a deployment across Azure Availability Zones should be that,
on top of covering the failure of a single critical VM or reducing downtime for
software patching within a critical VM, you want to protect against larger
infrastructure issues that might affect the availability of one or multiple Azure
datacenters.
When you deploy Azure VMs across Availability Zones and establish failover solutions
within the same Azure region, some restrictions apply:
You must use Azure Managed Disks when you deploy to Azure Availability
Zones.
The mapping of zone enumerations to the physical zones is fixed on an Azure
subscription basis. If you're using different subscriptions to deploy your SAP
systems, you need to define the ideal zones for each subscription. If you want to
compare the logical mappings of your different subscriptions, consider the
Avzone-Mapping script.
You can't deploy Azure availability sets within an Azure Availability Zone unless you
use an Azure proximity placement group. How you can deploy the SAP DBMS
layer and the central services across zones, and at the same time deploy the SAP
application layer using availability sets while still achieving close proximity of the
VMs, is documented in the article Azure Proximity Placement Groups for optimal
network latency with SAP applications. If you aren't using Azure proximity
placement groups, you need to choose one or the other as a deployment
framework for virtual machines.
You can't use an Azure Basic Load Balancer to create failover cluster solutions
based on Windows Server Failover Clustering or Linux Pacemaker. Instead, you
need to use the Azure Standard Load Balancer SKU.
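As a minimal, hedged sketch, such a zone-capable internal load balancer can be created with the Standard SKU in Azure PowerShell. The resource group, virtual network, and subnet names below are hypothetical and assumed to already exist; frontend IP, probe, and load-balancing rules for the cluster would be added on top of this.

```powershell
# Hypothetical names: an internal Standard SKU load balancer frontend
# placed in the SAP subnet. The Basic SKU would not work across zones.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "SAP-PRD-RG" -Name "sap-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "sap-subnet"
$fe     = New-AzLoadBalancerFrontendIpConfig -Name "ascs-frontend" -SubnetId $subnet.Id

New-AzLoadBalancer -ResourceGroupName "SAP-PRD-RG" -Name "sap-ascs-lb" `
    -Location "westeurope" -Sku "Standard" -FrontendIpConfiguration $fe
```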
Active/active: The pair of VMs running ASCS/SCS and the pair of VMs running the
DBMS layer are distributed across two zones. The VMs running the SAP
application layer are deployed in even numbers across the same two zones. If a
DBMS or ASCS/SCS VM fails over, some of the open and active transactions
might be rolled back. But users remain logged in. It doesn't really matter in
which of the zones the active DBMS VM and the application instances run. This
architecture is the preferred architecture to deploy across zones.
Active/passive: The pair of VMs running ASCS/SCS and the pair of VMs running the
DBMS layer are distributed across two zones. The VMs running the SAP
application layer are deployed into one of the Availability Zones. You run the
application layer in the same zone as the active ASCS/SCS and DBMS instance. You
use this deployment architecture if the network latency across the different zones
is too high to run the application layer distributed across the zones. Instead, the
SAP application layer needs to run in the same zone as the active ASCS/SCS and/or
DBMS instance. If an ASCS/SCS or DBMS VM fails over to the secondary zone, you
might encounter higher network latency and with that a reduction of throughput.
And you're required to fail back the previously failed-over VM as soon as possible
to get back to the previous throughput levels. If a zonal outage occurs, the
application layer needs to be failed over to the secondary zone, an activity that
users experience as a complete system shutdown. In some Azure regions, this
architecture is the only viable architecture when you want to use Availability Zones.
If you can't accept the potential impact of an ASCS/SCS or DBMS VM failing over
to the secondary zone, you might be better off staying with availability set
deployments.
So before you decide how to use Availability Zones, you need to determine:
The network latency among the three zones of an Azure region. Knowing the
network latency between the zones of a region is going to enable you to choose
the zones with the least network latency in cross-zone network traffic.
The difference between VM-to-VM latency within one of the zones, of your
choosing, and the network latency across two zones of your choosing.
A determination of whether the VM types that you need to deploy are available in
the two zones that you selected. With some VM SKUs, you might encounter
situations in which some SKUs are available in only two of the three zones.
Deploy the VM SKU you want to use for your DBMS instance in all three zones.
Make sure Azure Accelerated Networking is enabled when you take this
measurement. Accelerated Networking has been the default setting for several
years. Nevertheless, check whether it's enabled and working.
When you find the two zones with the least network latency, deploy another three
VMs of the VM SKU that you want to use as the application layer VM across the
three Availability Zones. Measure the network latency against the two DBMS VMs
in the two DBMS zones that you selected.
Use niping as a measuring tool. This tool, from SAP, is described in SAP support
notes #500235 and #1100926 . Focus on the commands documented for
latency measurements. Because ping doesn't work through the Azure Accelerated
Networking code paths, we don't recommend that you use it.
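As a rough sketch of how such a measurement can look (the exact parameters and their semantics are documented in the SAP notes cited above; the hostname and values here are placeholders):

```shell
# On the target VM (for example, one of the DBMS VMs), start niping in
# server mode:
niping -s -I 0

# On the source VM (for example, a candidate application layer VM), run a
# latency-oriented test: small buffer (-B), many loops (-L), no delay (-D):
niping -c -H <target-vm-hostname> -B 10 -L 100 -D 0
```

Compare the reported average round-trip times for in-zone and cross-zone VM pairs.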
You don't need to perform these tests manually. You can find a PowerShell procedure
Availability Zone Latency Test that automates the latency tests described.
Based on your measurements and the availability of your VM SKUs in the Availability
Zones, you need to make some decisions:
In making these decisions, also take into account SAP's network latency
recommendations, as documented in SAP note #1100926 .
Important
The measurements and decisions you make are valid for the Azure subscription you
used when you took the measurements. If you use another Azure subscription, the
mapping of enumerated zones might be different. As a result, you need to repeat
the measurements, or find out the mapping of the new subscription relative to the
old subscription by using the Avzone-Mapping script.
Important
It's expected that the measurements described earlier will provide different results
in every Azure region that supports Availability Zones . Even if your network
latency requirements are the same, you might need to adopt different deployment
strategies in different Azure regions because the network latency between zones
can be different. In some Azure regions, the network latency among the three
different zones can be vastly different. In other regions, the network latency among
the three different zones might be more uniform. The claim that there's always a
network latency between 1 and 2 milliseconds isn't correct. The network latency
across Availability Zones in Azure regions can't be generalized.
Active/Active deployment
This deployment architecture is called active/active because you deploy your active SAP
application servers across two or three zones. The SAP Central Services instance that
uses enqueue replication is deployed between two zones. The same is true for the
DBMS layer, which is deployed across the same zones as SAP Central Services. When
considering this configuration, you need to find the two Availability Zones in your
region that offer cross-zone network latency that's acceptable for your workload and
your synchronous DBMS replication. You also want to be sure the delta between
network latency within the zones you selected and the cross-zone network latency isn't
too large.
It's in the nature of the SAP architecture that, unless you configure it differently,
users and batch jobs can be executed in the different application instances. The
side effect of this fact in an active/active deployment is that batch jobs might be
executed by any SAP application instance, regardless of whether that instance runs
in the same zone as the active DBMS. If the difference between cross-zone network
latency and the network latency within a zone is small, the difference in run times
of batch jobs might not be significant. However, the larger the cross-zone network
latency is compared to the latency within a zone, the more the run time of batch
jobs can be affected if a job is executed in a zone where the DBMS instance isn't
active. It's on you as a customer to decide what differences in run time are
acceptable, and with that, what the tolerable network latency for cross-zone traffic
is for your workload.
The region list provided doesn't relieve you, as a customer, of the need to test your
workload to decide whether an active/active deployment architecture is possible.
Azure regions where the active/active SAP deployment architecture across zones
might not be possible include:
Canada Central
France Central
Japan East
Though for your individual workload, it might work. Therefore, you should test
before you decide on an architecture. Azure is constantly working to improve the
quality and latency of its networks. Measurements conducted years back might not
reflect current conditions anymore.
Depending on what run time differences you're willing to tolerate, other regions
not listed could qualify as well.
A simplified schema of an active/active deployment across two zones could look like
this:
If you aren't using an Azure proximity placement group, you treat the Azure
Availability Zones as fault domains for all the VMs, because availability sets can't be
deployed in Azure Availability Zones.
If you want to combine zonal deployments for the DBMS layer and central services,
but want to use Azure availability sets for the application layer, you need to use
Azure proximity groups as described in the article Azure Proximity Placement
Groups for optimal network latency with SAP applications.
For the load balancers of the failover clusters of SAP Central Services and the
DBMS layer, you need to use the Standard SKU Azure Load Balancer. The Basic
Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with
its subnets, is stretched across zones. You don't need separate virtual networks and
subnets for each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks .
Unmanaged disks aren't supported for zonal deployments.
Azure Premium Storage, Ultra Disk storage, and ANF don't support any type of
storage replication across zones. For DBMS deployments, we rely on database
methods to replicate data across zones.
For SMB and NFS shares based on Azure Premium Files, zonal redundancy with
synchronous replication is offered. Check this document for availability of ZRS for
Azure Premium Files in the region you want to deploy into. The usage of zonally
replicated NFS and SMB shares is fully supported with SAP application layer
deployments and high availability failover clusters for NetWeaver or S/4HANA
central services. Documents that cover these cases are:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files
Premium SMB for SAP applications
The third zone is used to host the SBD device if you build a SUSE Linux Pacemaker
cluster and use SBD devices instead of the Azure Fencing Agent, or to host more
application instances.
To achieve run time consistency for critical business processes, you can try to direct
certain batch jobs and users to application instances that are in-zone with the
active DBMS instance by using SAP batch server groups, SAP logon groups, or RFC
groups. However, in a zonal failover process, you would need to manually move
these groups to instances running on VMs that are in-zone with the active DB VM.
You might want to deploy dormant dialog instances in each of the zones.
Important
In this active/active scenario, charges for cross-zone traffic apply. Check the
document Bandwidth Pricing Details. The data transfer between the SAP
application layer and the SAP DBMS layer is quite intensive. Therefore, the
active/active scenario can contribute to additional costs.
Active/Passive deployment
If you can't find an acceptable delta between the network latency within one zone and
the latency of cross-zone network traffic, you can deploy an architecture that has an
active/passive character from the SAP application layer point of view. You define an
active zone, which is the zone where you deploy the complete application layer and
where you attempt to run both the active DBMS and the SAP Central Services instance.
With such a configuration, you need to make sure you don't have extreme run time
variations in business transactions and batch jobs, depending on whether a job
runs in-zone with the active DBMS instance or not.
Azure regions where this type of deployment architecture across different zones could
be preferable are:
Canada Central
France Central
Japan East
Norway East
South Africa North
Note
We recommend that you use a configuration like this only in certain circumstances.
For example, you might use it when data can't leave the Azure region for security or
compliance reasons.
You're either assuming that there's a significant distance between the facilities
hosting the Availability Zones, or you're forced to stay within a certain Azure region.
Availability sets can't be deployed in Azure Availability Zones. To compensate for
that, you can use Azure proximity placement groups as documented in the article
Azure Proximity Placement Groups for optimal network latency with SAP
applications.
When you use this architecture, you need to monitor the status closely, and try to
keep the active DBMS and SAP Central Services instances in the same zone as your
deployed application layer. If there's a failover of SAP Central Services or the
DBMS instance, you want to make sure that you can manually fail back into the
zone with the SAP application layer deployed as quickly as possible.
You should have production application instances preinstalled in the VMs that run
the active QA application instances.
In a zonal failure case, shut down the QA application instances and start the
production instances instead. You need to use virtual names for the application
instances to make this work.
For the load balancers of the failover clusters of SAP Central Services and the
DBMS layer, you need to use the Standard SKU Azure Load Balancer. The Basic
Load Balancer won't work across zones.
The Azure virtual network that you deployed to host the SAP system, together with
its subnets, is stretched across zones. You don't need separate virtual networks for
each zone.
For all virtual machines you deploy, you need to use Azure Managed Disks .
Unmanaged disks aren't supported for zonal deployments.
Azure Premium Storage, Ultra Disk storage, and ANF don't support any type of
storage replication across zones. For DBMS deployments, we rely on database
methods to replicate data across zones.
For SMB and NFS shares based on Azure Premium Files, zonal redundancy with
synchronous replication is offered. Check this document for availability of ZRS for
Azure Premium Files in the region you want to deploy into. The usage of zonally
replicated NFS and SMB shares is fully supported with SAP application layer
deployments and high availability failover clusters for NetWeaver or S/4HANA
central services. Documents that cover these cases are:
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Azure Virtual Machines high availability for SAP NetWeaver on Red Hat
Enterprise Linux with Azure NetApp Files for SAP applications
High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files
Premium SMB for SAP applications
The third zone is used to host the SBD device if you build a SUSE Linux Pacemaker
cluster and use SBD devices instead of the Azure Fencing Agent.
Next steps
Here are some next steps for deploying across Azure Availability Zones:
Important
SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are
sensitive to network latency between the SAP application tier and the SAP database tier.
This sensitivity is the result of most of the business logic running in the application layer.
Because the SAP application layer runs the business logic, it issues queries to the
database tier at a high frequency, at a rate of thousands or tens of thousands per
second. In most cases, the nature of these queries is simple. They can often be run on
the database tier in 500 microseconds or less.
The time spent on the network to send such a query from the application tier to the
database tier and receive the result sent back has a major impact on the time it takes to
run business processes. This sensitivity to network latency is why you might want to
achieve certain minimum network latency in SAP deployment projects. See SAP Note
#1100926 - FAQ: Network performance for guidelines on how to classify the network
latency.
In many Azure regions, the number of datacenters has grown. At the same time,
customers, especially for high-end SAP systems, are using more specialized VM
families like the Mv2 or Mv3 family and newer. These Azure virtual machine types
aren't always available in each of the datacenters that make up an Azure region.
These facts can create opportunities to optimize network latency between the SAP
application layer and the SAP DBMS layer.
Azure provides different deployment options for SAP workloads. For the chosen
deployment type you have options to optimize network latency, if needed. Detailed
information about each option is thoroughly described in the following sections within
this article:
You can't assume that all Azure VM types are available in every Azure datacenter
or under every network spine. As a result, the combination of different VM types
within one proximity placement group can be severely restricted. These restrictions
occur because the host hardware that is needed to run a certain VM type might
not be present in the datacenter or under the network spine to which the proximity
placement group was assigned.
As you resize parts of the VMs that are within one proximity placement group, you
can't automatically assume that in all cases the new VM type is available in the
same datacenter or under the network spine the proximity placement group got
assigned to.
As Azure decommissions hardware, it might force certain VMs of a proximity
placement group into another Azure datacenter or another network spine. For
details covering this case, read the document Proximity placement groups.
Important
The scenarios where proximity placement groups can be used to optimize network
latency are:
You want to deploy the critical resources of your SAP workload across different
availability zones and on the other hand need VMs of the application tier to be
spread across different fault domains by using availability sets in each of the zones.
In this case, as later described in the document, proximity placement groups are
the glue needed.
You deploy the SAP workload with availability sets, where the SAP database tier,
the SAP application tier, and the ASCS/SCS VMs are grouped in three different
availability sets. In such a case, you want to make sure that the availability sets
aren't spread across the complete Azure region, because this could, depending on
the Azure region, result in network latency that could negatively impact the SAP
workload.
You use proximity placement groups to group VMs together to achieve the lowest
possible network latency between the services hosted in the VMs, for example,
when latency within an availability zone alone doesn't meet the application
requirements.
As for deployment scenario #2: in many regions, especially regions without
availability zones and most regions with availability zones, the network latency is
acceptable regardless of where the VMs land. Though there are some Azure
regions that can't provide a sufficiently good experience without collocating the
three different availability sets through proximity placement groups.
The first Azure VM is deployed under a network spine with many Azure compute
units and low network latency. Such a network spine often matches a single Azure
datacenter. You can think of the first virtual machine as a "scope VM" that is
deployed into a compute scale unit based on Azure allocation algorithms that are
eventually combined with deployment parameters.
All subsequent VMs deployed that reference the proximity placement group are
going to be deployed under the same network spine as the first virtual machine.
Note
If there's no host hardware deployed that could run a specific VM type under the
network spine where the first VM was placed, the deployment of the requested VM
type won’t succeed. You’ll get an allocation failure message that indicates that the
VM can't be supported within the perimeter of the proximity placement group.
To reduce this risk, we recommend using the intent option when you create the
proximity placement group. The intent option allows you to list the VM types that
you intend to include in the proximity placement group. This list of VM types is
used to find the best datacenter that hosts these VM types. If such a datacenter is
found, the PPG is created and is scoped to the datacenter that fulfills the VM SKU
requirements. If no such datacenter is found, the creation of the proximity
placement group fails. You can find more information in the documentation
PPG - Use intent to specify VM sizes. Be aware that actual capacity situations aren't
taken into account in the checks triggered by the intent option. As a result, there
still could be allocation errors rooted in insufficient available capacity.
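A hedged sketch of creating a proximity placement group with the intent option in Azure PowerShell. The resource names and VM sizes are placeholders, and you should verify the parameter names against your installed Az module version:

```powershell
# List the VM SKUs you intend to deploy into the group; Azure tries to pick
# a datacenter (network spine) that can host all of them. With -Zone, the
# group is additionally pinned to an availability zone.
New-AzProximityPlacementGroup -ResourceGroupName "SAP-PRD-RG" `
    -Name "SAP-PRD-PPG-Z1" -Location "westeurope" `
    -ProximityPlacementGroupType Standard `
    -IntentVMSizeList "Standard_M64s", "Standard_E16ds_v5" -Zone "1"
```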
A single Azure resource group can have multiple proximity placement groups assigned
to it. But a proximity placement group can be assigned to only one Azure resource
group.
For more information and deployment examples of proximity placement groups, see the
available documentation.
The risks of using proximity placement groups include:
You require a VM type that isn't available under the network spine that the
proximity placement group was assigned to.
Requests for nonmainstream VMs, like M-Series VMs, could eventually go
unfulfilled when you need to expand the number of VMs in a proximity
placement group over time.
Based on many improvements deployed by Microsoft into the Azure regions to
reduce network latency within an Azure availability zone, the deployment guidance
when using proximity placement groups for zonal deployments looks like this:
The difference from the recommendation given so far is that the database VMs in
the two zones are no longer part of the proximity placement groups. The proximity
placement groups per zone are now scoped with the deployment of the VM
running the SAP ASCS/SCS instances. This also means that, for regions where
availability zones consist of multiple datacenters, the ASCS/SCS instance and the
application tier could run under one network spine while the database VMs run
under another network spine. Though with the network improvements made, the
network latency between the SAP application tier and the DBMS tier should still be
sufficient for good performance and throughput. The advantage of this new
configuration is that you have more flexibility in resizing VMs or moving to new
VM types with either the DBMS layer or/and the application layer of the SAP
system.
For the special case of using Azure NetApp Files (ANF) for the DBMS environment and
the ANF related new functionality of Azure NetApp Files application volume group for
SAP HANA and its necessity for proximity placement groups, check the document NFS
v4.1 volumes on Azure NetApp Files for SAP HANA.
In this graphic, a single proximity placement group would be assigned to a single
SAP system. This PPG gets assigned to the three availability sets. The proximity
placement group is then scoped by deploying the first database tier VMs into the
DBMS availability set. This architecture recommendation collocates all VMs under
the same network spine. It introduces the restrictions mentioned earlier in this
article. Therefore, the proximity placement group architecture should be used
sparingly.
By using proximity placement groups, you can bypass this restriction. Here's the
deployment sequence:
Create a proximity placement group.
Deploy your anchor VM, which we recommend to be the ASCS/SCS VM, by
referencing an availability zone.
Create an availability set that references the Azure proximity placement group.
(See the command later in this article.)
Deploy the application layer VMs by referencing the availability set and the
proximity placement group.
Important
It's important to understand that disks of the application layer VMs are not
guaranteed to be allocated in the same availability zone as the VMs that are
directed using the proximity placement group. The result of the deployment
shown in the next steps may be that the VMs are allocated in the same network
spine, and with that the same availability zone, as the anchor VM. But the
respective disks (base VHD and mounted Azure block storage disks) may not be
allocated under the same network spine or even in the same availability zone.
Instead, the disks of those VMs can be allocated in any of the datacenters of the
specific region. Though the disks of the anchor VM that got deployed by defining a
zone are going to be deployed in the same zone as the VM.
Instead of deploying the first VM as demonstrated in the previous section, you reference
an availability zone and the proximity placement group when you deploy the VM:
Azure PowerShell
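A hedged sketch of such a deployment (resource names and the VM size are placeholders; the proximity placement group is assumed to already exist and is referenced by its resource ID, so verify the parameter names against your Az module version):

```powershell
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "SAP-PRD-RG" -Name "SAP-PRD-PPG-Z1"

# Deploying the anchor (ASCS/SCS) VM with both -Zone and the proximity
# placement group scopes the group to a network spine in that zone.
New-AzVM -ResourceGroupName "SAP-PRD-RG" -Name "sap-ascs-vm1" `
    -Location "westeurope" -Zone "1" `
    -ProximityPlacementGroupId $ppg.Id `
    -Size "Standard_E8ds_v5" -Credential (Get-Credential)
```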
A successful deployment of this virtual machine would host the ASCS/SCS instance of
the SAP system in one availability zone. In this case, the VM and the base VHD of the
VM and potentially mounted Azure block storage disks are allocated within the same
availability zone. The scope of the proximity placement group is fixed to one of the
network spines in the availability zone you defined.
In the next step, you need to create the availability sets you want to use for the
application layer of your SAP system.
Define and create the proximity placement group. The command for creating the
availability set requires an additional reference to the proximity placement group ID (not
the name). You can get the ID of the proximity placement group by using this command:
Azure PowerShell
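For example (hypothetical resource names):

```powershell
# Retrieve the proximity placement group; its Id property holds the full
# resource ID that the availability set creation expects.
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "SAP-PRD-RG" -Name "SAP-PRD-PPG-Z1"
$ppg.Id
```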
When you create the availability set, you need to consider additional parameters when
you're using managed disks (default unless specified otherwise) and proximity
placement groups:
Azure PowerShell
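A sketch of the availability set creation with hypothetical names; -Sku "Aligned" is the setting that makes the availability set compatible with managed disks, and the fault domain count must not exceed what the region supports:

```powershell
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "SAP-PRD-RG" -Name "SAP-PRD-PPG-Z1"

# The availability set references the proximity placement group by ID.
New-AzAvailabilitySet -ResourceGroupName "SAP-PRD-RG" -Name "sap-app-avset" `
    -Location "westeurope" `
    -ProximityPlacementGroupId $ppg.Id `
    -Sku "Aligned" `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 20
```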
Ideally, you should use three fault domains. But the number of supported fault domains
can vary from region to region. In this case, the maximum number of fault domains
possible for the specific regions is two. To deploy your application layer VMs, you need
to add a reference to your availability set name and the proximity placement group
name, as shown here:
Azure PowerShell
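A sketch of an application layer VM deployment referencing both constructs (hypothetical names; the proximity placement group is passed by resource ID here, so verify the parameter names against your Az module version):

```powershell
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "SAP-PRD-RG" -Name "SAP-PRD-PPG-Z1"

# Application layer VMs reference the availability set and the proximity
# placement group, landing them under the anchor VM's network spine.
New-AzVM -ResourceGroupName "SAP-PRD-RG" -Name "sap-app-vm1" `
    -Location "westeurope" `
    -AvailabilitySetName "sap-app-avset" `
    -ProximityPlacementGroupId $ppg.Id `
    -Size "Standard_E16ds_v5" -Credential (Get-Credential)
```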
Note
The disks of the VMs deployed into the availability set above are not forced to be
allocated in the same availability zone as the VMs. Though you achieved that the
application layer VMs are spread across different fault domains under the same
network spine as the anchor VM, the disks, though also allocated in different fault
domains, may be allocated in different locations on a region-wide scope.
A Central Services instance for your SAP system that's located in a specific
availability zone.
An SAP application layer that's located, through availability sets, in the same
network spine as the SAP Central Services (ASCS/SCS) VM or VMs.
Note
Because you deploy one DBMS and ASCS/SCS VM into one zone and the second
DBMS and ASCS/SCS VM into another zone to create a high availability
configuration, you'll need a different proximity placement group for each of the
zones. The same is true for any availability set that you use.
You can also use these commands for cases where you're getting allocation errors
because you can't move to a new VM type with an existing VM in the proximity
placement group.
You create a proximity placement group (PPG) in each of the two availability zones
you deployed your SAP system into. All the VMs of a particular zone are part of the
individual proximity placement group of that particular zone. You start in each
zone by deploying the DBMS VM to scope the PPG, and then deploy the ASCS VM
into the same zone and PPG. In a third step, you create an Azure availability set,
assign the availability set to the scoped PPG, and deploy the SAP application layer
into it. The advantage of this configuration was that all the components were
nicely aligned underneath the same network spine. The big disadvantage is that
your flexibility in resizing virtual machines can be limited.
Many improvements deployed by Microsoft into the Azure regions to reduce
network latency within an Azure availability zone are the basis for the current
deployment guidance for zonal deployments in this article.
To determine whether your HANA Large Instances unit is deployed in a Revision 4 stamp
or row, check the article Azure HANA Large Instances control through Azure portal. In
the attributes overview of your HANA Large Instances unit, you can also determine the
name of the proximity placement group because it was created when your HANA Large
Instances unit was deployed. The name that appears in the attributes overview is the
name of the proximity placement group that you should deploy your application layer
VMs into.
Compared to SAP systems that use only Azure virtual machines, when you use
HANA Large Instances, you have less flexibility in deciding how many Azure
resource groups to use. All the HANA Large Instances units of a HANA Large
Instances tenant are grouped in a single resource group, as described in this
article. Unless you deploy into different tenants to separate, for example,
production and non-production systems or other systems, all your HANA Large
Instances units will be deployed in one HANA Large Instances tenant. This tenant
has a one-to-one relationship with a resource group. But a separate proximity
placement group will be defined for each of the single units.
As a result, the relationships among Azure resource groups and proximity placement
groups for a single tenant will be as shown here:
Next steps
Check out the documentation:
The scope of this article is to describe configurations that enable outbound
connectivity to public endpoints. The configurations are mainly in the context of high
availability with Pacemaker for SUSE and RHEL.
If you're using Pacemaker with the Azure fence agent in your high availability solution,
the VMs must have outbound connectivity to the Azure management API. The
article presents several options so that you can select the option that's best suited for
your scenario.
Overview
When you implement high availability for SAP solutions via clustering, one of the
necessary components is Azure Load Balancer. Azure offers two load balancer SKUs:
standard and basic.
The standard Azure load balancer offers some advantages over the basic load balancer. For
instance, it works across Azure availability zones, it has better monitoring and logging
capabilities for easier troubleshooting, and it has reduced latency. The "HA ports" feature covers all
ports, so it's no longer necessary to list all individual ports.
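As an illustration of the HA ports feature, a single load-balancing rule with protocol All and port 0 can replace per-port rules. The following is a hedged Azure CLI sketch, not code from this article; all resource names (MyResourceGroup, MyInternalLB, MyFrontendIP, MyBackendPool) are assumptions.

```shell
# A single HA-ports rule (protocol All, ports 0) forwards traffic on all
# ports, so individual cluster ports don't need to be listed one by one.
az network lb rule create \
  --resource-group MyResourceGroup \
  --lb-name MyInternalLB \
  --name HAPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name MyFrontendIP \
  --backend-pool-name MyBackendPool \
  --idle-timeout 30 \
  --enable-floating-ip true
```

Floating IP and a long idle timeout are the settings commonly used for SAP cluster virtual IPs; adjust them to your scenario.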
There are some important differences between the basic and the standard SKU of Azure
load balancer. One of them is the handling of outbound traffic to public endpoints. For a
full basic versus standard SKU load balancer comparison, see Load Balancer SKU
comparison.
When VMs without public IP addresses are placed in the backend pool of an internal (no
public IP address) standard Azure load balancer, there's no outbound connectivity to
public endpoints unless additional configuration is done.
If your SAP deployment doesn't require outbound connectivity to public endpoints, you
don't need to implement the additional configuration. It's sufficient to create an internal
standard SKU Azure load balancer for your high availability scenario, assuming that
there's also no need for inbound connectivity from public endpoints.
Note
When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) standard Azure load balancer, there's no outbound
internet connectivity unless additional configuration is performed to allow routing
to public endpoints.
If the VMs either have public IP addresses or are already in the backend pool of an
Azure load balancer with a public IP address, they already have outbound
connectivity to public endpoints.
Important considerations
You can use one additional public load balancer for multiple VMs in the same
subnet to achieve outbound connectivity to public endpoints and optimize cost.
Use network security groups to control which public endpoints are accessible
from the VMs. You can assign the network security group either to the subnet or
to each VM. Where possible, use service tags to reduce the complexity of the
security rules.
An Azure standard load balancer with a public IP address and outbound rules allows
direct access to public endpoints. If you have corporate security requirements to
pass all outbound traffic via a centralized corporate solution for auditing and
logging, you might not be able to fulfill the requirement with this scenario.
Tip
Where possible, use service tags to reduce the complexity of the network security
group rules.
Deployment steps
1. Create Load Balancer
a. In the Azure portal , click All resources, Add, then search for Load Balancer
b. Click Create
c. Load Balancer Name MyPublicILB
d. Select Public as a Type, Standard as SKU
e. Select Create Public IP address and specify as a name MyPublicILBFrondEndIP
f. Select Zone Redundant as Availability zone
g. Click Review and Create, then click Create
Azure CLI
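The Azure CLI code for this step appears to have been dropped from the page. The following is a hedged sketch of the equivalent commands; the resource group name (MyResourceGroup) is an assumption, while the other resource names follow the portal steps above.

```shell
# Create a standard, zone-redundant public IP for the frontend
# (MyResourceGroup is an assumed resource group name).
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyPublicILBFrondEndIP \
  --sku Standard \
  --allocation-method Static

# Create the standard public load balancer using that frontend IP.
az network lb create \
  --resource-group MyResourceGroup \
  --name MyPublicILB \
  --sku Standard \
  --public-ip-address MyPublicILBFrondEndIP
```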
4. Create network security group rules to restrict access to specific public endpoints.
If there's an existing network security group, you can adjust it. The following example
shows how to enable access to the Azure management API:
a. Navigate to the Network Security Group
b. Click Outbound Security Rules
c. Add a rule to deny all outbound access to the internet.
d. Add a rule to allow access to AzureCloud, with a priority lower than the priority
of the rule that denies all internet access.
For more information on Azure Network security groups, see Security Groups .
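The NSG steps above can also be sketched with the Azure CLI. This is a hedged example; the resource names (MyResourceGroup, MyNsg) and the priority values are assumptions.

```shell
# Deny all outbound internet access. A higher priority number means the
# rule is evaluated later, so more specific allow rules can override it.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name DenyInternetOutbound \
  --direction Outbound \
  --access Deny \
  --priority 4000 \
  --destination-address-prefixes Internet \
  --destination-port-ranges '*' \
  --protocol '*'

# Allow outbound access to the Azure management API via the AzureCloud
# service tag, with a lower priority number so it's evaluated first.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowAzureCloudOutbound \
  --direction Outbound \
  --access Allow \
  --priority 3900 \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 443 \
  --protocol Tcp
```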
Tip
Where possible, use Service tags to reduce the complexity of the Azure Firewall
rules.
Deployment steps
1. The deployment steps assume that you already have a virtual network and subnet
defined for your VMs.
2. Create the subnet AzureFirewallSubnet in the same virtual network where the VMs
and the standard load balancer are deployed.
a. In the Azure portal, navigate to the virtual network: Click All Resources, search for
the virtual network, click on the virtual network, and select Subnets.
b. Click Add Subnet. Enter AzureFirewallSubnet as the name. Enter an appropriate
address range. Save.
4. Create an Azure Firewall rule to allow outbound connectivity to specified public
endpoints. The example shows how to allow access to the Azure management API
public endpoint.
a. Select Rules, Network Rule Collection, then click Add network rule collection.
b. Name: MyOutboundRule, enter Priority, Select Action Allow.
c. Service: Name ToAzureAPI. Protocol: Select Any. Source Address: enter the
range for your subnet, where the VMs and Standard Load Balancer are deployed
for instance: 11.97.0.0/24. Destination ports: enter *.
d. Save
e. As you are still positioned on the Azure Firewall, Select Overview. Note down
the Private IP Address of the Azure Firewall.
6. Create User Defined Route from the subnet of your VMs to the private IP of
MyAzureFirewall.
a. As you are positioned on the Route Table, click Routes. Select Add.
b. Route name: ToMyAzureFirewall, Address prefix: 0.0.0.0/0. Next hop type: Select
Virtual Appliance. Next hop address: enter the private IP address of the firewall
you configured: 11.97.1.4.
c. Save
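Steps 2 and 6 above can be sketched with the Azure CLI as follows. This is a hedged example; the names (MyResourceGroup, MyVnet, MyRouteTable, MyVmSubnet) are assumptions, and the address values follow the examples in the steps.

```shell
# Step 2: create the AzureFirewallSubnet in the existing virtual network
# (the address range is an example; adjust to your network).
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name AzureFirewallSubnet \
  --address-prefixes 11.97.1.0/24

# Step 6: create a route table and a default route that sends all
# outbound traffic to the firewall's private IP (11.97.1.4 above).
az network route-table create \
  --resource-group MyResourceGroup \
  --name MyRouteTable

az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name ToMyAzureFirewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 11.97.1.4

# Associate the route table with the subnet where the VMs are deployed.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MyVmSubnet \
  --route-table MyRouteTable
```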
Important considerations
If there's already a corporate proxy in place, you can route outbound calls to
public endpoints through it. Outbound calls to public endpoints then go through
the corporate control point.
Make sure the proxy configuration allows outbound connectivity to the Azure
management API: https://management.azure.com and
https://login.microsoftonline.com
Make sure there's a route from the VMs to the proxy.
The proxy handles only HTTP/HTTPS calls. If there's an additional need to make
outbound calls to public endpoints over different protocols (like RFC), an alternative
solution is needed.
The proxy solution must be highly available, to avoid instability in the Pacemaker
cluster.
Depending on the location of the proxy, it might introduce additional latency in the
calls from the Azure fence agent to the Azure management API. If your corporate
proxy is still on-premises while your Pacemaker cluster is in Azure, measure
latency and consider whether this solution is suitable for you.
If there isn't already a highly available corporate proxy in place, we don't
recommend this option, as it incurs extra cost and complexity. Nevertheless, if you
decide to deploy an additional proxy solution for the purpose of allowing outbound
connectivity from Pacemaker to the Azure management public API, make sure the
proxy is highly available and that the latency from the VMs to the proxy is low.
Console
sudo vi /etc/sysconfig/pacemaker
# Add the following lines
http_proxy=http://MyProxyService:MyProxyPort
https_proxy=http://MyProxyService:MyProxyPort
SUSE
Console
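The SUSE commands appear to be missing from this page. A sketch of the equivalent crmsh commands, mirroring the Red Hat block, would be:

```shell
# Place the cluster in maintenance mode
sudo crm configure property maintenance-mode=true
# Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo crm configure property maintenance-mode=false
```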
Red Hat
Console
# Place the cluster in maintenance mode
sudo pcs property set maintenance-mode=true
#Restart on all nodes
sudo systemctl restart pacemaker
# Take the cluster out of maintenance mode
sudo pcs property set maintenance-mode=false
Other options
If outbound traffic is routed via a third-party, URL-based firewall proxy:
If you're using the Azure fence agent, make sure the firewall configuration allows outbound
connectivity to the Azure management API: https://management.azure.com and
https://login.microsoftonline.com
If you're using SUSE's Azure public cloud update infrastructure for applying updates and
patches, see Azure Public Cloud Update Infrastructure 101
Next steps
Learn how to configure Pacemaker on SUSE in Azure
Learn how to configure Pacemaker on Red Hat in Azure
SAP HANA high availability for Azure
virtual machines
Article • 02/10/2023
You can use numerous Azure capabilities to deploy mission-critical databases like SAP
HANA on Azure VMs. This article provides guidance on how to achieve availability for
SAP HANA instances that are hosted in Azure VMs. The article describes several
scenarios that you can implement by using the Azure infrastructure to increase
availability of SAP HANA in Azure.
Prerequisites
This article assumes that you are familiar with infrastructure as a service (IaaS) basics in
Azure, including:
How to deploy virtual machines or virtual networks via the Azure portal or
PowerShell.
Using the Azure cross-platform command-line interface (Azure CLI), including the
option to use JavaScript Object Notation (JSON) templates.
This article also assumes that you're familiar with installing SAP HANA instances, and
with administering and operating SAP HANA instances. It's especially important to be
familiar with the setup and operations of HANA system replication. This includes tasks
like backup and restore for SAP HANA databases.
It's also a good idea to be familiar with these articles about SAP HANA:
SLA for Virtual Machines describes three different SLAs, for three different
configurations:
A single VM that uses Azure premium SSDs for the OS disk and all data disks. This
option provides a monthly uptime of 99.9 percent.
Multiple (at least two) VMs that are organized in an Azure availability set. This
option provides a monthly uptime of 99.95 percent.
Multiple (at least two) VMs that are deployed across Availability Zones. This option
provides a monthly uptime of 99.99 percent.
Measure your availability requirement against the SLAs that Azure components can
provide. Then, choose your scenarios for SAP HANA to achieve your required level of
availability.
Next steps
Learn about SAP HANA availability within one Azure region.
Learn about SAP HANA availability across Azure regions.
SAP HANA availability within one Azure
region
Article • 06/20/2023
This article describes several availability scenarios for SAP HANA within one Azure
region. Azure has many regions, spread throughout the world. For the list of Azure
regions, see Azure regions . For deploying SAP HANA on VMs within one Azure region,
Microsoft offers deployment of a single VM with a HANA instance. For increased
availability, you can deploy two VMs with two HANA instances using either a flexible
scale set with FD=1, availability zones or an availability set that uses HANA system
replication for availability.
Azure regions that provide Availability Zones consist of multiple data centers, each with
its own power source, cooling, and network infrastructure. The purpose of offering
different zones within a single Azure region is to enable the deployment of applications
across two or three available Availability Zones. By distributing your application
deployment across zones, any power or networking issues affecting a specific Azure
Availability Zone infrastructure wouldn't fully disrupt your application's functionality
within the Azure region. While there might be some reduced capacity, such as the
potential loss of VMs in one zone, the VMs in the remaining zones would continue to
operate without interruption. To set up two HANA instances in separate VMs spanning
across different zones, you have the option to deploy VMs using either the flexible scale
set with FD=1 or availability zones deployment option.
For increased availability within a region, it's advised to deploy two VMs with two HANA
instances using an availability set. An Azure Availability Set is a logical grouping
capability that ensures that the VM resources configured within Availability Set are
failure-isolated from each other when they're deployed within an Azure datacenter.
Azure ensures that the VMs you place within an Availability Set run across multiple
physical servers, compute racks, storage units, and network switches. In some Azure
documentation, this configuration is referred to as placements in different update and
fault domains. These placements usually are within an Azure datacenter. So if a power
source or network issue affects the datacenter that you're deployed into, all
your capacity in that Azure region could be affected.
Single-VM scenario
In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use
Azure Premium Storage to host the operating system disk and all your data disks. The
Azure uptime SLA of 99.9 percent and the SLAs of other Azure components are sufficient
for you to fulfill your availability SLAs for your customers. In this scenario, you have no
need to use an Azure Availability Set for VMs that run the DBMS layer. In this scenario,
you rely on two different features:
Azure VM auto restart, or service healing, is a functionality in Azure that works on two
levels:
The Azure server host checks the health of a VM that's hosted on the server host.
The Azure fabric controller monitors the health and availability of the server host.
A health check functionality monitors the health of every VM that's hosted on an Azure
server host. If a VM falls into a non-healthy state, a reboot of the VM can be initiated by
the Azure host agent that checks the health of the VM. The fabric controller checks the
health of the host by checking many different parameters that might indicate issues with
the host hardware. It also checks on the accessibility of the host via the network. An
indication of problems with the host can lead to the following events:
If the host signals a bad health state, a reboot of the host and a restart of the VMs
that were running on the host is triggered.
If the host isn't in a healthy state after successful reboot, a redeployment of the
VMs that were originally on the now unhealthy node onto a healthy host server is
initiated. In this case, the original host is marked as not healthy. It won't be used
for further deployments until it's cleared or replaced.
If the unhealthy host has problems during the reboot process, an immediate
restart of the VMs on a healthy host is triggered.
With the host and VM monitoring provided by Azure, Azure VMs that experience host
issues are automatically restarted on a healthy Azure host.
Important
Azure service healing won't restart Linux VMs where the guest OS is in a kernel
panic state. The default settings of the commonly used Linux releases don't
automatically restart VMs or servers where the Linux kernel is in a panic state.
Instead, the default is to keep the OS in the kernel panic state so that a kernel
debugger can be attached for analysis. Azure honors that behavior by not
automatically restarting a VM with the guest OS in such a state. The assumption is that
such occurrences are extremely rare. You can overwrite the default behavior to
enable a restart of the VM. To change the default behavior, set the parameter
kernel.panic in /etc/sysctl.conf. The value you set for this parameter is the number of
seconds to wait before triggering the reboot. Common recommended values are 20-30
seconds. For more information, see sysctl.conf.
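As a minimal sketch of the setting described above (writing to a local example file here for illustration; on a real system you would edit /etc/sysctl.conf itself and apply it with `sysctl -p`):

```shell
# kernel.panic sets the number of seconds the kernel waits after a panic
# before rebooting; 0 (a common default) means wait forever.
conf=./sysctl.conf.example
echo "kernel.panic = 20" > "$conf"
cat "$conf"
```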
The second feature that you rely on in this scenario is the fact that the HANA service
that runs in a restarted VM starts automatically after the VM reboots. You can set up
HANA service auto restart through the watchdog services of the various HANA
services.
You might improve this single-VM scenario by adding a cold failover node to an SAP
HANA configuration. In the SAP HANA documentation, this setup is called host
autofailover . This configuration might make sense in an on-premises deployment
situation where the server hardware is limited, and you dedicate a single-server node as
the host autofailover node for a set of production hosts. But in Azure, where the
underlying infrastructure of Azure provides a healthy target server for a successful VM
restart, it doesn't make sense to deploy SAP HANA host autofailover. Because of Azure
service healing, there's no reference architecture that foresees a standby node for HANA
host autofailover.
To illustrate the different SAP HANA availability scenarios, a few of the layers in the
diagram are omitted. The diagram shows only layers that depict VMs, hosts, Availability
Sets, and Azure regions. Azure Virtual Network instances, resource groups, and
subscriptions don't play a role in the scenarios described in this section.
This setup isn't well suited to achieving great Recovery Point Objective (RPO) and
Recovery Time Objective (RTO) times. RTO times especially would suffer due to the need
to fully restore the complete database by using the copied backups. However, this setup
is useful for recovering from unintended data deletion on the main instances. With this
setup, at any time, you can restore to a certain point in time, extract the data, and
import the deleted data into your main instance. Hence, it might make sense to use a
backup copy method in combination with other high-availability functionality.
While backups are being copied, you might be able to use a smaller VM than the main
VM that the SAP HANA instance is running on. Keep in mind that you can attach a
smaller number of VHDs to smaller VMs. For information about the limits of individual
VM types, see Sizes for Linux virtual machines in Azure.
Run another SAP HANA instance in the second VM. The SAP HANA instance in the
second VM takes most of the memory of the virtual machine. In case of a failover to
the second VM, you need to shut down the running SAP HANA instance that has
the data fully loaded in the second VM, so that the replicated data can be loaded
into the cache of the targeted HANA instance in the second VM.
Use a smaller VM size on the second VM. If a failover occurs, you have an
additional step before the manual failover. In this step, you resize the VM to the
size of the source VM.
Note
Even if you don't use data preload in the HANA system replication target, you need
at least 64 GB of memory. You also need enough memory in addition to 64 GB to
keep the rowstore data in the memory of the target instance.
SAP HANA system replication without auto failover and with data
preload
In this scenario, data that's replicated to the HANA instance in the second VM is
preloaded. This eliminates the two advantages of not preloading data. In this case, you
can't run another SAP HANA system on the second VM. You also can't use a smaller VM
size. Hence, customers rarely implement this scenario.
From an SAP HANA perspective, the replication mode that's used is synchronous and an
automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot
standby node. The standby node receives a synchronous stream of change records from
the primary SAP HANA instance. As transactions are committed by the application at the
HANA primary node, the primary HANA node waits to confirm the commit to the
application until the secondary SAP HANA node confirms that it received the commit
record. SAP HANA offers two synchronous replication modes. For details and for a
description of differences between these two synchronous replication modes, see the
SAP article Replication modes for SAP HANA system replication .
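For reference, the replication and operation modes are chosen when the secondary instance is registered. The following is a hedged sketch using SAP's hdbnsutil tool; the host name, instance number, and site name are assumptions for illustration only.

```shell
# Run as <sid>adm on the secondary node. replicationMode can be sync or
# syncmem (the two synchronous modes described above) or async.
hdbnsutil -sr_register \
  --remoteHost=hana-primary \
  --remoteInstance=03 \
  --replicationMode=sync \
  --operationMode=logreplay \
  --name=SITE2
```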
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
For more information about SAP HANA availability across Azure regions, see:
This article describes scenarios related to SAP HANA availability across different Azure
regions. Because of the distance between Azure regions, setting up SAP HANA
availability in multiple Azure regions involves special considerations.
On the other hand, organizations often have a distance requirement between the
location of the primary datacenter and a secondary datacenter. A distance requirement
helps provide availability if a natural disaster occurs in a wider geographic location.
Examples include the hurricanes that hit the Caribbean and Florida in September and
October 2017. Your organization might have at least a minimum distance requirement.
For most Azure customers, a minimum distance definition requires you to design for
availability across Azure regions . Because the distance between two Azure regions is
too large to use the HANA synchronous replication mode, RTO and RPO requirements
might force you to deploy availability configurations in one region, and then
supplement with additional deployments in a second region.
Another aspect to consider in this scenario is failover and client redirect. The assumption
is that a failover between SAP HANA instances in two different Azure regions always is a
manual failover. Because the replication mode of SAP HANA system replication is set to
asynchronous, there's a potential that data committed in the primary HANA instance
hasn't yet made it to the secondary HANA instance. Therefore, automatic failover isn't
an option for configurations where the replication is asynchronous. Even with manually
controlled failover, as in a failover exercise, you need to take measures to ensure that all
the committed data on the primary side made it to the secondary instance before you
manually move over to the other Azure region.
The Azure virtual network deployed in the second Azure region uses a different IP
address range. So, you either need to change the SAP HANA client
configuration, or preferably, you need to create steps to change the name resolution.
This way, the clients are redirected to the new secondary site's server IP address. For
more information, see the SAP article Client connection recovery after takeover.
If you're using the scenario of sharing the DR target with a QA system in one VM, you
need to take these considerations into account:
Two operation modes, delta_datashipping and logreplay, are available for such a
scenario.
Both operation modes have different memory requirements without preloading
data.
delta_datashipping might require drastically less memory without the preload
option than logreplay does. See chapter 4.3 of the SAP document How To
Perform System Replication for SAP HANA.
The memory requirement of the logreplay operation mode without preload isn't
deterministic and depends on the columnstore structures loaded. In extreme cases,
you might require 50 percent of the memory of the primary instance. The memory for the
logreplay operation mode is independent of whether you chose to have the data
preloaded or not.
Note
In this configuration, you can't provide an RPO=0 because your HANA system
replication mode is asynchronous. If you need to provide an RPO=0, this
configuration isn't the configuration of choice.
A small change that you can make in the configuration is to configure data
preloading. However, given the manual nature of failover and the fact that application
layers also need to move to the second region, it might not make sense to preload data.
In these cases, you can set up what SAP calls an SAP HANA multi-tier system replication
configuration by using HANA system replication. The architecture would look like:
SAP introduced multi-target system replication with HANA 2.0 SPS3. Multi-target
system replication brings some advantages in update scenarios. For example, the DR site
(Region 2) isn't impacted when the secondary HA site is down for maintenance or
updates. You can find out more about HANA multi-target system replication at the SAP
Help Portal . Possible architecture with multi-target replication would look like:
If the organization has requirements for high availability readiness in the second (DR)
Azure region, then the architecture would look like:
Using logreplay as operation mode, this configuration provides an RPO=0, with low
RTO, within the primary region. The configuration also provides decent RPO if a move to
the second region is involved. The RTO times in the second region are dependent on
whether data is preloaded. Many customers use the VM in the secondary region to run a
test system. In that use case, the data can't be preloaded.
Important
The operation modes between the different tiers need to be homogeneous. You
can't use logreplay as the operation mode between tier 1 and tier 2 and
delta_datashipping to supply tier 3. You can choose only one operation mode,
and it needs to be consistent for all tiers. Because delta_datashipping isn't
suitable to give you an RPO=0, the only reasonable operation mode for such a
multi-tier configuration remains logreplay. For details about operation modes and
some restrictions, see the SAP article Operation modes for SAP HANA system
replication.
Next steps
For step-by-step guidance on setting up these configurations in Azure, see:
Many organizations running critical business applications on Azure set up both a High
Availability (HA) and a Disaster Recovery (DR) strategy. The purpose of high availability is
to increase the SLA of business systems by eliminating single points of failure in the
underlying system infrastructure. High Availability technologies reduce the effect of
unplanned infrastructure failure and help with planned maintenance. Disaster Recovery
is defined as policies, tools, and procedures to enable the recovery or continuation of
vital technology infrastructure and systems following a geographically widespread
natural or human-induced disaster.
To achieve high availability for an SAP workload on Azure, virtual machines are typically
deployed in an availability set, in availability zones, or in a flexible scale set to protect
applications from infrastructure maintenance or failure within a region. But the
deployment doesn't protect applications from a widespread disaster within the region. So to
protect applications from a regional disaster, a disaster recovery strategy for the
applications should be in place. Disaster recovery is a documented and structured
approach that is designed to assist an organization in executing the recovery processes
in response to a disaster, and to protect or minimize IT service disruption and promote
recovery.
This document provides details on protecting SAP workloads from a large-scale
catastrophe by implementing a structured DR approach. The details in this document are
presented at an abstract level, based on different Azure services and SAP components.
The exact DR strategy and the order of recovery for your SAP workload must be tested,
documented, and fine-tuned regularly. Also, the document focuses on the Azure-to-
Azure DR strategy for SAP workloads.
For DR on Azure, organizations should consider different scenarios that might trigger
failover.
To achieve the recovery goal for different scenarios, organizations must define the Recovery
Time Objective (RTO) and Recovery Point Objective (RPO) for their workload based on
the business requirements. RTO describes the amount of time an application can be down,
typically measured in hours, minutes, or seconds. RPO describes the amount of
transactional data that the business can accept losing for normal operations
to resume. Identifying the RTO and RPO of your business is crucial, as it helps you
design your DR strategy optimally. The components (compute, storage, database, and so on)
involved in an SAP workload are replicated to the DR region using different techniques
(Azure native services, native DB replication technology, custom scripts). Each technique
provides a different RPO, which must be accounted for when designing a DR strategy. On
Azure, you can use some Azure native services, like Azure Site Recovery and Azure
Backup, to help meet the RTO and RPO of your SAP workloads. Refer to the SLAs of
Azure Site Recovery and Azure Backup to align optimally with your RTO and RPO.
Customers who want to mimic their on-premises metro DR strategy on Azure can
use availability zones for disaster recovery. But a zone-to-zone DR strategy might fall
short of the resilience requirement if there's a geographically widespread natural
disaster.
On Azure, each region is paired with another region within the same geography
(except for Brazil South). This approach allows for platform-provided replication of
resources across regions. The benefits of choosing a paired region can be found in the
region pairs document. If an organization chooses to use Azure paired regions,
several additional points for an SAP workload need to be considered:
The Azure services and features in paired Azure regions might not be
symmetrical. For example, Azure NetApp Files or VM SKUs like M-series that are
available in the primary region might not be available in the paired region. To check whether an
Azure product or service is available in a region, see Azure Products by
Region.
The GRS option is available for storage accounts with the standard storage type, which
replicates data to the paired region. But standard storage isn't suitable for SAP
DBMS or virtual data disks.
The Azure backup service used to back up supported solutions can replicate
backups only between paired regions. For all your other data, run your own
replications with native DBMS features like SQL Server Always On, SAP HANA
System Replication, and other services. Use a combination of Azure Site
Recovery, rsync or robocopy, and other third-party software for the SAP
application layer.
The following reference architecture shows a typical SAP NetWeaver system running on
Azure with high availability in the primary region. The secondary site shown
below is the disaster recovery site where the SAP systems will be restored after a
disaster event. Both the primary and disaster recovery regions are part of the same
subscription. To achieve DR for an SAP workload, you need to identify a recovery strategy for
each SAP layer, along with the different Azure services that the application uses.
Organizations should plan and design a DR strategy for their entire IT landscape. Usually,
SAP systems running in a production environment are integrated with different services
and interfaces like Active Directory, DNS, and third-party applications. So you must
include the non-SAP systems and other services in your disaster recovery planning as
well. This document focuses on the recovery planning for SAP applications. But you can
expand the size and scope of the DR planning for dependent components to fit your
requirements.
Infrastructure components of DR solution for
SAP workload
An SAP workload running on Azure uses different infrastructure components to run a
business solution. To plan DR for such a solution, it's essential that all infrastructure
components configured in the primary region are available and can be configured in
the DR region as well. The following infrastructure components should be factored in when
designing a DR solution for an SAP workload on Azure:
Network
Compute
Storage
Network
ExpressRoute extends your on-premises network into the Microsoft cloud over a
private connection with the help of a connectivity provider. When designing a disaster
recovery architecture, you must account for building robust backend network
connectivity by using a geo-redundant ExpressRoute circuit. It's advised to set up at
least one ExpressRoute circuit from on-premises to the primary region, and the
other should connect to the disaster recovery region. Refer to the Designing for
disaster recovery with ExpressRoute article, which describes different
scenarios to design disaster recovery for ExpressRoute.
Note
Virtual networks and subnets span all availability zones in a region. For DR across
two regions, you need to configure separate virtual networks and subnets in the
disaster recovery region. Refer to About networking in Azure VM disaster recovery
to learn more about the networking setup in the DR region.
Azure Standard Load Balancer provides networking elements for the high-
availability design of your SAP systems. For clustered systems, Standard Load
Balancer provides the virtual IP address for the cluster service, like ASCS/SCS
instances and databases running on VMs. To run a highly available SAP system on
the DR site, a separate load balancer must be created, and the cluster configuration
must be adjusted accordingly.
Azure Application Gateway is a web traffic load balancer. With its Web Application
Firewall functionality, it's a well-suited service for exposing web applications to the
internet with improved security. Azure Application Gateway can serve public
(internet) clients, private clients, or both, depending on the configuration. To accept
similar incoming HTTPS traffic in the DR region after failover, a separate Azure
Application Gateway must be configured in the DR region.
Virtual machines
On Azure, different components of a single SAP system run on virtual machines
with different SKU types. For DR, an application (SAP NetWeaver or non-SAP)
running on Azure VMs can be protected by replicating its components with Azure
Site Recovery to another Azure region or zone. With Azure Site Recovery, Azure
VMs are replicated continuously from the primary to the disaster recovery site.
Depending on the selected Azure DR region, the VM SKU type might not be
available on the DR site. You need to make sure that the required VM SKU types
are available in the Azure DR region as well. Check Azure Products by Region to
see whether the required VM family SKU type is available.
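As a sketch of that check, the following snippet filters sample output shaped like `az vm list-skus -o json` down to the SKUs that are usable in the DR region. The sample data, SKU names, and field handling are illustrative assumptions, so verify against the CLI output of your own subscription:

```python
import json

def available_vm_skus(list_skus_json, region):
    """Return VM SKU names from `az vm list-skus` output that are present in
    the given region and not restricted for the subscription."""
    usable = []
    for sku in json.loads(list_skus_json):
        if sku.get("resourceType") != "virtualMachines":
            continue
        if region.lower() not in [loc.lower() for loc in sku.get("locations", [])]:
            continue
        # Restricted SKUs carry entries in `restrictions`, for example with
        # reasonCode "NotAvailableForSubscription".
        if sku.get("restrictions"):
            continue
        usable.append(sku["name"])
    return usable

# Hypothetical sample in the shape of `az vm list-skus --location westeurope -o json`
sample = json.dumps([
    {"resourceType": "virtualMachines", "name": "Standard_M128s",
     "locations": ["westeurope"], "restrictions": []},
    {"resourceType": "virtualMachines", "name": "Standard_M208ms_v2",
     "locations": ["westeurope"],
     "restrictions": [{"reasonCode": "NotAvailableForSubscription"}]},
])

print(available_vm_skus(sample, "westeurope"))  # → ['Standard_M128s']
```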
Important
If an SAP system is configured with a flexible scale set with FD=1, you need to
use PowerShell to set up Azure Site Recovery for disaster recovery. Currently,
it's the only method available to configure disaster recovery for VMs deployed
in a scale set.
For databases running on Azure virtual machines, it's recommended to use the
native database replication technology to synchronize data to the disaster recovery
site. The large VMs on which the databases run might not be available in all
regions. If you're using availability zones for disaster recovery, check that the
respective VM SKUs are available in the zone of your disaster recovery site.
Note
With production applications running in the primary region at all times, reserved
instances are typically used to reduce Azure costs. If using reserved instances, you
need to sign up for a one-year or three-year term commitment, which might not
be cost effective for a DR site. Also, setting up Azure Site Recovery doesn't
guarantee capacity of the required VM SKU during your failover. To make sure
that the VM SKU capacity is available, consider enabling on-demand capacity
reservation. It reserves compute capacity in an Azure region or availability zone
for any duration of time, without commitment. Azure Site Recovery is integrated
with on-demand capacity reservation. With this integration, you can use capacity
reservation together with Azure Site Recovery to reserve compute capacity in the
DR site and guarantee your failovers. For more information, read the on-demand
capacity reservation limitations and restrictions.
An Azure subscription has quotas for VM families (for example, the Mv2 family)
and other resources. Sometimes organizations want to use a different Azure
subscription for DR. Each subscription (primary and DR) might have different
quotas assigned for each VM family. Make sure that the subscription used for the
DR site has enough compute quota available.
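A quota check like the one described can be scripted against the output of `az vm list-usage`. The following sketch uses hypothetical sample data; the field names follow the CLI's JSON output, but verify them against your own subscription's report:

```python
import json

def quota_headroom(usage_json, family, needed_vcpus):
    """Return (enough, free) for a VM family, given JSON in the shape of
    `az vm list-usage --location <region> -o json`."""
    for item in json.loads(usage_json):
        if item["name"]["value"] == family:
            free = item["limit"] - item["currentValue"]
            return free >= needed_vcpus, free
    raise KeyError(f"family {family!r} not found in usage report")

# Hypothetical usage report for the DR subscription
sample = json.dumps([
    {"name": {"value": "standardMSFamily"}, "currentValue": 32, "limit": 128},
])

enough, free = quota_headroom(sample, "standardMSFamily", 64)
print(enough, free)  # → True 96
```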
Storage
When you enable Azure Site Recovery for a VM to set up DR, the local managed
disks attached to the VM are replicated to the DR region. During replication, the
VM disk writes are sent to a cache storage account in the source region. Data is
sent from there to the target region, and recovery points are generated from the
data. When you fail over a VM during DR, a recovery point is used to restore the
VM in the target region. But Azure Site Recovery doesn't support all storage types
that are available in Azure. For more information, see the Azure Site Recovery
support matrix for storage.
For an SAP system running on Windows with Azure shared disk, you can use Azure
Site Recovery with Azure shared disk (preview). Because the feature is in public
preview, we don't recommend implementing this scenario for the most critical SAP
production workloads. For more information on supported scenarios for Azure
shared disk, see Support matrix for shared disks in Azure VM disaster recovery
(preview).
In addition to the Azure managed data disks attached to VMs, different Azure
native storage solutions are used to run SAP applications on Azure. The DR
approach for each Azure storage solution might differ, because not all storage
services available in Azure are supported with Azure Site Recovery. The following
storage types are typically used for SAP workloads, listed with the matching DR
approach:
NFS on Azure Files (LRS or ZRS): custom script to replicate data between the two
sites (for example, rsync)
NFS on Azure NetApp Files: cross-region replication of Azure NetApp Files volumes
Azure shared disk (LRS or ZRS): Azure Site Recovery with Azure shared disk (in
preview)
SMB on Azure Files (LRS or ZRS): RoboCopy to copy files between the two sites
SMB on Azure NetApp Files: cross-region replication of Azure NetApp Files volumes
For custom-built storage solutions like an NFS cluster, you need to make sure an
appropriate DR strategy is in place.
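For the custom-script approach listed above for NFS shares, a real setup would typically schedule rsync between the two sites. As a simplified, self-contained stand-in, the following sketch mirrors new or changed files one way by comparing size and modification time:

```python
import os
import shutil
import tempfile

def mirror(src, dst):
    """One-way mirror of a directory tree: copy files that are new or changed
    (by size or modification time). A simplified stand-in for rsync."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s, d = os.path.join(root, name), os.path.join(target_dir, name)
            s_stat = os.stat(s)
            if (not os.path.exists(d)
                    or os.stat(d).st_size != s_stat.st_size
                    or os.stat(d).st_mtime < s_stat.st_mtime):
                shutil.copy2(s, d)  # copy2 preserves mtime, like rsync -t
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied

# Demo with temporary directories; in a real DR setup, src and dst would be
# the mount points of the primary and DR NFS shares.
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, "trans.log"), "w") as f:
    f.write("transport data")
print(mirror(src, dst))  # → ['trans.log']
print(mirror(src, dst))  # → [] (unchanged files are skipped)
```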
Some native Azure storage services (like Azure Files and Azure NetApp Files) might
not be available in all regions. To have a similar SAP setup in the DR region after
failover, ensure that the respective storage service is offered at the DR site. For
more information, check Azure Products by Region.
If you're using zone-redundant storage (ZRS) for Azure Files or Azure shared disk
in your primary region, and you want to maintain the same ZRS redundancy option
in the DR region as well, refer to the Azure Files zone-redundant storage (ZRS)
support for premium file shares and ZRS for managed disks documents for ZRS
support in Azure regions.
If using availability zones for disaster recovery, keep in mind the following points:
Azure NetApp Files isn't zone aware yet. Currently, Azure NetApp Files isn't
deployed in all availability zones in an Azure region, so the service might not
be available in the availability zone chosen for your DR strategy.
Cross-region replication of Azure NetApp Files volumes is only available in fixed
region pairs, not across zones.
If you configure your storage with Active Directory integration, a similar setup
should be done on the DR site storage account as well.
Next steps
Disaster Recovery Guidelines for SAP workload
Azure to Azure disaster recovery architecture using Azure Site Recovery service
Disaster recovery guidelines for SAP
application
Article • 05/08/2024
To configure disaster recovery (DR) for an SAP workload on Azure, you need to test,
fine-tune, and update the process regularly. Testing disaster recovery helps in
identifying the sequence of dependent services that are required before you can
trigger an SAP workload DR failover or start the system on the secondary site.
Organizations usually have their SAP systems connected to Active Directory (AD) and
Domain Name System (DNS) services to function correctly. When you set up DR for
your SAP workload, make sure AD and DNS services are functioning before you
recover SAP and other non-SAP systems, so that the application functions correctly.
For guidance on protecting Active Directory and DNS, learn how to protect Active
Directory and DNS. The DR recommendations for SAP applications described in this
document are at an abstract level. You need to design your DR strategy based on
your specific setup and document the end-to-end scenario.
For SAP systems running on virtual machines, you can use Azure Site Recovery to
create a disaster recovery plan. The following is the recommended disaster recovery
approach for each component of an SAP system. Standalone non-NetWeaver SAP
engines, such as TREX, and non-SAP applications aren't covered in this document.
Components and recommendations:
Shared storage: replicate content using the appropriate method for each storage type
To achieve DR for a highly available SAP Web Dispatcher setup in the primary region,
you can use Azure Site Recovery. For parallel Web Dispatchers (option 2) running in
the primary region, you can configure Azure Site Recovery to achieve DR. But for an
SAP Web Dispatcher configured using option 1 in the primary region, you need to
make some additional changes after failover to have a similar HA setup in the DR
region. Because SAP Web Dispatcher high availability with a cluster solution is
configured in a similar manner to SAP Central Services, follow the same guidelines as
mentioned for SAP Central Services.
Configuring high availability for SAP Central Services protects resources and processes
from local incidents. To achieve DR for SAP Central Services, you can use Azure Site
Recovery. Azure Site Recovery replicates VMs and the attached managed disks, but
there are additional considerations for the DR strategy. Check the following section for
more information, based on the operating system used for SAP central services.
Windows
For an SAP system, redundancy of the SPOF components in the primary region is
achieved by configuring high availability. To achieve a similar high-availability setup
in the disaster recovery region after failover, you need to consider additional points
like cluster reconfiguration and SAP shared directory availability, alongside
replicating the VMs and attached managed disks to the DR site using Azure Site
Recovery. On Windows, high availability of the SAP application can be achieved
using Windows Server Failover Clustering (WSFC). The following diagram shows the
different components involved in configuring high availability of SAP Central
Services with WSFC. Each component must be evaluated to achieve a similar high-
availability setup in the DR site. If you configure SAP Web Dispatcher using WSFC,
similar considerations apply as well.
Load balancer
Azure Site Recovery replicates VMs to the DR site, but it doesn't replicate the Azure
load balancer. You need to create a separate internal load balancer on the DR site
beforehand or after failover. If you create the internal load balancer beforehand,
create an empty backend pool and add VMs after the failover event.
If you configure a cluster with a cloud witness as its quorum mechanism, you need
to create a separate storage account in the DR region. In the event of failover, the
quorum setting must be updated with the new storage account name and access
keys.
If there's a failover, SAP ASCS/ERS VMs configured with WSFC don't work out of
the box. Additional reconfiguration is required to start the SAP system in the DR
region. Based on the type of your deployment (file share or shared disk), refer to
the following blog to learn more about the additional steps to be performed in the
DR region.
On Windows, the high-availability configuration of SAP Central Services (ASCS and
ERS) is set up with either a file share or a shared disk. Depending on the type of
cluster disk, you need to implement a suitable method to replicate the data on that
disk type to the DR region. The replication methodology for each cluster disk type
is presented at an abstract level. You need to confirm the exact steps to replicate
the storage and perform testing.
Azure shared disk: Azure Site Recovery with shared disks (preview)
Note
Azure Site Recovery with shared disk is currently in public preview, so we don't
recommend implementing this scenario for the most critical SAP production
workloads.
SAP Application Servers
In the primary region, redundancy of the SAP application servers is achieved by
installing instances in multiple VMs. To have DR for the SAP application servers, Azure
Site Recovery can be set up for each application server VM. For shared storage
(transport filesystem, interface data filesystem) that is attached to the application
servers, follow the appropriate DR practice based on the type of shared storage.
Database DR recommendation
For a cost-optimized solution, you can also use the backup and restore option as a
database DR strategy.
Note
*Azure Backup supports Oracle databases using Azure VM backup for database-
consistent snapshots.
Azure Backup doesn't support all Azure storage types and databases that are used
for SAP workloads.
Azure Backup stores backups in a Recovery Services vault, which replicates your data
based on the chosen replication type (LRS, ZRS, or GRS). For geo-redundant storage
(GRS), your backup data is replicated to a paired secondary region. With the cross-
region restore feature enabled, you can restore data of the supported management
types in the secondary region.
Backup and restore is the more traditional, cost-optimized approach, but it comes
with the trade-off of a higher RTO, because you need to restore all the applications
from backup if there's a failover to the DR region. Analyze your business needs and
design your DR strategy accordingly.
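The RTO trade-off can be made concrete with a rough back-of-the-envelope calculation: the time to restore is roughly the data volume divided by the sustained restore throughput. The numbers below are illustrative only, and the estimate ignores restore sequencing and post-restore validation:

```python
def estimated_restore_hours(data_gb, restore_mbps):
    """Rough restore-time estimate: data volume divided by sustained restore
    throughput. Ignores sequencing, post-restore steps, and validation."""
    seconds = (data_gb * 1024) / restore_mbps  # GB -> MB, then MB / (MB/s)
    return seconds / 3600

def meets_rto(data_gb, restore_mbps, rto_hours):
    return estimated_restore_hours(data_gb, restore_mbps) <= rto_hours

# 4 TB of data restored at a sustained 200 MB/s (illustrative numbers)
print(round(estimated_restore_hours(4096, 200), 1))  # → 5.8
print(meets_rto(4096, 200, rto_hours=4))             # → False
```

A calculation like this makes it easy to see when backup and restore stops being viable and replication-based DR becomes necessary for the RTO target.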
References
Tutorial: Set up disaster recovery for Azure VMs
Azure Backup service.
Considerations for Azure Virtual
Machines DBMS deployment for SAP
workload
Article • 02/10/2023
This guide is part of the documentation on how to implement and deploy SAP software
on Microsoft Azure. Before you read this guide, read the Planning and implementation
guide and articles the planning guide points you to. This document covers the generic
deployment aspects of SAP-related DBMS systems on Microsoft Azure virtual machines
(VMs) by using the Azure infrastructure as a service (IaaS) capabilities.
The paper complements the SAP installation documentation and SAP Notes, which
represent the primary resources for installations and deployments of SAP software on
given platforms.
Resources
There are other articles available on SAP workload on Azure. Start with SAP workload on
Azure: Get started and then choose your area of interest.
The following SAP Notes are related to SAP on Azure in regard to the area covered in
this document.
2039619: SAP applications on Microsoft Azure using the Oracle database: Supported
products and versions
2233094: DB6: SAP applications on Azure using IBM DB2 for Linux, UNIX, and
Windows: Additional information
For information on all the SAP Notes for Linux, see the SAP community wiki .
You need a working knowledge of Microsoft Azure architecture and how Microsoft
Azure virtual machines are deployed and operated. For more information, see Azure
documentation.
In general, the Windows, Linux, and DBMS installation and configuration are essentially
the same as any virtual machine or bare metal machine you install on-premises. There
are some architecture and system management implementation decisions that are
different when you use Azure IaaS. This document explains the specific architectural and
system management differences to be prepared for when you use Azure IaaS.
For Azure block storage, the usage of Azure managed disks is mandatory. For details
about Azure managed disks read the article Introduction to managed disks for Azure
VMs.
A configuration that separates these components into five different volumes can result
in higher resiliency, because excessive usage of one volume doesn't necessarily
interfere with the usage of the other volumes, as long as VM storage quotas and limits
aren't exceeded.
The DBMS data and transaction/redo log files are stored in Azure-supported block
storage or Azure NetApp Files. Azure Files or Azure Premium Files isn't supported as
storage for DBMS data and/or redo log files with SAP workloads. The files are stored
on separate disks that are attached as logical disks to the original Azure operating
system image VM. For Linux deployments, different recommendations are
documented. Read the article Azure Storage types for SAP workload for the
capabilities and the support of the different storage types for your scenario.
Specifically for SAP HANA, start with the article SAP HANA Azure virtual machine
storage configurations.
When you plan your disk layout, find the best balance between these items:
VM SLAs.
Azure enforces an IOPS quota per data disk or NFS share. These quotas differ for disks
hosted on the different Azure block storage solutions or shares. I/O latency also
differs between these storage types.
Each VM type can attach only a limited number of data disks. Another restriction is
that only certain VM types can use, for example, premium storage. Typically, you
decide on a certain VM type based on CPU and memory requirements. You also need
to consider the IOPS, latency, and disk throughput requirements, which usually scale
with the number of disks or the type of premium storage v1 disks. The number of
IOPS and the throughput to be achieved by each disk might dictate disk size,
especially with premium storage v1. With premium storage v2 or Ultra disk, you can
select provisioned IOPS and throughput independent of the disk capacity.
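To illustrate how premium storage v1 couples IOPS and throughput to disk size, the following sketch picks the smallest tier that meets both targets. The tier values in the table are illustrative; check the current Azure managed disk documentation for the authoritative numbers:

```python
# Illustrative premium storage v1 tiers: (name, size GiB, IOPS, MB/s).
# Verify against the current Azure managed disk documentation.
PREMIUM_V1 = [
    ("P10", 128, 500, 100),
    ("P15", 256, 1100, 125),
    ("P20", 512, 2300, 150),
    ("P30", 1024, 5000, 200),
    ("P40", 2048, 7500, 250),
    ("P50", 4095, 7500, 250),
]

def smallest_disk_for(iops, mbps):
    """Smallest premium v1 tier meeting both targets; with premium v2 or
    Ultra disk, IOPS and throughput are provisioned independent of size."""
    for name, _size, disk_iops, disk_mbps in PREMIUM_V1:
        if disk_iops >= iops and disk_mbps >= mbps:
            return name
    return None  # no single disk suffices: stripe multiple disks instead

print(smallest_disk_for(4000, 180))   # → P30
print(smallest_disk_for(20000, 400))  # → None (striping required)
```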
Note
For DBMS deployments, we highly recommend Azure premium storage (v1 and v2),
Ultra disk, or Azure NetApp Files based NFS shares for any data, transaction log, or
redo files. It doesn't matter whether you want to deploy production or
nonproduction systems. The latency of Azure standard HDD or SSD isn't acceptable
for any type of production system.
Note
To maximize Azure's single VM SLA, all disks that are attached must be Azure
premium storage (v1 or v2) or Azure Ultra disk type, which includes the base VHD
(Azure premium storage).
Note
Hosting the main database files of SAP databases, such as data and log files, on
storage hardware located in co-located third-party data centers adjacent to Azure
data centers isn't supported. Storage provided through software appliances hosted
in Azure VMs is also not supported for this use case. For SAP DBMS workloads,
only storage that's represented as a native Azure service is supported for the data
and transaction log files of SAP databases in general. Different DBMSs might
support different Azure storage types. For more details, check the article Azure
Storage types for SAP workload.
The placement of the database files and the log and redo files, and the type of Azure
Storage you use, are defined by IOPS, latency, and throughput requirements.
Specifically for Azure premium storage v1, to achieve enough IOPS, you might be
forced to use multiple disks or a larger premium storage disk. If you use multiple
disks, build a software stripe across the disks that contain the data files or the log
and redo files. In such cases, the IOPS and disk throughput SLAs of the underlying
premium storage disks, or the maximum achievable IOPS of standard storage disks,
accumulate for the resulting stripe set.
If your IOPS requirement exceeds what a single VHD can provide, balance the IOPS that
is needed for the database files across a number of VHDs. The easiest way to distribute
the IOPS load across disks is to build a software stripe over the different disks. Then
place a number of data files of the SAP DBMS on the LUNs carved out of the software
stripe. The number of disks in the stripe is driven by IOPS demands, disk throughput
demands, and volume demands.
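The sizing rule above (IOPS, throughput, and volume demands each set a lower bound on the stripe width, and limits accumulate across the stripe set) can be sketched as follows; the per-disk numbers are illustrative:

```python
import math

def stripe_width(req_iops, req_mbps, req_gib, disk_iops, disk_mbps, disk_gib):
    """Number of identical disks in a software stripe: IOPS, throughput, and
    capacity demands each set a lower bound, and the per-disk limits
    accumulate across the stripe set."""
    return max(
        math.ceil(req_iops / disk_iops),
        math.ceil(req_mbps / disk_mbps),
        math.ceil(req_gib / disk_gib),
    )

# 20,000 IOPS, 800 MB/s, 2 TiB over disks with 5,000 IOPS / 200 MB/s /
# 1,024 GiB each (illustrative per-disk numbers)
print(stripe_width(20000, 800, 2048, 5000, 200, 1024))  # → 4
```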
Windows
We recommend that you use Windows Storage Spaces to create stripe sets across
multiple Azure VHDs. Use at least Windows Server 2012 R2 or Windows Server 2016.
Linux
Only MDADM and Logical Volume Manager (LVM) are supported to build a software
RAID on Linux. For more information, see:
For Azure premium storage v2 and Ultra disk, striping might not be necessary,
because you can define IOPS and disk throughput independent of the size of the disk.
Note
Because Azure Storage keeps three images of the VHDs, it doesn't make sense to
configure redundancy when you stripe. You only need to configure striping so
that the I/Os are distributed over the different VHDs.
Important
Given the advantages of Azure Managed Disks, it is mandatory that you use Azure
Managed Disks for your DBMS deployments and SAP deployments in general.
If you happen to have SAP workload that isn't yet using managed disks, to convert from
unmanaged to managed disks, see:
The following recommendations assume these I/O characteristics for standard DBMS:
It's mostly a read workload against data files of a database. These reads are
performance critical for the DBMS system.
Writing against the data files occurs in bursts based on checkpoints or as a
constant stream. Averaged over a day, there are fewer writes than reads. Unlike
reads from data files, these writes are asynchronous and don't hold up any user
transactions.
There are hardly any reads from the transaction log or redo files. Exceptions are
large I/Os when you perform transaction log backups.
The main load against transaction or redo log files is writes. Dependent on the
nature of the workload, you can have I/Os as small as 4 KB or, in other cases, I/O
sizes of 1 MB or more.
All writes must be persisted on disk in a reliable fashion.
For Azure premium storage v1, the following caching options exist:
None
Read
Read/write
None + Write Accelerator, which is only for Azure M-Series VMs
Read + Write Accelerator, which is only for Azure M-Series VMs
For premium storage v1, we recommend that you use Read caching for data files of the
SAP database and choose No caching for the disks of log file(s).
For M-Series deployments, we recommend that you use Azure Write Accelerator only
for the disks of your log files. For details, restrictions, and deployment of Azure Write
Accelerator, see Enable Write Accelerator.
For premium storage v2, Ultra disk and Azure NetApp Files, no caching options are
offered.
For more information, see Understand the temporary drive on Windows VMs in Azure.
Windows
Linux
There are other redundancy methods. For more information, see Azure Storage
replication.
Note
Azure premium storage v1 and v2, Ultra disk, and Azure NetApp Files are the
recommended storage types for DBMS VMs and for disks that store database, log,
and redo files. With the exception of premium storage v1, the only available
redundancy method for these storage types is LRS. As a result, you need to
configure database methods to replicate the database data into another Azure
region or availability zone. Database methods include SQL Server Always On,
Oracle Data Guard, and HANA System Replication.
VM node resiliency
Azure offers several different SLAs for VMs. For more information, see the most recent
release of SLA for Virtual Machines . Because the DBMS layer is critical to availability in
an SAP system, you need to understand availability sets, Availability Zones, and
maintenance events. For more information on these concepts, see Manage the
availability of Windows virtual machines in Azure and Manage the availability of Linux
virtual machines in Azure.
The minimum recommendation for production DBMS scenarios with an SAP workload is
to:
Deploy two VMs in a separate availability set in the same Azure region.
Run these two VMs in the same Azure virtual network and have NICs attached out
of the same subnets.
Use database methods to keep a hot standby with the second VM. Methods can
be SQL Server Always On, Oracle Data Guard, or HANA System Replication.
You also can deploy a third VM in another Azure region and use the same database
methods to supply an asynchronous replica in another Azure region.
For information on how to set up Azure availability sets, see this tutorial.
The virtual networks the SAP application is deployed into don't have access to the
internet.
The database VMs run in the same virtual network as the application layer,
separated in a different subnet from the SAP application layer.
The VMs within the virtual network have a static allocation of the private IP
address. For more information, see IP address types and allocation methods in
Azure.
Routing restrictions to and from the DBMS VMs aren't set with firewalls installed
on the local DBMS VMs. Instead, traffic routing is defined with network security
groups (NSGs).
To separate and isolate traffic to the DBMS VM, assign different NICs to the VM.
Every NIC gets a different IP address, and every NIC is assigned to a different
virtual network subnet. Every subnet has different NSG rules. The isolation or
separation of network traffic is a measure for routing. It's not used to set quotas
for network throughput.
Note
Network virtual appliances in communication paths can easily double the network
latency between two communication partners. They also can restrict throughput in
critical paths between the SAP application layer and the DBMS layer. In some
customer scenarios, network virtual appliances can cause Pacemaker Linux clusters
to fail; these are cases where the Linux Pacemaker cluster nodes communicate with
their SBD device through a network virtual appliance. There are other scenarios as
well where network virtual appliances aren't supported.
Important
Another design that's not supported is the segregation of the SAP application layer
and the DBMS layer into different Azure virtual networks that aren't peered with
each other. We recommend that you segregate the SAP application layer and
DBMS layer by using subnets within an Azure virtual network instead of by using
different Azure virtual networks.
If you decide not to follow the recommendation and instead segregate the two
layers into different virtual networks, the two virtual networks must be peered.
Be aware that network traffic between two peered Azure virtual networks is subject
to transfer costs. Huge data volumes, often many terabytes, are exchanged
between the SAP application layer and the DBMS layer. You can accumulate
substantial costs if the SAP application layer and DBMS layer are segregated
between two peered Azure virtual networks.
Use two VMs for your production DBMS deployment within an Azure availability set or
between two Azure Availability Zones. Also use separate routing for the SAP application
layer and the management and operations traffic to the two DBMS VMs. See the
following image:
If there's a failover of the database node, there's no need for the SAP application to
reconfigure. Instead, the most common SAP application architectures reconnect against
the private virtual IP address. Meanwhile, the load balancer reacts to the node failover
by redirecting the traffic against the private virtual IP address to the second node.
Azure offers two different load balancer SKUs: a basic SKU and a standard SKU. Based
on the advantages in setup and functionality, you should use the Standard SKU of the
Azure load balancer. One of the large advantages of the Standard version of the load
balancer is that the data traffic isn't routed through the load balancer itself.
An example of how you can configure an internal load balancer can be found in the
article Tutorial: Configure a SQL Server availability group on Azure Virtual Machines
manually.
Note
There are differences in the behavior of the basic and standard SKUs related to
access to public IP addresses. How to work around the Standard SKU restrictions
on accessing public IP addresses is described in the document Public endpoint
connectivity for Virtual Machines using Azure Standard Load Balancer in SAP
high-availability scenarios.
For more information on the deployment of components that deliver host data to
SAPOSCOL and SAP Host Agent and the life-cycle management of those components,
start with the article Implement the Azure VM extension for SAP solutions.
Next steps
For more information on a particular DBMS, see:
SQL Server Azure Virtual Machines DBMS deployment for SAP workload
IBM DB2 Azure Virtual Machines DBMS deployment for SAP workload
High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
with Pacemaker
High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
Introduction
This document points you to the right resources for deploying HANA on Azure
virtual machines, including documents that you need to check before installing SAP
HANA on Azure VMs. The aim is to ensure that you can perform the right steps to
achieve a supported configuration of SAP HANA on Azure.
Note
This guide describes deployments of SAP HANA into Azure VMs. For information
on how to deploy SAP HANA on HANA large instances, see How to install and
configure SAP HANA (Large Instances) on Azure.
Prerequisites
This guide also assumes that you're familiar with:
SAP HANA and SAP NetWeaver and how to install them on-premises.
How to install and operate SAP HANA and SAP application instances on Azure.
The concepts and procedures documented in:
Planning for SAP deployment on Azure, which includes Azure Virtual Network
planning and Azure Storage usage. See SAP NetWeaver on Azure Virtual
Machines - Planning and implementation guide
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual
Machines deployment for SAP
High availability concepts for SAP HANA as documented in SAP HANA high
availability for Azure virtual machines
SAP support note #2814271 SAP HANA Backup fails on Azure with Checksum
Error
SAP support note #2753418 Potential Performance Degradation Due to Timer
Fallback
SAP support note #2791572 Performance Degradation Because of Missing
VDSO Support For Hyper-V in Azure
5. Based on the OS release that is supported for the virtual machine type of choice,
you need to check whether your desired SAP HANA release is supported with that
operating system release. Read SAP support note #2235581 for a support matrix
of SAP HANA releases with the different Operating System releases.
6. When you have found a valid combination of Azure VM type, operating system
release and SAP HANA release, you will need to check the SAP Product Availability
Matrix. In the SAP Availability Matrix, you can verify whether the SAP product you
want to run against your SAP HANA database is supported.
2. If you choose a guest OS image that requires you to bring your own license, you
will need to register this OS image with your subscription to enable you to
download and apply the latest patches. This step is going to require public internet
access, unless you set up your private instance of, for example, an SMT server in
Azure.
3. Decide the network configuration of the VM. You can get more information in the
document SAP HANA infrastructure configurations and operations on Azure. Keep
in mind that there are no network throughput quotas you can assign to virtual
network cards in Azure. As a result, the only purpose of directing traffic through
different vNICs is security. You need to find a supportable compromise between
the complexity of traffic routing through multiple vNICs and the requirements
enforced by security aspects.
4. Apply the latest patches to the operating system once the VM is deployed and
registered. The VM is registered either with your own subscription or, if you chose
an image that includes operating system support, it should already have access to
the patches.
5. Apply the tunings necessary for SAP HANA. These tunings are listed in the
following SAP support notes:
6. Select the Azure storage type and storage layout for the SAP HANA installation.
You are going to use either attached Azure disks or native Azure NFS shares. The
Azure storage types that are supported and the combinations of different Azure
storage types that can be used are documented in SAP HANA Azure virtual
machine storage configurations. Take the configurations documented as starting
point. For non-production systems, you might be able to configure lower
throughput or IOPS. For production systems, you might need to increase the
throughput and IOPS.
7. Make sure you have configured Azure Write Accelerator for your volumes that
contain the DBMS transaction logs or redo logs when using M-Series or Mv2-
Series VMs. Be aware of the limitations for Write Accelerator as documented.
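Enabling Write Accelerator on an already attached disk can be sketched with the Azure CLI; the resource group, VM name, and LUN are placeholders:

```shell
# Enable Write Accelerator for the log disk attached at LUN 0 of an M-series VM.
az vm update --resource-group myResourceGroup --name myHanaVM \
  --write-accelerator 0=true
```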
Note
Not all the commands in the different saptune profiles or as described in the notes
might run successfully on Azure. Commands that would manipulate the power
mode of VMs usually return an error, since the power mode of the underlying
Azure host hardware can't be manipulated.
SAP Note 2191498 discusses SAP enhanced monitoring with Linux VMs on Azure
SAP Note 1102124 discusses information about SAPOSCOL on Linux
SAP Note 2178632 discusses key monitoring metrics for SAP on Microsoft Azure
Azure Virtual Machines deployment for SAP NetWeaver
For SAP HANA scale-out configurations using direct attached disks of Azure Premium
Storage or Ultra disk, read the specifics in the document SAP HANA infrastructure
configurations and operations on Azure
Next steps
Read the documentation:
This document provides guidance for configuring Azure infrastructure and operating
SAP HANA systems that are deployed on Azure native virtual machines (VMs). The
document also includes configuration information for SAP HANA scale-out for the
M128s VM SKU. This document isn't intended to replace the standard SAP
documentation, which includes the following content:
Prerequisites
To use this guide, you need basic knowledge of the following Azure components:
To learn more about SAP NetWeaver and other SAP components on Azure, see the SAP
on Azure section of the Azure documentation.
Note
For non-production scenarios, use the VM types that are listed in SAP note
#1928533 . For production scenarios, check for SAP HANA certified VMs in the SAP
published Certified IaaS Platforms list .
You can also deploy a completely installed SAP HANA platform on the Azure VM services
through the SAP Cloud Platform . The installation process is described in Deploy SAP
S/4HANA or BW/4HANA on Azure.
Important
To use M208xx_v2 VMs, be careful when selecting your Linux image. For more
information, see Memory optimized virtual machine sizes.
Important
Another design that is NOT supported is the segregation of the SAP application
layer and the DBMS layer into different Azure virtual networks that aren't peered
with each other. We recommend segregating the SAP application layer and DBMS
layer using subnets within one Azure virtual network instead of using different
Azure virtual networks. If you decide not to follow this recommendation, and
instead segregate the two layers into different virtual networks, the two virtual
networks need to be peered. Be aware that network traffic between two peered
Azure virtual networks is subject to transfer costs. With the huge data volumes,
often many terabytes, exchanged between the SAP application layer and DBMS
layer, substantial costs can accumulate if the two layers are segregated between
two peered Azure virtual networks.
If you deployed jumpbox or management VMs in a separate subnet, you can define
multiple virtual network interface cards (vNICs) for the HANA VM, with each vNIC
assigned to a different subnet. With multiple vNICs, you can set up network traffic
separation if necessary. For example, client traffic can be routed through the primary
vNIC and admin traffic through a second vNIC.
You can also assign static private IP addresses to both vNICs.
Note
You should assign static IP addresses through Azure means to individual vNICs. You
shouldn't assign static IP addresses within the guest OS to a vNIC. Some Azure
services, like the Azure Backup service, rely on the fact that at least the primary
vNIC is set to DHCP and not to a static IP address. See also the document
Troubleshoot Azure virtual machine backup. If you need to assign multiple static IP
addresses to a VM, assign multiple vNICs to the VM.
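Assigning static private IP addresses through Azure means, as described above, can be sketched with the Azure CLI; all names and addresses are placeholders:

```shell
# Create a second vNIC in the admin subnet with a static private IP address.
az network nic create --resource-group myResourceGroup --name hana-admin-nic \
  --vnet-name hana-vnet --subnet admin-subnet \
  --private-ip-address 10.0.2.10

# Change the primary vNIC's IP configuration from dynamic to static.
az network nic ip-config update --resource-group myResourceGroup \
  --nic-name hana-client-nic --name ipconfig1 \
  --private-ip-address 10.0.1.10
```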
However, for deployments that are meant to endure, you need to create a virtual
datacenter network architecture in Azure. This architecture recommends separating the
Azure VNet Gateway that connects to on-premises into a separate Azure VNet. This
separate VNet should host all the traffic that leaves either to on-premises or to the
internet. This approach allows you to deploy software for auditing and logging traffic
that enters the virtual datacenter in this separate hub VNet. So you have one VNet that
hosts all the software and configurations that relate to incoming and outgoing traffic
for your Azure deployment.
The articles Azure Virtual Datacenter: A Network Perspective and Azure Virtual
Datacenter and the Enterprise Control Plane give more information on the virtual
datacenter approach and related Azure VNet design.
Note
Traffic that flows between a hub VNet and a spoke VNet using Azure VNet peering is
subject to additional costs . Based on those costs, you might need to consider
making compromises between running a strict hub-and-spoke network design and
running multiple Azure ExpressRoute gateways that you connect to 'spokes' in
order to bypass VNet peering. However, Azure ExpressRoute gateways introduce
additional costs as well. You also might encounter additional costs for third-party
software you use for network traffic logging, auditing, and monitoring. Depending
on the costs for data exchange through VNet peering on the one side, and the
costs created by additional Azure ExpressRoute gateways and additional software
licenses on the other, you might decide for micro-segmentation within one VNet by
using subnets as the isolation unit instead of VNets.
For an overview of the different methods for assigning IP addresses, see IP address
types and allocation methods in Azure.
For VMs running SAP HANA, you should work with assigned static IP addresses,
because some configuration attributes for HANA reference IP addresses.
Azure Network Security Groups (NSGs) are used to direct traffic that's routed to the SAP
HANA instance or the jumpbox. The NSGs and eventually Application Security Groups
are associated to the SAP HANA subnet and the Management subnet.
To deploy SAP HANA in Azure without a site-to-site connection, you still want to shield
the SAP HANA instance from the public internet and hide it behind a forward proxy. In
this basic scenario, the deployment relies on Azure built-in DNS services to resolve
hostnames. In a more complex deployment where public-facing IP addresses are used,
Azure built-in DNS services are especially important. Use Azure NSGs and Azure NVAs
to control and monitor the routing from the internet into your Azure VNet architecture.
The following image shows a rough schema for deploying SAP HANA without a
site-to-site connection in a hub-and-spoke VNet architecture:
Another description of how to use Azure NVAs to control and monitor access from the
internet without the hub-and-spoke VNet architecture can be found in the article Deploy
highly available network virtual appliances.
For the minimum OS releases for deploying scale-out configurations in Azure VMs,
check the details of the entries for the particular VM SKU listed in the SAP HANA
hardware directory. In an n-node OLAP scale-out configuration, one node functions as
the main node. The other nodes, up to the limit of the certification, act as worker nodes.
Additional standby nodes don't count toward the number of certified nodes.
Note
Azure VM scale-out deployments of SAP HANA with standby nodes are only
possible using Azure NetApp Files storage. No other SAP HANA certified
Azure storage allows the configuration of SAP HANA standby nodes.
For /hana/shared, we recommend the usage of Azure NetApp Files or Azure Files.
A typical basic design for a single node in a scale-out configuration, with /hana/shared
deployed on Azure NetApp Files, looks like:
The basic configuration of a VM node for SAP HANA scale-out looks like:
For /hana/shared, you use the native NFS service provided through Azure NetApp
Files or Azure Files.
All other disk volumes aren't shared among the different nodes and aren't based
on NFS. Installation configurations and steps for scale-out HANA installations with
non-shared /hana/data and /hana/log are provided later in this document.
For HANA certified storage that can be used, check the article SAP HANA Azure
virtual machine storage configurations.
To size the volumes or disks, check the document SAP HANA TDI Storage
Requirements for the size required depending on the number of worker nodes. The
document provides a formula you need to apply to get the required capacity of the
volume.
The other design criterion displayed in the graphic of the single-node configuration
for a scale-out SAP HANA VM is the VNet, or more precisely the subnet configuration.
SAP highly recommends separating the client/application-facing traffic from the
communication between the HANA nodes. As shown in the graphic, this goal is
achieved by attaching two different vNICs to the VM. Both vNICs are in different
subnets and have different IP addresses. You then control the flow of traffic with
routing rules using NSGs or user-defined routes.
In Azure, there are no means to enforce quality of service or quotas on specific vNICs.
As a result, separating client/application-facing and intra-node communication doesn't
open any opportunity to prioritize one traffic stream over the other. Instead, the
separation remains a security measure that shields the intra-node communication of
scale-out configurations.
Note
SAP recommends separating network traffic to the client/application side from
intra-node traffic as described in this document. Therefore, putting an architecture
in place as shown in the last graphic is recommended. Also consult your security
and compliance team for requirements that deviate from this recommendation.
From a networking point of view, the minimum required network architecture would
look like:
global.ini file. This parameter enables SAP HANA to run in scale-out without
shared /hana/data and /hana/log volumes between the nodes. Details are
documented in SAP Note #2080991 . If you're using NFS volumes based on ANF
for /hana/data and /hana/log, you don't need to make this change.
After the eventual change of the global.ini parameter, restart the SAP HANA
instance.
Add more worker nodes. For more information, see Add Hosts Using the
Command-Line Interface . Specify the internal network for SAP HANA inter-node
communication during the installation or afterwards using, for example, the local
hdblcm. For more detailed documentation, see SAP Note #2183363 .
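A sketch of adding worker hosts with the local hdblcm; the SID, host names, and internal network here are placeholders:

```shell
# Add two worker hosts to an existing scale-out system and bind SAP HANA
# inter-node communication to the internal network.
sudo /hana/shared/HN1/hdblcm/hdblcm --action=add_hosts \
  --addhosts=hanaw2:role=worker,hanaw3:role=worker \
  --listen_interface=internal --internal_network=10.0.3.0/24
```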
To set up an SAP HANA scale-out system with a standby node, see the SUSE Linux
deployment instructions or the Red Hat deployment instructions.
SAP HANA Dynamic Tiering 2.0 (DT 2.0) isn't supported by SAP BW or S/4HANA. The
main use cases right now are native HANA applications.
Overview
The picture below gives an overview of DT 2.0 support on Microsoft Azure.
There's a set of mandatory requirements that have to be followed to comply with the
official certification:
DT 2.0 must be installed on a dedicated Azure VM. It may not run on the same VM
where SAP HANA runs
SAP HANA and DT 2.0 VMs must be deployed within the same Azure VNet
The SAP HANA and DT 2.0 VMs must be deployed with Azure Accelerated
Networking enabled
The storage type for the DT 2.0 VMs must be Azure Premium Storage
Multiple Azure disks must be attached to the DT 2.0 VM
It's required to create a software RAID / striped volume (either via LVM or mdadm)
using striping across the Azure disks
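The last requirement, striping across the attached Azure disks, can be sketched with mdadm; the device names and mount point are placeholders, so verify yours with lsblk first:

```shell
# Stripe four attached Premium SSD disks into a single RAID-0 device.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
  /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a file system and mount it for the DT 2.0 data volume.
sudo mkfs.xfs /dev/md0
sudo mkdir -p /sap/dt_data
sudo mount /dev/md0 /sap/dt_data
```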
The following Azure VM types are supported for DT 2.0:
M64-32ms
E32sv3
For more information on the VM type descriptions, see Azure VM sizes - Memory
Given the basic idea of DT 2.0, which is about offloading "warm" data in order to save
costs, it makes sense to use corresponding VM sizes. There's no strict rule, though,
regarding the possible combinations. It depends on the specific customer workload.
SAP HANA VM   DT 2.0 VM
M128ms        M64-32ms
M128s         M64-32ms
M64ms         E32sv3
M64s          E32sv3
All combinations of SAP HANA-certified M-series VMs with supported DT 2.0 VMs (M64-
32ms and E32sv3) are possible.
According to the specifications of the two Azure VM types that are supported for DT
2.0, the maximum disk I/O throughput limits for the VMs are:
E32sv3: 768 MB/sec (uncached), which means a ratio of 48 MB/sec per physical
core
M64-32ms: 1,000 MB/sec (uncached), which means a ratio of 62.5 MB/sec per
physical core
It's required to attach multiple Azure disks to the DT 2.0 VM and create a software RAID
(striping) at the OS level to achieve the maximum disk throughput limit per VM. A single
Azure disk can't provide the throughput to reach the maximum VM limit. Azure
Premium Storage is mandatory to run DT 2.0.
Details about available Azure disk types can be found on the Select a disk type for
Azure IaaS VMs - managed disks page
Details about creating software raid via mdadm can be found on the Configure
software RAID on a Linux VM page
Details about configuring LVM to create a striped volume for max throughput can
be found on the Configure LVM on a virtual machine running Linux page
Depending on size requirements, there are different options to reach the maximum
throughput of a VM. Here are possible data volume disk configurations for every DT 2.0
VM type to achieve the upper VM throughput limit. The E32sv3 VM should be
considered an entry level for smaller workloads. If it turns out not to be fast enough, it
might be necessary to resize the VM to M64-32ms. As the M64-32ms VM has much
more memory, the I/O load might not reach the limit, especially for read-intensive
workloads. Therefore, fewer disks in the stripe set might be sufficient depending on the
customer-specific workload. But to be on the safe side, the disk configurations below
were chosen to guarantee the maximum throughput:
VM SKU     Disk Config 1     Disk Config 2    Disk Config 3    Disk Config 4      Disk Config 5
M64-32ms   4 x P50 -> 16 TB  4 x P40 -> 8 TB  5 x P30 -> 5 TB  7 x P20 -> 3.5 TB  8 x P15 -> 2 TB
E32sv3     3 x P50 -> 12 TB  3 x P40 -> 6 TB  4 x P30 -> 4 TB  5 x P20 -> 2.5 TB  6 x P15 -> 1.5 TB
Regarding the size of the log volume, a recommended starting point is a heuristic of
15% of the data size. The log volume can be created using different Azure disk types
depending on cost and throughput requirements. For the log volume, high I/O
throughput is required.
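As a quick sketch of the 15% heuristic, for a hypothetical 12 TB (12,288 GB) data volume:

```shell
# Log volume starting size = 15% of the data volume size (heuristic).
DATA_GB=12288
LOG_GB=$(( DATA_GB * 15 / 100 ))
echo "Log volume starting point: ${LOG_GB} GB"   # prints 1843 GB
```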
When using the VM type M64-32ms, it's mandatory to enable Write Accelerator. Azure
Write Accelerator provides optimal disk write latency for the transaction log (only
available for M-series). There are some items to consider, though, like the maximum
number of Write Accelerator disks per VM type. Details about Write Accelerator can be
found on the Azure Write Accelerator page.
(Table: data volume size and disk type | log volume and disk type, config 1 | log
volume and disk type, config 2)
As with SAP HANA scale-out, the /hana/shared directory has to be shared between the
SAP HANA VM and the DT 2.0 VM. The same architecture as for SAP HANA scale-out,
using dedicated VMs that act as a highly available NFS server, is recommended. The
identical design can be used to provide a shared backup volume. But it's up to the
customer whether HA is necessary, or whether it's sufficient to use a dedicated VM with
enough storage capacity to act as a backup server.
Maintain the private and static IP address of the VM that hosts SAP HANA in the
SAProuter configuration.
Configure the NSG of the subnet that hosts the HANA VM to allow traffic through
TCP/IP port 3299.
If you're connecting to Azure through the internet, and you don't have an SAP router for
the VM with SAP HANA, then you need to install the component. Install SAProuter in a
separate VM in the Management subnet. The following image shows a rough schema
for deploying SAP HANA without a site-to-site connection and with SAProuter:
Be sure to install SAProuter in a separate VM and not in your Jumpbox VM. The separate
VM must have a static IP address. To connect your SAProuter to the SAProuter that is
hosted by SAP, contact SAP for an IP address. (The SAProuter that is hosted by SAP is
the counterpart of the SAProuter instance that you install on your VM.) Use the IP
address from SAP to configure your SAProuter instance. In the configuration settings,
the only necessary port is TCP port 3299.
For more information on how to set up and maintain remote support connections
through SAProuter, see the SAP documentation .
Next Steps
Get familiar with the listed articles.
This document covers several different areas to consider when deploying SQL Server for
SAP workload in Azure IaaS. As a precondition to this document, you should have read
the document Considerations for Azure Virtual Machines DBMS deployment for SAP
workload and other guides in the SAP workload on Azure documentation.
Important
The scope of this document is the Windows version of SQL Server. SAP doesn't
support the Linux version of SQL Server with any of the SAP software. The
document doesn't discuss Microsoft Azure SQL Database, which is a platform as a
service offering of the Microsoft Azure platform. The discussion in this paper is
about running the SQL Server product as it's known from on-premises deployments
in Azure virtual machines, leveraging the infrastructure as a service capability of
Azure. Database capabilities and functionality between these two offerings are
different and shouldn't be mixed up with each other. For more information, see
Azure SQL Database .
In general, you should use the most recent SQL Server releases to run SAP workload in
Azure IaaS. The latest SQL Server releases offer better integration into some of the
Azure services and functionality, and include changes that optimize operations in an
Azure IaaS infrastructure.
General documentation about SQL Server running in Azure VMs can be found in these
articles:
Not all the content and statements made in the general SQL Server in Azure VM
documentation apply to SAP workload. But the documentation gives a good
impression of the principles. An example of functionality not supported for SAP
workload is the usage of FCI clustering.
There's some SQL Server in IaaS specific information you should know before
continuing:
SQL version support: Even though SAP Note #1928533 states that the minimum
supported SQL Server release is SQL Server 2008 R2, the window of supported SQL
Server versions on Azure is also dictated by SQL Server's lifecycle. SQL Server 2012
extended maintenance ended in mid-2022. As a result, the current minimum
release for newly deployed systems should be SQL Server 2014. The more recent,
the better. The latest SQL Server releases offer better integration into some of the
Azure services and functionality, and include changes that optimize operations in
an Azure IaaS infrastructure.
Using images from the Azure Marketplace: The fastest way to deploy a new
Microsoft Azure VM is to use an image from the Azure Marketplace. There are
images in the Azure Marketplace that contain the most recent SQL Server releases.
The images where SQL Server is already installed can't be used immediately for
SAP NetWeaver applications, because those images are installed with the default
SQL Server collation and not the collation required by SAP NetWeaver systems. To
use such images, check the steps documented in the chapter Using a SQL Server
image out of the Microsoft Azure Marketplace.
SQL Server multi-instance support within a single Azure VM: This deployment
method is supported. However, be aware of resource limitations, especially around
the network and storage bandwidth of the VM type that you're using. Detailed
information is available in the article Sizes for virtual machines in Azure. These
quota limitations might prevent you from implementing the same multi-instance
architecture as you can implement on-premises. For the configuration and the
interference of sharing the resources available within a single VM, the same
considerations as on-premises need to be taken into account.
Multiple SAP databases in one single SQL Server instance in a single VM:
Configurations like these are supported. The considerations of multiple SAP
databases sharing the resources of a single SQL Server instance are the same as for
on-premises deployments. Keep other limits in mind, like the number of disks that
can be attached to a specific VM type, or the network and storage quota limits of
specific VM types, as detailed in Sizes for virtual machines in Azure.
With all SAP certified VM types (see SAP Note #1928533 ), tempdb data and log
files can be placed on the non-persisted D:\ drive.
With SQL Server releases where SQL Server installs tempdb with one data file by
default, it's recommended to use multiple tempdb data files. Be aware that D:\
drive volumes are different in size and capabilities based on the VM type. For the
exact sizes of the D:\ drive of the different VMs, check the article Sizes for Windows
virtual machines in Azure.
These configurations enable tempdb to consume more space and, more importantly,
more I/O operations per second (IOPS) and storage bandwidth than the system drive is
able to provide. The non-persistent D:\ drive also offers better I/O latency and
throughput. To determine the proper tempdb size, you can check the tempdb sizes on
existing systems.
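A sketch of expanding tempdb to four data files on the D:\ drive via sqlcmd; the connection, folder, and file sizes are placeholders, and the change takes effect after a SQL Server service restart:

```shell
# Move tempdb to D:\tempdb and add three more data files (placeholders throughout).
sqlcmd -S localhost -E -Q "
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  FILENAME = 'D:\tempdb\tempdb.mdf',  SIZE = 8GB);
ALTER DATABASE tempdb ADD    FILE (NAME = tempdev2, FILENAME = 'D:\tempdb\tempdb2.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD    FILE (NAME = tempdev3, FILENAME = 'D:\tempdb\tempdb3.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD    FILE (NAME = tempdev4, FILENAME = 'D:\tempdb\tempdb4.ndf', SIZE = 8GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog,  FILENAME = 'D:\tempdb\templog.ldf', SIZE = 4GB);"
```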
Note
In case you place tempdb data files and the log file into a folder on the D:\ drive
that you created, you need to make sure that the folder exists after a VM reboot.
Since the D:\ drive can be freshly initialized after a VM reboot, all file and directory
structures could be wiped out. A possibility to recreate eventual directory structures
on the D:\ drive before the start of the SQL Server service is documented in this
article .
A VM configuration that runs SQL Server with an SAP database, and where tempdb
data and the tempdb logfile are placed on the D:\ drive and Azure Premium Storage v1
or v2, would look like:
The diagram displays a simple case. As alluded to in the article Considerations for Azure
Virtual Machines DBMS deployment for SAP workload, the Azure storage type, number,
and size of disks depend on different factors. But in general we recommend:
For smaller and mid-range deployments, use one large volume that contains the
SQL Server data files. The reason behind this configuration is that it's easier to deal
with different I/O workloads in case the SQL Server data files don't have the same
free space. Whereas in large deployments, especially deployments where the
customer moved to SQL Server in Azure with a heterogeneous database migration,
we used separate disks and then distributed the data files across those disks. Such
an architecture is only successful when each disk has the same number of data
files, all the data files are the same size, and they roughly have the same free space.
Use the D:\ drive for tempdb as long as performance is good enough. If the overall
workload is limited in performance by tempdb located on the D:\ drive, you need
to move tempdb to Azure Premium Storage v1 or v2, or Ultra disk, as
recommended in this article.
The SQL Server proportional fill mechanism distributes reads and writes to all data files
evenly, provided all SQL Server data files are the same size and have the same free
space. SAP on SQL Server delivers the best performance when reads and writes are
distributed evenly across all available data files. If a database has too few data files, or
the existing data files are highly unbalanced, the best method to correct this is an
R3load export and import. An R3load export and import involves downtime and should
only be done if there's an obvious performance problem that needs to be resolved. If
the data files are only moderately different in size, increase all data files to the same
size, and SQL Server will rebalance data over time. SQL Server automatically grows data
files evenly if trace flag 1117 is set or if SQL Server 2016 or higher is used.
To avoid the restore or creation of databases initializing the data files by zeroing their
content, make sure that the user context the SQL Server service runs in has the user
right Perform volume maintenance tasks. For more information, see Database instant
file initialization.
There are several ways to back up and restore SQL Server databases in Azure. To get the
best overview and details, read the document Backup and restore for SQL Server on
Azure VMs. The article covers several different possibilities.
The SQL Server non-evaluation versions incur higher costs than a 'Windows-
only' VM deployed from the Azure Marketplace. To compare prices, see Windows
Virtual Machines Pricing and SQL Server Enterprise Virtual Machines Pricing .
You can only use SQL Server releases that are supported by SAP.
The collation of the SQL Server instance that's installed in the VMs offered in
the Azure Marketplace isn't the collation SAP NetWeaver requires the SQL Server
instance to run with. You can change the collation, though, with the directions in
the following section.
The process should only take a few minutes. To make sure that the step ended with the
correct result, perform the following steps:
Output
Latin1-General, binary code point comparison sort for Unicode Data, SQL
Server Sort Order 40 on Code Page 850 for non-Unicode Data
If the result is different, STOP any deployment and investigate why the setup command
didn't work as expected. Deployment of SAP NetWeaver applications onto a SQL Server
instance with a different SQL Server code page than the one mentioned is NOT
supported for NetWeaver deployments.
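The collation check described above can be run, for example, with sqlcmd; the connection parameters are placeholders:

```shell
# For SAP NetWeaver the server collation must be SQL_Latin1_General_CP850_BIN2
# (binary code point sort on code page 850).
sqlcmd -S localhost -E -Q "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"
```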
The SQL Server log shipping functionality was hardly used in Azure to achieve high
availability within one Azure region. However, in the following scenarios SAP customers
were using log shipping successfully with Azure:
Disaster Recovery scenarios from one Azure region into another Azure region
Disaster Recovery configuration from on-premises into an Azure region
Cut-over scenarios from on-premises to Azure. In those cases, log shipping is used
to synchronize the new DBMS deployment in Azure with the ongoing production
system on-premises. At the time of cutting over, production is shut down and it's
made sure that the last and latest transaction log backups got transferred to the
Azure DBMS deployment. Then the Azure DBMS deployment is opened up for
production.
Using an Availability Group Listener is only possible with Windows Server 2012 or
higher as the guest OS of the VM. For Windows Server 2012, ensure that the
update to enable SQL Server Availability Group Listeners on Windows Server 2008
R2 and Windows Server 2012-based Microsoft Azure virtual machines has been
applied. For Windows Server 2008 R2, this patch doesn't exist. In this case, Always
On would need to be used in the same manner as Database Mirroring, by
specifying a failover partner in the connection string (done through the SAP
default.pfl parameter dbs/mss/server - see SAP Note #965908 ).
When using an Availability Group Listener, you need to connect the database VMs
to a dedicated load balancer. You should assign static IP addresses to the network
interfaces of those VMs in the Always On configuration (defining a static IP address
is described in this article). Static IP addresses, compared to DHCP, prevent the
assignment of new IP addresses in cases where both VMs might be stopped.
There are special steps required when building the WSFC cluster configuration
where the cluster needs a special IP address assigned, because Azure with its
current functionality would assign the cluster name the same IP address as the
node the cluster is created on. This behavior means a manual step must be
performed to assign a different IP address to the cluster.
The Availability Group Listener is going to be created in Azure with TCP/IP
endpoints, which are assigned to the VMs running the primary and secondary
replicas of the Availability group.
There might be a need to secure these endpoints with ACLs.
Detailed documentation on deploying Always On with SQL Server in Azure VMs includes:
Note
SQL Server Always On is the most commonly used high availability and disaster
recovery functionality in Azure for SAP workload deployments. Most customers use
Always On for high availability within a single Azure region. If the deployment is
restricted to two nodes only, you have two choices for connectivity:
Using the Availability Group Listener. With the Availability Group Listener, you're
required to deploy an Azure load balancer.
With SQL Server 2016 SP3, SQL Server 2017 CU25, or SQL Server 2019 CU8, or
more recent SQL Server releases, on Windows Server 2016 or later, you can use the
distributed network name (DNN) listener instead of an Azure load balancer. DNN
eliminates the requirement to use an Azure load balancer.
Using the connectivity parameters of SQL Server Database Mirroring should only be
considered as a fallback when investigating issues with the other two methods. In this
case, you need to configure the connectivity of the SAP applications in a way where
both node names are specified. Exact details of such an SAP-side configuration are
documented in SAP Note #965908 . By using this option, you have no need to
configure an Availability Group Listener, and thus no Azure load balancer, and so you
could investigate issues with those components. But recall, this option only works if
you restrict your Availability Group to span two instances.
Most customers use the SQL Server Always On functionality for disaster recovery
between Azure regions. Several customers also use the ability to perform backups
from a secondary replica.
In cases where you move SAP SQL Server databases from on-premises into Azure, we
recommend testing on which infrastructure you can get the encryption applied fastest.
For this case, keep these facts in mind:
You can't define how many threads are used to apply data encryption to the
database. The number of threads is largely dependent on the number of disk
volumes the SQL Server data and log files are distributed over. This means the
more distinct volumes (drive letters), the more threads are engaged in parallel to
perform the encryption. Such a configuration contradicts somewhat the earlier disk
configuration suggestion of building one or a smaller number of storage spaces
for the SQL Server database files in Azure VMs. A configuration with few volumes
would lead to few threads executing the encryption. A single encrypting thread
reads 64 KB extents, encrypts them, and then writes a record into the transaction
log file noting that the extent got encrypted. As a result, the load on the
transaction log is moderate.
In older SQL Server releases, backup compression was no longer effective once you
encrypted your SQL Server database. This behavior could develop into an issue
when your plan was to encrypt your SQL Server database on-premises and then
copy a backup into Azure to restore the database there. SQL Server backup
compression can achieve a compression ratio of factor 4.
With SQL Server 2016, SQL Server introduced new functionality that allows
compressing backup of encrypted databases as well in an efficient manner. See
this blog for some details.
More details on using Azure Key Vault for SQL Server TDE:
Configure Azure Key Vault integration for SQL Server on Azure VMs (Resource
Manager).
More Questions From Customers About SQL Server Transparent Data Encryption –
TDE + Azure Key Vault.
Important
When using SQL Server TDE, especially with Azure Key Vault, it's recommended to
use the latest patches of SQL Server 2014, SQL Server 2016, and SQL Server 2017,
because based on customer feedback, optimizations and fixes were applied to the
code. As an example, check KBA #4058175 .
An example of a configuration for a small SQL Server instance with a database size
between 50 GB and 250 GB could look like:
| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| Accelerated Networking | Enable | |
| # of data files | 4 | |
| # of log files | 1 | |
| # and type of data disks | Premium storage v1: 2 x P10 (RAID0); Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Cache = Read Only for premium storage v1 |
| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| Accelerated Networking | Enable | |
| # of data files | 8 | |
| # of log files | 1 | |
| Format block size | 64 KB | |
| # and type of data disks | Premium storage v1: 4 x P20 (RAID0); Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Cache = Read Only for premium storage v1 |
| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| Accelerated Networking | Enable | |
| # of data devices | 16 | |
| # of log devices | 1 | |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
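The striped configurations in the tables above can be sanity-checked with a small Python sketch that sums the published per-disk limits of the premium storage v1 tiers used here. The tier figures below reflect the documented per-disk limits; verify them against the current Azure managed disk documentation before sizing a production system.

```python
# Sketch: aggregate IOPS/throughput of a RAID0 stripe set built from Azure
# premium storage v1 disks, using the published per-disk limits for the
# tiers that appear in the disk configuration tables.

PREMIUM_V1 = {            # tier: (IOPS, MB/s) per disk
    "P10": (500, 100),
    "P20": (2300, 150),
    "P30": (5000, 200),
    "P40": (7500, 250),
}

def stripe_limits(tier: str, disk_count: int) -> tuple:
    """Return (total IOPS, total MB/s) for disk_count disks of tier in RAID0."""
    iops, mbps = PREMIUM_V1[tier]
    return disk_count * iops, disk_count * mbps

# A 4 x P30 data stripe, as used in the examples:
print(stripe_limits("P30", 4))  # (20000, 800)
```

This only models the disk-side limits; the VM SKU itself imposes separate IOPS and throughput caps that must also be checked.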
An example of a configuration for a larger SQL Server instance with a database size
between 2,000 GB and 4,000 GB, such as a larger SAP Business Suite system, could look
like:

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| Accelerated Networking | Enable | |
| # of data devices | 24 | |
| # of log devices | 1 | |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0); Premium storage v2: 4 x 500 GiB - 800 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
An example of a configuration for a large SQL Server instance with a database size of 4
TB+, such as a large globally used SAP Business Suite system, could look like:

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| Accelerated Networking | Enable | |
| # of data devices | 32 | |
| # of log devices | 1 | |
| # and type of data disks | Premium storage v1: 4+ x P40 (RAID0); Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 4,500 IOPS and 125 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |

| Configuration | DBMS VM | Comments |
| --- | --- | --- |
| # of data files | 32 | |
| # of log files | 1 | |
| # and type of data disks | Premium storage v1: 16 x P40 | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P60 | Using Write Accelerator |
1. Use the latest DBMS release, like SQL Server 2019, that has the most advantages in
Azure.
2. Carefully plan your SAP system landscape in Azure to balance the data file layout
and Azure restrictions:
Don't have too many disks, but have enough to ensure you can reach your
required IOPS.
Only stripe across disks if you need to achieve a higher throughput.
3. Never install software or place any files that require persistence on the D:\
drive. This drive is non-permanent, and anything on it can be lost at a
Windows reboot or VM restart.
4. Use your DBMS vendor's HA/DR solution to replicate database data.
5. Always use Name Resolution, don't rely on IP addresses.
6. Using SQL Server TDE, apply the latest SQL Server patches.
7. Be careful using SQL Server images from the Azure Marketplace. If you use one
of them, you must change the instance collation before installing any SAP
NetWeaver system on it.
8. Install and configure the SAP Host Monitoring for Azure as described in
Deployment Guide.
Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
Azure Virtual Machines Oracle database
deployment for SAP workload
Article • 04/21/2024
This document covers several different areas to consider when deploying Oracle
Database for SAP workload in Azure IaaS. Before you read this document, we
recommend you read Considerations for Azure Virtual Machines DBMS deployment for
SAP workload. We also recommend that you read other guides in the SAP workload on
Azure documentation.
You can find information about Oracle versions and corresponding OS versions that are
supported for running SAP on Oracle on Azure in SAP Note 2039619 .
General information about running SAP Business Suite on Oracle can be found at SAP
on Oracle . Oracle supports running Oracle databases on Microsoft Azure. For more
information about general support for Windows Hyper-V and Azure, check the Oracle
and Microsoft Azure FAQ .
| Note number | Note title |
| --- | --- |
| 1738053 | SAPinst for Oracle ASM installation |
| 2896926 | ASM disk group compatibility NetWeaver |
| 1550133 | Using Oracle Automatic Storage Management (ASM) with SAP NetWeaver based Products |
| 888626 | Redo log layout for high-end systems |
| 105047 | Support for Oracle functions in the SAP environment |
| 974876 | Oracle Transparent Data Encryption (TDE) |
| 2936683 | Oracle Linux 8: SAP Installation and Upgrade |
| 1672954 | Oracle 11g, 12c, 18c and 19c: Usage of hugepages on Linux |
The specific scenario of SAP applications using Oracle Databases is supported as well.
Details are discussed in the next part of the document.
1. Use the most recent Oracle Linux version available (Oracle Linux 8.6 or higher).
2. Use the most recent Oracle Database version available with the latest SAP Bundle
Patch (SBP) (Oracle 19 Patch 15 or higher). See 2799920 - Patches for 19c: Database .
3. Use Automatic Storage Management (ASM) for small, medium, and large sized
databases on block storage.
4. Azure Premium Storage SSD should be used. Don't use Standard or other storage
types.
5. ASM removes the requirement for Mirror Log. Follow the guidance from Oracle in
Note 888626 - Redo log layout for high-end systems .
6. Use ASMLib and don't use udev.
7. Azure NetApp Files deployments should use Oracle dNFS (Oracle’s own high
performance Direct NFS solution).
8. Large Oracle databases benefit greatly from large System Global Area (SGA) sizes.
Large customers should deploy on Azure M-series VMs with 4 TB or more of RAM.
For information about which Oracle versions and corresponding OS versions are
supported for running SAP on Oracle on Azure Virtual Machines, see SAP
Note 2039619 .
General information about running SAP Business Suite on Oracle can be found in
the SAP on Oracle community page . SAP on Oracle on Azure is only supported on
Oracle Linux (and not Suse or Red Hat) for application and database servers. ASCS/ERS
servers can use RHEL/SUSE because Oracle client isn't installed or used on these VMs.
Application Servers (PAS/AAS) shouldn't be installed on these VMs. Refer to SAP Note
3074643 - OLNX: FAQ: if Pacemaker for Oracle Linux is supported in SAP Environment .
Oracle Real Application Cluster (RAC) isn't supported on Azure because RAC would
require Multicast networking.
Storage configuration
There are two recommended storage deployment patterns for SAP on Oracle on Azure:
Oracle Automatic Storage Management (ASM), and Azure NetApp Files (ANF) with
Oracle dNFS (Direct NFS).
Customers currently running Oracle databases on EXT4 or XFS file systems with Logical
Volume Manager (LVM) are encouraged to move to ASM. There are considerable
performance, administration, and reliability advantages to running on ASM compared to
LVM. ASM reduces complexity, improves supportability, and makes administration tasks
simpler. This documentation contains links for Oracle Database Administrators (DBAs) to
learn how to install and manage ASM.
Azure provides multiple storage solutions. The following table details the support status:

| Storage type | Oracle support | Sector size | Oracle Linux 8.x or higher | Windows Server 2019 |
| --- | --- | --- | --- | --- |
| Block storage types | | | | |
| Standard HDD | Not supported | | | |
| Network storage types | | | | |

1. 512e is supported on Premium SSD v2 for Windows systems. 512e configurations aren't
recommended for Linux customers. Migrate to 4K Native using the procedure in MOS
512/512e sector size to 4K Native Review (Doc ID 1133713.1)
Other considerations that apply include:
1. No support for DIRECTIO with 4K Native sector size. Recommended settings for
FILESYSTEMIO_OPTIONS for LVM configurations:
2. Oracle 19c and higher fully supports 4K Native sector size with both ASM and LVM
3. Oracle 19c and higher on Linux – when moving from 512e storage to 4K Native
storage, log sector sizes must be changed
4. To migrate from 512/512e sector size to 4K Native Review (Doc ID 1133713.1) – see
section "Offline Migration to 4KB Sector Disks"
5. SAPInst writes to the pfile during installation. If the $ORACLE_HOME/dbs is on a 4K
disk set filesystemio_options=asynch and see the Section "Datafile Support of 4kB
Sector Disks" in MOS Supporting 4K Sector Disks (Doc ID 1133713.1)
6. No support for ASM on Windows platforms
7. No support for 4K Native sector size for Log volume on Windows platforms. SSDv2
and Ultra Disk must be changed to 512e via the "Edit Disk" pencil icon in the Azure
Portal
8. 4K Native sector size is supported only on Data volumes for Windows platforms.
4K isn't supported for Log volumes on Windows
9. We recommend reviewing these MOS articles:
Oracle Linux: File System's Buffer Cache versus Direct I/O (Doc ID 462072.1)
Supporting 4K Sector Disks (Doc ID 1133713.1)
Using 4k Redo Logs on Flash, 4k-Disk and SSD-based Storage (Doc ID
1681266.1)
Things To Consider For Setting filesystemio_options And disk_asynch_io (Doc
ID 1987437.1)
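The 512 native / 512e / 4K Native distinction running through the considerations above can be expressed as a small classification sketch. On Linux, the two inputs can be read from /sys/block/<device>/queue/logical_block_size and physical_block_size; the function itself is a simplified illustration, not an Oracle or Azure tool.

```python
# Sketch: classify a disk's sector layout from its logical and physical
# block sizes, mirroring the 512n / 512e / 4K Native distinction above.

def sector_layout(logical: int, physical: int) -> str:
    if logical == 4096 and physical == 4096:
        return "4K native"
    if logical == 512 and physical == 4096:
        return "512e"        # 512-byte emulation on 4K physical sectors
    if logical == 512 and physical == 512:
        return "512 native"
    return "unknown"

print(sector_layout(512, 4096))   # 512e
print(sector_layout(4096, 4096))  # 4K native
```

A disk that reports as "512e" on Windows works with Premium SSD v2, while Linux systems should be migrated to 4K Native per the MOS procedure referenced above.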
The following ASM limits exist for Oracle Database 12c or later:
511 disk groups, 10,000 ASM disks in a Disk Group, 65,530 ASM disks in a storage
system, 1 million files for each Disk Group. More info here: Performance and Scalability
Considerations for Disk Groups (oracle.com)
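A quick way to reason about these limits is a small validation sketch; the limit values are the ones quoted above for Oracle Database 12c or later, and the checker itself is illustrative.

```python
# Sketch: sanity-check a planned ASM layout against the Oracle 12c+ limits
# quoted above (511 disk groups, 10,000 ASM disks per disk group,
# 65,530 ASM disks per storage system).

ASM_LIMITS = {
    "disk_groups": 511,
    "disks_per_group": 10_000,
    "disks_total": 65_530,
}

def check_asm_plan(disk_groups: int, max_disks_in_a_group: int,
                   total_disks: int) -> list:
    """Return a list of violated limits (empty list means the plan fits)."""
    problems = []
    if disk_groups > ASM_LIMITS["disk_groups"]:
        problems.append("too many disk groups")
    if max_disks_in_a_group > ASM_LIMITS["disks_per_group"]:
        problems.append("too many disks in one disk group")
    if total_disks > ASM_LIMITS["disks_total"]:
        problems.append("too many disks in the storage system")
    return problems

# A typical SAP layout is far below the limits:
print(check_asm_plan(disk_groups=4, max_disks_in_a_group=8, total_disks=32))  # []
```

In practice SAP deployments use a handful of disk groups, so these limits matter mainly for very large consolidated storage systems.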
Review the ASM documentation in the relevant SAP Installation Guide for Oracle
available from https://help.sap.com/viewer/nwguidefinder
Variant 1 – small to medium data volumes up to 3 TB,
restore time not critical
Customers have small or medium sized databases where backup and/or restore plus
recovery of all databases can be accomplished using RMAN in a timely fashion.
Example: When a complete Oracle ASM disk group, with data files, from one or more
databases is broken and all data files from all databases need to be restored to a newly
created Oracle ASM disk group using RMAN.
The DATA disk group stores the data files and the first copy of the control file.
To increase database size, add extra P30 disks.
Usually customers use RMAN, Azure Backup for Oracle, and/or disk snapshot
techniques in combination.
In this variant, each relevant database file type is separated into different
Oracle ASM disk groups.
| ASM Disk Group Name | Stores | Azure Storage |
| --- | --- | --- |
| +OLOG | Online redo logs (first copy) | 3-8 x P20 (512 GiB) or P30 (1 TiB) |
| +ARCH | Control file (second copy) | 3-8 x P20 (512 GiB) or P30 (1 TiB) |
| +RECO | Control file (third copy) | 3 x P30 (1 TiB), P40 (2 TiB) or P50 (4 TiB) |
Note
Azure Host Disk Cache for the DATA ASM Disk Group can be set to either Read
Only or None. All other ASM Disk Groups should be set to None. On BW or SCM a
separate ASM Disk Group for TEMP can be considered for large or busy systems.
ASM adds a disk to the disk group with: asmca -silent -addDisk -diskGroupName DATA -disk '/dev/sdd1'
ASM automatically rebalances the data. To check rebalancing, run this command.
Disk performance can be monitored from inside Oracle Enterprise Manager and via
external tools. Documentation that might help is available here:
OS-level monitoring tools can't monitor ASM disks, as there's no recognizable file
system. Free space monitoring must be done from within Oracle.
SAP on Oracle with ASM on Microsoft Azure - Part1 - Microsoft Tech Community
Oracle19c DB [ ASM ] installation on [ Oracle Linux 8.3 ] [ Grid | ASM | UDEV | OEL
8.3 ] [ VMware ] - YouTube
ASM Administrator's Guide (oracle.com)
Oracle for SAP Development Update (May 2022)
Performance and Scalability Considerations for Disk Groups (oracle.com)
Migrating to Oracle ASM with Oracle Enterprise Manager
Using RMAN to migrate to ASM | The Oracle Mentor (wordpress.com)
What is Oracle ASM to Azure IaaS? - Simple Talk (red-gate.com)
ASM Command-Line Utility (ASMCMD) (oracle.com)
Useful asmcmd commands - DBACLASS DBACLASS
Installing and Configuring Oracle ASMLIB Software
Azure NetApp Files (ANF) with Oracle dNFS
(Direct NFS)
The combination of Azure VMs and ANF is a robust and proven combination
implemented by many customers on an exceptionally large scale.
Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files - Microsoft Tech
Community
Even though the ANF is highly redundant, Oracle still requires a mirrored redo-logfile
volume. The recommendation is to create two separate volumes and configure origlogA
together with mirrlogB and origlogB together with mirrlogA. In this case, you make use
of a distributed load balancing of the redo-logfiles.
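The origlog/mirrlog pairing recommendation above can be sketched and checked programmatically. The volume and file-group names below follow the text; the checking function is an illustration of the rule, not an Oracle tool.

```python
# Sketch of the volume pairing recommended above: origlogA with mirrlogB on
# one ANF volume and origlogB with mirrlogA on the other, so losing a single
# volume never removes both copies of the same redo log group.

LAYOUT = {
    "volume1": {"origlogA", "mirrlogB"},
    "volume2": {"origlogB", "mirrlogA"},
}

def is_safe(layout: dict) -> bool:
    """True if no single volume holds both the original and mirror of a group."""
    for members in layout.values():
        if {"origlogA", "mirrlogA"} <= members or {"origlogB", "mirrlogB"} <= members:
            return False
    return True

print(is_safe(LAYOUT))  # True
```

Placing origlogA and mirrlogA on the same volume would pass a naive "two volumes" check but fail this one, which is exactly the misconfiguration the recommendation guards against.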
The mount option "nconnect" isn't recommended when the dNFS client is configured.
dNFS manages the IO channel and makes use of multiple sessions, so this option is
obsolete and can cause various issues. The dNFS client ignores the mount options
and handles the IO directly.
Both NFS versions (v3 and v4.1) with ANF are supported for the Oracle binaries, data-
and log-files.
We highly recommend using the Oracle dNFS client for all Oracle volumes.
| NFS version | Mount options |
| --- | --- |
| NFSv3 | rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime |
| NFSv4.1 | rw,vers=4.1,rsize=262144,wsize=262144,hard,timeo=600,noatime |
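Based on the mount options above, a sketch that assembles /etc/fstab entries for ANF volumes could look like the following. The host address, export path, and mount point are placeholders, not values from this article.

```python
# Sketch: build /etc/fstab entries for ANF volumes using the documented
# mount options. Host, export, and mount point names are illustrative.

NFS_OPTIONS = {
    "v3":   "rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime",
    "v4.1": "rw,vers=4.1,rsize=262144,wsize=262144,hard,timeo=600,noatime",
}

def fstab_line(host: str, export: str, mount_point: str,
               version: str = "v4.1") -> str:
    """Return one fstab entry for an ANF NFS volume."""
    return f"{host}:{export} {mount_point} nfs {NFS_OPTIONS[version]} 0 0"

print(fstab_line("10.0.0.4", "/oradata", "/oracle/SID/oradata"))
```

Remember that once dNFS is active, the Oracle client bypasses these kernel mount options for database I/O; they still matter for non-dNFS access to the volumes.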
ANF Backup
With ANF, some key features are available, like consistent snapshot-based backups,
low latency, and remarkably high performance. From version 6 of the AzAcSnap tool
(Azure Application Consistent Snapshot tool for ANF), Oracle databases can be
configured for consistent database snapshots.
Those snapshots remain on the actual data volume and must be copied away using
ANF Cross Region Replication (CRR) or other backup tools.
Note that when creating LVM volumes, the "-i" option must be used to evenly
distribute data across the number of disks in the LVM volume group.
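The "-i" recommendation above can be sketched as a helper that derives the lvcreate invocation. The volume group and logical volume names, and the 256 KB stripe size, are illustrative assumptions; adjust them to your layout.

```python
# Sketch: derive a striped lvcreate invocation with "-i" (number of stripes)
# set to the number of disks in the volume group, and "-I" (stripe size in KB).
# All names and the stripe size are illustrative.

def lvcreate_cmd(vg: str, lv: str, disk_count: int, stripe_kb: int = 256) -> str:
    """Return an lvcreate command that stripes across all disks in the VG."""
    return f"lvcreate -i {disk_count} -I {stripe_kb} -l 100%FREE -n {lv} {vg}"

print(lvcreate_cmd("vg_oracle_data", "lv_data", disk_count=4))
# lvcreate -i 4 -I 256 -l 100%FREE -n lv_data vg_oracle_data
```

Without "-i", LVM concatenates the disks instead of striping, so I/O concentrates on one disk at a time and the aggregate throughput of the disk set is never reached.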
Log write times can be improved on Azure M-Series VMs by enabling Write Accelerator.
Enable Azure Write Accelerator for the Azure Premium Storage disks used by the ASM
Disk Group for online redo log files. For more information, see Write Accelerator.
Using Write Accelerator is optional but can be enabled if the AWR report indicates
higher than expected log write times.
1. Ensure the Disk Throughput and IOPS is sufficient for the workload and at least
equal to the aggregate throughput of the disks
2. Consider enabling paid bursting especially for Redo Log disk(s)
3. For ANF, the Network throughput is important as all storage traffic is counted as
"Network" rather than Disk throughput
4. Review this blog for Network tuning for M-series Optimizing Network Throughput
on Azure M-series VMs HCMT (microsoft.com)
5. Review this link that describes how to use an AWR report to select the correct
Azure VM
6. Azure Intel Ev5 Edv5 and Edsv5-series - Azure Virtual Machines |Microsoft Docs
7. Azure AMD Eadsv5 Easv5 and Eadsv5-series - Azure Virtual Machines |Microsoft
Docs
8. Azure M-series/Msv2-series M-series - Azure Virtual Machines |Microsoft Docs and
Msv2/Mdsv2 Medium Memory Series - Azure Virtual Machines | Microsoft Docs
9. Azure Mv2 Mv2-series - Azure Virtual Machines | Microsoft Docs
Backup/restore
For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same
way as they are on bare metal and Hyper-V. Oracle Recovery Manager (RMAN) is also
supported for backups to disk and restores from disk.
For more information about how you can use Azure Backup and Recovery services for
Oracle databases, see:
Back up and recover an Oracle Database 12c database on an Azure Linux virtual
machine
Azure Backup service is also supporting Oracle backups as described in the
article Back up and recover an Oracle Database 19c database on an Azure Linux
VM using Azure Backup.
High availability
Oracle Data Guard is supported for high availability and disaster recovery purposes. To
achieve automatic failover in Data Guard, you need to use Fast-Start Failover (FSFA). The
Observer functionality (FSFA) triggers the failover. If you don't use FSFA, you can only
use a manual failover configuration. For more information, see Implement Oracle Data
Guard on an Azure Linux virtual machine.
Disaster recovery aspects for Oracle databases in Azure are presented in the
article Disaster recovery for an Oracle Database 12c database in an Azure environment.
Another good resource is the Oracle whitepaper Setting up Oracle 12c Data Guard for SAP Customers .
Large SAP customers running on high-memory Azure VMs benefit greatly from
HugePages, as described in this article.
On NUMA systems, vm.min_free_kbytes should be set to 524288 * <# of NUMA nodes>. See
Oracle Linux : Recommended Value of vm.min_free_kbytes Kernel Tuning Parameter
(Doc ID 2501269.1...
Oracle web console Oracle Linux: Install Cockpit Web Console on Oracle Linux
Upstream Cockpit Project — Cockpit Project (cockpit-project.org)
Oracle Linux 8: Package Management made easy with free videos | Oracle Linux Blog
Oracle® Linux 8 Managing Software on Oracle Linux - Chapter 1 Yum DNF
Memory and NUMA configurations can be tested and benchmarked with a useful tool -
Oracle Real Application Testing (RAT)
Oracle Real Application Testing: What Is It and How Do You Use It? (aemcorp.com)
Information on UDEV Log Corruption issue Oracle Redolog corruption on Azure | Oracle
in the field (wordpress.com)
Data corruption on Hyper-V or Azure when running Oracle ASM - Red Hat Customer
Portal
Set up Oracle ASM on an Azure Linux virtual machine - Azure Virtual Machines |
Microsoft Docs
1. The following Windows releases are recommended: Windows Server 2022 (only
from Oracle Database 19.13.0 on) Windows Server 2019 (only from Oracle
Database 19.5.0 on)
2. There's no support for ASM on Windows. Windows Storage Spaces should be used
to aggregate disks for optimal performance
3. Install the Oracle Home on a dedicated independent disk (don't install Oracle
Home on the C: Drive)
4. All disks must be formatted NTFS
5. Follow the Windows Tuning guide from Oracle and enable large pages, lock pages
in memory and other Windows specific settings
At the time of writing, ASM for Windows customers on Azure isn't supported. The SAP
Software Provisioning Manager (SWPM) for Windows doesn't currently support ASM.
The disk selection for hosting Oracle's online redo logs is driven by IOPS requirements.
It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as
the volume, IOPS, and throughput satisfy the requirements.
Next steps
Read the article
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
IBM Db2 Azure Virtual Machines DBMS
deployment for SAP workload
Article • 03/08/2024
With Microsoft Azure, you can migrate your existing SAP application running on IBM Db2 for Linux, UNIX, and
Windows (LUW) to Azure virtual machines. With SAP on IBM Db2 for LUW, administrators and developers can
still use the same development and administration tools, which are available on-premises. General information
about running SAP Business Suite on IBM Db2 for LUW is available via the SAP Community Network (SCN) in
SAP on IBM Db2 for Linux, UNIX, and Windows .
For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note 2233094 .
There are various articles for SAP workload on Azure. We recommend beginning with Get started with SAP on
Azure VMs and then read about other areas of interest.
The following SAP Notes are related to SAP on Azure regarding the area covered in this document:
| Note number | Note title |
| --- | --- |
| 2233094 | DB6: SAP Applications on Azure Using IBM DB2 for Linux, UNIX, and Windows - Additional Information |
As a preread to this document, review Considerations for Azure Virtual Machines DBMS deployment for SAP
workload. Review other guides in the SAP workload on Azure.
For information about supported SAP products and Azure VM(Virtual Machines) types, refer to SAP Note
1928533 .
IBM Db2 for Linux, UNIX, and Windows Configuration
Guidelines for SAP Installations in Azure VMs
Storage Configuration
For an overview of Azure storage types for SAP workload, consult the article Azure Storage types for SAP
workload All database files must be stored on mounted disks of Azure block storage (Windows: NTFS, Linux:
xfs, supported as of Db2 11.1, or ext3).
Remote shared volumes like the Azure services in the listed scenarios are NOT supported for Db2 database
files:
Remote shared volumes like the Azure services in the listed scenarios are supported for Db2 database files:
Hosting Linux guest OS based Db2 data and log files on NFS shares hosted on Azure NetApp Files is
supported!
If you're using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload apply to deployments with the
Db2 DBMS as well.
As explained earlier in the general part of the document, quotas on IOPS and throughput for Azure disks exist. The
exact quotas depend on the VM type used. A list of VM types with their quotas can be found here
(Linux) and here (Windows).
As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single
mounted disk. However, you should always separate the data files and transaction log files onto different
disks/VHDs.
For performance considerations, also refer to chapter 'Data Safety and Performance Considerations for
Database Directories' in SAP installation guides.
Alternatively, you can use Windows Storage Pools, which are only available in Windows Server 2012 and higher,
as described in Considerations for Azure Virtual Machines DBMS deployment for SAP workload. On Linux you can
use LVM or mdadm to create one large logical device over multiple disks.
For Azure M-series VMs, you can reduce the latency of writes into the transaction logs by factors, compared to
Azure premium storage performance, by using Azure Write Accelerator. Therefore, you should deploy Azure
Write Accelerator for the one or more VHDs that form the volume for the Db2 transaction logs. Details can be read
in the document Write Accelerator.
IBM Db2 LUW 11.5 released support for 4-KB sector size. However, you need to enable the usage of 4-KB sector
size with 11.5 by the configuration setting db2set DB2_4K_DEVICE_SUPPORT=ON as documented in:
For older Db2 versions, a 512 Byte sector size must be used. Premium SSD disks are 4-KB native and have 512
Byte emulation. Ultra disk uses 4-KB sector size by default. You can enable 512 Byte sector size during creation
of Ultra disk. Details are available Using Azure ultra disks. This 512 Byte sector size is a prerequisite for IBM
Db2 LUW versions lower than 11.5.
On Windows using Storage pools for Db2 storage paths for log_dir , sapdata and saptmp directories, you
must specify a physical disk sector size of 512 Bytes. When using Windows Storage Pools, you must create the
storage pools manually via command line interface using the parameter -LogicalSectorSizeDefault . For more
information, see New-StoragePool.
Following is a baseline configuration for various sizes and uses of SAP on Db2 deployments, from small to
large. The list is based on Azure premium storage. However, Azure Ultra disk is fully supported with Db2 as
well and can be used instead. Use the values for capacity, burst throughput, and burst IOPS to define the Ultra
disk configuration. You can limit the IOPS for the /db2/ <SID> /log_dir volume to around 5,000 IOPS.
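Translating a premium storage row into an Ultra disk configuration, with the roughly 5,000 IOPS cap on the log directory suggested above, could be sketched like this; the function and field names are illustrative, not an Azure API.

```python
# Sketch: derive an Ultra disk configuration from a premium storage row,
# carrying over capacity and burst values and capping the log directory at
# ~5,000 IOPS as suggested above. Field names are illustrative.

def ultra_disk_config(capacity_gb: int, burst_iops: int, burst_mbps: int,
                      is_log_dir: bool = False) -> dict:
    iops = min(burst_iops, 5000) if is_log_dir else burst_iops
    return {"size_gb": capacity_gb, "iops": iops, "mbps": burst_mbps}

# A 4,096 GB log volume that bursts to 20,000 IOPS on premium storage
# only needs ~5,000 provisioned IOPS on Ultra disk:
print(ultra_disk_config(4096, 20000, 800, is_log_dir=True))
# {'size_gb': 4096, 'iops': 5000, 'mbps': 800}
```

Since Ultra disk is billed by provisioned IOPS and throughput, capping the log volume this way avoids paying for IOPS the sequential log workload can't use.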
Extra small SAP system: database size 50 - 200 GB: example Solution Manager
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCPU: 4 | /db2/ <SID> /sapdata | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 256 KB | ReadOnly |
Small SAP system: database size 200 - 750 GB: small Business Suite
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCPU: 16 | /db2/ <SID> /sapdata | P15 | 4 | 4,400 | 500 | 1,024 | 14,000 | 680 | 256 KB | ReadOnly |
| RAM: 128 GiB | /db2/ <SID> /saptmp | P6 | 2 | 480 | 100 | 128 | 7,000 | 340 | 128 KB | |
Medium SAP system: database size 500 - 1000 GB: small Business Suite
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCPU: 32 | /db2/ <SID> /sapdata | P30 | 2 | 10,000 | 400 | 2,048 | 10,000 | 400 | 256 KB | ReadOnly |
| RAM: 256 GiB | /db2/ <SID> /saptmp | P10 | 2 | 1,000 | 200 | 256 | 7,000 | 340 | 128 KB | |
Large SAP system: database size 750 - 2000 GB: Business Suite
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCPU: 64 | /db2/ <SID> /sapdata | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 256 KB | ReadOnly |
| RAM: 504 GiB | /db2/ <SID> /saptmp | P15 | 2 | 2,200 | 250 | 512 | 7,000 | 340 | 128 KB | |
Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system
| VM Name / Size | Db2 mount point | Azure Premium Disk | # of Disks | IOPS | Throughput [MB/s] | Size [GB] | Burst IOPS | Burst Throughput [MB/s] | Stripe size | Caching |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCPU: 128 | /db2/ <SID> /sapdata | P40 | 4 | 30,000 | 1,000 | 8,192 | 30,000 | 1,000 | 256 KB | ReadOnly |
| RAM: 2,048 GiB | /db2/ <SID> /saptmp | P20 | 2 | 4,600 | 300 | 1,024 | 7,000 | 340 | 128 KB | |
| | /db2/ <SID> /log_dir | P30 | 4 | 20,000 | 800 | 4,096 | 20,000 | 800 | 64 KB | WriteAccelerator |
Shared volume for saptmp1, sapmnt, usr_sap, <sid> _home, db2 <sid> _home, db2_software
One data volume for sapdata1 to sapdatan
One log volume for the redo log directory
One volume for the log archives and backups
A fifth potential volume could be an ANF volume that you use for longer-term backups, which you snapshot
and store in Azure Blob storage.
As for mount options, mounting those volumes could look like the following (replace <SID> and <sid> with the
SID of your SAP system):
vi /etc/idmapd.conf
# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Backup/Restore
The backup/restore functionality for IBM Db2 for LUW is supported in the same way as on standard Windows
Server Operating Systems and Hyper-V.
Make sure that you have a valid database backup strategy in place.
As in bare-metal deployments, backup/restore performance depends on how many volumes can be read in
parallel and what the throughput of those volumes might be. In addition, the CPU consumption used by
backup compression may play a significant role on VMs with up to eight CPU threads. Therefore, one can
assume:
The fewer the number of disks used to store the database devices, the smaller the overall throughput in
reading
The smaller the number of CPU threads in the VM, the more severe the impact of backup compression
The fewer targets (Stripe Directories, disks) to write the backup to, the lower the throughput
To increase the number of targets to write to, two options can be used/combined depending on your needs:
Striping the backup target volume over multiple disks to improve the IOPS throughput on that striped
volume
Using more than one target directory to write the backup to
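The observations above can be condensed into a simple throughput model: backup speed is bounded by how fast the source volumes can be read and how fast all backup targets together can be written. All throughput numbers in the example are placeholders for illustration.

```python
# Sketch of the reasoning above: effective backup throughput is the minimum
# of aggregate source read throughput and aggregate target write throughput.
# Per-disk figures are illustrative placeholders, not measured values.

def backup_throughput_mbps(source_disks: int, mbps_per_source: float,
                           targets: int, mbps_per_target: float) -> float:
    return min(source_disks * mbps_per_source, targets * mbps_per_target)

# Adding backup target directories helps only until the source read
# throughput becomes the bottleneck:
print(backup_throughput_mbps(4, 200, 1, 150))  # 150.0
print(backup_throughput_mbps(4, 200, 4, 150))  # 600.0
print(backup_throughput_mbps(4, 200, 8, 150))  # 800.0
```

The model ignores backup compression, which on small VMs can shift the bottleneck from storage to CPU, as noted above.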
Note
Db2 on Windows doesn't support the Windows VSS technology. As a result, the application-consistent VM
backup of the Azure Backup service can't be used for VMs in which the Db2 DBMS is deployed.
Linux Pacemaker
Important
For Db2 versions 11.5.6 and higher, we highly recommend the integrated Pacemaker solution from IBM.
SLES: High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker
RHEL: High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration
have working name resolution, the setup in Azure doesn't differ from any setup done on-premises. Relying on
IP resolution only isn't recommended.
Don't use Geo-Replication for the storage accounts that store the database disks. For more information, see
the document Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
Accelerated Networking
For Db2 deployments on Windows, we highly recommend using the Azure functionality of Accelerated
Networking as described in the document Azure Accelerated Networking . Also consider recommendations
made in Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
If the IOPS or I/O throughput of a single Azure VHD isn't sufficient, you can use LVM (Logical Volume
Manager) or MDADM as described in the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload to create one large logical device over multiple disks. For the disks containing
the Db2 storage paths for your sapdata and saptmp directories, you must specify a physical disk sector size of
512 Bytes.
Other
All other general areas, like Azure availability sets or SAP monitoring, apply to deployments of VMs with the
IBM database as well. These general areas are described in Considerations for Azure Virtual Machines DBMS
deployment for SAP workload.
Next steps
Read the article:
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP ASE Azure Virtual Machines DBMS
deployment for SAP workload
Article • 03/27/2023
This document covers several different areas to consider when deploying SAP ASE in
Azure IaaS. As a precondition, you should have read the document
Considerations for Azure Virtual Machines DBMS deployment for SAP workload and
other guides in the SAP workload on Azure documentation. This document covers SAP
ASE running on Linux and on Windows operating systems. The minimum supported
release on Azure is SAP ASE 16.0.02 (Release 16 Support Pack 2). We recommend
deploying the latest version of SAP ASE and the latest patch level. As a minimum, SAP ASE
16.0.03.07 (Release 16 Support Pack 3 Patch Level 7) is recommended. The most recent
version of SAP ASE can be found in Targeted ASE 16.0 Release Schedule and CR list
Information .
Additional information about release support with SAP applications or installation media
locations can be found, besides in the SAP Product Availability Matrix, in these locations:
Remark: Throughout documentation within and outside the SAP world, the name of the
product is referenced as Sybase ASE or SAP ASE or in some cases both. In order to stay
consistent, we use the name SAP ASE in this documentation.
Microsoft Azure offers numerous different virtual machine types that allow you to run
smallest SAP systems and landscapes up to large SAP systems and landscapes with
thousands of users. SAP sizing SAPS numbers of the different SAP certified VM SKUs is
provided in SAP support note #1928533 .
Documentation to install SAP ASE on Windows can be found in the SAP ASE Installation
Guide for Windows
Lock Pages in Memory is a setting that prevents the SAP ASE database buffer from
being paged out. This setting is useful for large busy systems with a high memory
demand. Contact BC-DB-SYB for more information.
cat /proc/meminfo
The page size is typically 2048 KB. For details see the article Huge Pages on Linux
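Following the cat /proc/meminfo check above, a small sketch that extracts the huge page size from that output could look like this; it is shown here against a sample string so the logic is self-contained.

```python
# Sketch: extract the huge page size (in kB) from /proc/meminfo output.
# SAMPLE is an illustrative excerpt; on a real system, read the file instead.

SAMPLE = """\
MemTotal:       263842020 kB
HugePages_Total:   120000
Hugepagesize:       2048 kB
"""

def hugepage_size_kb(meminfo_text: str) -> int:
    for line in meminfo_text.splitlines():
        if line.startswith("Hugepagesize:"):
            return int(line.split()[1])
    raise ValueError("Hugepagesize not found")

print(hugepage_size_kb(SAMPLE))  # 2048
```

On a live system you would pass open("/proc/meminfo").read() instead of the sample text; a value of 2048 kB corresponds to the typical 2 MB huge page size mentioned above.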
The SAP ASE transaction log disk write performance can be improved by enabling the
M-series Write Accelerator. Write Accelerator should be tested carefully with SAP ASE
because of the way SAP ASE performs log writes. Review SAP support note #2816580
and consider running a performance test.
Write Accelerator is designed for the transaction log disk only, and the disk-level cache
should be set to NONE. Don't be surprised if Azure Write Accelerator doesn't show
improvements similar to those seen with other DBMS. Based on the way SAP ASE writes
into the transaction log, there could be little to no acceleration by Azure Write
Accelerator.
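Write Accelerator is enabled per data disk on the VM. A minimal Azure CLI sketch could look like the following; the resource group name, VM name, and LUN are placeholders you'd replace with your own values.

```shell
# Hypothetical names: myResourceGroup, myAseVm. LUN 2 is assumed to hold the
# transaction log disk; Write Accelerator is enabled on that disk only.
az vm update \
  --resource-group myResourceGroup \
  --name myAseVm \
  --write-accelerator 2=true
```

Because Write Accelerator is supported only on M-series VMs and only for Premium SSD disks, the command fails on other VM families.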
Separate disks are recommended for data devices and log devices. The system
databases sybsecurity and saptools don't require dedicated disks and can be placed on
the disks containing the SAP database data and log devices.
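On Linux, the RAID0 stripe sets mentioned in the examples can be built with LVM. The following is a minimal sketch under the assumption that two data disks are attached as /dev/sdc and /dev/sdd; the device names, volume group name, and mount point are placeholders.

```shell
# Assumption: /dev/sdc and /dev/sdd are the attached Azure data disks.
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate vg_sybase_data /dev/sdc /dev/sdd
# -i 2 stripes the logical volume across both physical volumes (RAID0);
# -I 256 sets a 256 KB stripe size.
sudo lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_sybase_data
sudo mkfs.xfs /dev/vg_sybase_data/lv_data
sudo mkdir -p /sybase/data
sudo mount /dev/vg_sybase_data/lv_data /sybase/data
```

On Windows, the equivalent is a striped volume built with Storage Spaces.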
Note
The examples given below are for illustrative purposes and can be modified based on
individual needs. Due to the design of SAP ASE, the number of data devices isn't as
critical as with other databases, and the numbers of data devices detailed in this
document are a guide only. Treat the suggested configurations as what they are:
starting points that still need fine-tuning to your workload and cost efficiency.
An example of a configuration for a little SAP ASE DB Server with a database size
between 50 GB – 250 GB could look like
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| # of data devices | 4 | 4 | --- |
| # of log devices | 1 | 1 | --- |
| # and type of data disks | Premium storage v1: 2 x P10 (RAID0)<br />Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Premium storage v1: 2 x P10 (RAID0)<br />Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Cache = NONE |
| # of backup devices | 4 | 4 | --- |
An example of a configuration for a small SAP ASE DB Server with a database size
between 250 GB – 750 GB, such as a smaller SAP Business Suite system, could look like
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| # of data devices | 8 | 8 | --- |
| # of log devices | 1 | 1 | --- |
| # and type of data disks | Premium storage v1: 4 x P20 (RAID0)<br />Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Premium storage v1: 4 x P20 (RAID0)<br />Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Cache = NONE |
| # of backup devices | 4 | 4 | --- |
An example of a configuration for a medium SAP ASE DB Server with a database size
between 750 GB – 2,000 GB, such as a larger SAP Business Suite system, could look like
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| # of data devices | 16 | 16 | --- |
| # of log devices | 1 | 1 | --- |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0)<br />Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Premium storage v1: 4 x P30 (RAID0)<br />Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput | Cache = NONE |
| # of backup devices | 4 | 4 | --- |
An example of a configuration for a larger SAP ASE DB Server with a database size
between 2,000 GB – 4,000 GB, such as a larger SAP Business Suite system, could look like
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM type | E96(d)s_v5 (96 vCPU/672 GiB RAM) | E96(d)s_v5 (96 vCPU/672 GiB RAM) | --- |
| # of data devices | 16 | 16 | --- |
| # of log devices | 1 | 1 | --- |
| # and type of data disks | Premium storage v1: 4 x P30 (RAID0)<br />Premium storage v2: 4 x 500 GiB - 1,000 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Premium storage v1: 4 x P30 (RAID0)<br />Premium storage v2: 4 x 500 GiB - 1,000 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk | Cache = Read Only |
| # and type of log disks | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput | Premium storage v1: 1 x P20<br />Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput | Cache = NONE |
| # of backup devices | 4 | 4 | --- |
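As a rough sanity check of such a stripe set, the aggregate capability is simply the per-disk provisioning multiplied by the number of disks. For the four Premium SSD v2 data disks above (2,500 IOPS and 100 MB/sec provisioned per disk):

```shell
# Aggregate IOPS and throughput of a 4-disk RAID0 stripe,
# assuming 2,500 IOPS and 100 MB/sec provisioned per disk.
disks=4
iops_per_disk=2500
mbps_per_disk=100
echo "Aggregate IOPS: $((disks * iops_per_disk))"
echo "Aggregate MB/sec: $((disks * mbps_per_disk))"
```

Keep in mind that the effective limit is the smaller of this stripe aggregate and the VM-level disk IOPS/throughput cap of the chosen VM size.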
An example of a configuration for a large SAP ASE DB Server with a database size of 4
TB+, such as a larger globally used SAP Business Suite system, could look like
| Configuration | Windows | Linux | Comments |
| --- | --- | --- | --- |
| VM type | M-Series (1.0 to 4.0 TB RAM) | M-Series (1.0 to 4.0 TB RAM) | --- |
| # of data devices | 32 | 32 | --- |
| # of log devices | 1 | 1 | --- |
| # and type of data disks | Premium storage v1: 4+ x P30 (RAID0)<br />Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 3,000 IOPS and 125 MB/sec throughput per disk | Premium storage v1: 4+ x P30 (RAID0)<br />Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 3,000 IOPS and 125 MB/sec throughput per disk | Cache = Read Only, consider Azure Ultra disk |
| # and type of log disks | Premium storage v1: 1 x P30<br />Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Premium storage v1: 1 x P30<br />Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Consider Write Accelerator or Azure Ultra disk |
| # of backup devices | 16 | 16 | --- |
NFS v4.1 volumes hosted on Azure NetApp Files are another alternative for SAP ASE
database storage. The principal structure of such a configuration should look like:
In the example, the SID of the database was A11. The sizes and the performance tiers of
the Azure NetApp Files based volumes are dependent on the database volume and the
IOPS and throughput you require. For sapdata and saplog, we recommend starting with
the Ultra performance tier to be able to provide enough bandwidth. For many non-
production deployments, the Premium performance tier can be sufficient. For more
details on specific sizing and limitations of Azure NetApp Files for database usage, read
the chapter Sizing for HANA database on Azure NetApp Files in NFS v4.1 volumes on
Azure NetApp Files for SAP HANA.
Don't use drive D:\ or /temp space as database or log dump destination.
HA Aware with Fault Manager - The SAP kernel is an "HA Aware" application that
knows about the primary and secondary SAP ASE servers. There's no close
integration between the SAP ASE "HA Aware" solution and Azure; the Azure
internal load balancer isn't used. The solution is documented in the SAP ASE HADR
Users Guide.
Floating IP with Fault Manager – This solution can be used for SAP Business Suite
and non-SAP Business Suite applications. It utilizes the Azure internal load balancer,
and the SAP ASE database engine provides a probe port. The Fault Manager calls
SAPHostAgent to start or stop a secondary floating IP on the ASE hosts. This
solution is documented in SAP note #3086679 - SYB: Fault Manager: floating IP
address on Microsoft Azure.
Note
The failover times and other characteristics of the HA Aware and Floating IP
solutions are similar. When deciding between the two solutions, customers should
perform their own testing and evaluation, including factors such as planned and
unplanned failover times and other operational procedures.
Note
If an SAP ASE database is encrypted, Backup Dump Compression won't work.
See also SAP support note #2680905 .
As with on-premises systems, several steps are required to enable all SAP NetWeaver
functionality used by the Web Dynpro implementation of the DBACockpit. Follow SAP
support note #1245200 to enable the usage of Web Dynpros and generate the
required ones. When following the instructions in the above note, you also configure
the Internet Communication Manager (ICM) along with the ports to be used for http
and https connections. The default settings look like:
icm/server_port_0 = PROT=HTTP,PORT=8000,PROCTIMEOUT=600,TIMEOUT=600
icm/server_port_1 = PROT=HTTPS,PORT=443$$,PROCTIMEOUT=600,TIMEOUT=600
https://<fullyqualifiedhostname>:44300/sap/bc/webdynpro/sap/dba_cockpit
http://<fullyqualifiedhostname>:8000/sap/bc/webdynpro/sap/dba_cockpit
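Before troubleshooting the DBACockpit itself, a simple reachability check against the ICM ports can help; the hostname below is a placeholder, and `-k` is only used because a freshly configured ICM often still carries a self-signed certificate.

```shell
# Placeholder hostname - replace with your fully qualified host name.
# -I fetches headers only; -k skips TLS verification for self-signed certs.
curl -k -I "https://myhost.contoso.com:44300/sap/bc/webdynpro/sap/dba_cockpit"
curl -I "http://myhost.contoso.com:8000/sap/bc/webdynpro/sap/dba_cockpit"
```

An HTTP response (even a logon redirect) confirms the ICM ports are open end to end.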
Depending on how the Azure Virtual Machine hosting the SAP system is connected to
your AD and DNS, you need to make sure that ICM is using a fully qualified hostname
that can be resolved on the machine where you're opening the DBACockpit from. See
SAP support note #773830 to understand how ICM determines the fully qualified host
name based on profile parameters and set parameter icm/host_name_full explicitly if
necessary.
If you deployed the VM in a cloud-only scenario without cross-premises connectivity
between on-premises and Azure, you need to define a public IP address and a
domain label, which together determine the public DNS name of the VM.
After setting the SAP profile parameter icm/host_name_full to the DNS name of the
Azure VM, the link might look similar to:
https://mydomainlabel.westeurope.cloudapp.net:44300/sap/bc/webdynpro/sap/dba
_cockpit
http://mydomainlabel.westeurope.cloudapp.net:8000/sap/bc/webdynpro/sap/dba_c
ockpit
Add Inbound rules to the Network Security Group in the Azure portal for the
TCP/IP ports used to communicate with ICM
Add Inbound rules to the Windows Firewall configuration for the TCP/IP ports used
to communicate with the ICM
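As a sketch of the first step (the resource group and NSG names are placeholders), an inbound NSG rule for the two ICM ports could be created with the Azure CLI:

```shell
# Hypothetical names: myResourceGroup, myNsg. Opens the ICM http/https ports.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name Allow-ICM \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 8000 44300
```

Restrict the source address prefixes of such a rule to the networks that actually need DBACockpit access.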
Further information about the DBA Cockpit for SAP ASE can be found in the related
SAP Notes. Another useful document is SAP Applications on SAP Adaptive Server
Enterprise Best Practices for Migration and Runtime .
Next steps
Check the article SAP workloads on Azure: planning and deployment checklist
SAP MaxDB, liveCache, and Content
Server deployment on Azure VMs
Article • 02/10/2023
This document covers several different areas to consider when deploying MaxDB,
liveCache, and Content Server in Azure IaaS. As a precondition to this document, you
should have read the document Considerations for Azure Virtual Machines DBMS
deployment for SAP workload as well as other guides in the SAP workload on Azure
documentation.
It is highly recommended to use the newest version of the operating system, Microsoft
Windows Server 2016.
Storage configuration
Azure storage best practices for SAP MaxDB follow the general recommendations
mentioned in chapter Storage structure of a VM for RDBMS Deployments.
Important
Like other databases, SAP MaxDB also has data and log files. However, in SAP
MaxDB terminology the correct term is "volume" (not "file"). For example, there are
SAP MaxDB data volumes and log volumes. Do not confuse these with OS disk
volumes.
If you use Azure Storage accounts, set the Azure storage account that holds the
SAP MaxDB data and log volumes (data and log files) to Local Redundant Storage
(LRS) as specified in Considerations for Azure Virtual Machines DBMS deployment
for SAP workload.
Separate the IO path for SAP MaxDB data volumes (data files) from the IO path for
log volumes (log files). It means that SAP MaxDB data volumes (data files) have to
be installed on one logical drive and SAP MaxDB log volumes (log files) have to be
installed on another logical drive.
Set the proper caching type for each disk, depending on whether you use it for
SAP MaxDB data or log volumes (data and log files), and whether you use Azure
Standard or Azure Premium Storage, as described in Considerations for Azure
Virtual Machines DBMS deployment for SAP workload.
As long as the current IOPS quota per disk satisfies the requirements, it is possible
to store all the data volumes on a single mounted disk, and also store all database
log volumes on another single mounted disk.
If more IOPS and/or space are required, it is recommended to use Microsoft
Windows Storage Pools (only available in Microsoft Windows Server 2012 and
higher) to create one large logical device over multiple mounted disks. For more
details, see also Considerations for Azure Virtual Machines DBMS deployment for
SAP workload. This approach simplifies the administration overhead of managing
the disk space and avoids the effort of manually distributing files across multiple
mounted disks.
It is highly recommended to use Azure Premium Storage for MaxDB deployments.
Backup and Restore
When deploying SAP MaxDB into Azure, you must review your backup methodology.
Even if the system isn't a productive system, the SAP database hosted by SAP MaxDB
must be backed up periodically. Because Azure Storage keeps three images, a backup is
now less important for protecting your system against storage failure and more
important for protecting against operational or administrative failures. The primary
reason for maintaining a proper backup and restore plan is so that you can compensate
for logical or manual errors by providing point-in-time recovery capabilities. So the goal
is to either use backups to restore the database to a certain point in time or to use the
backups in Azure to seed another system by copying the existing database.
Backing up and restoring a database in Azure works the same way as it does for on-
premises systems, so you can use standard SAP MaxDB backup/restore tools, which are
described in one of the SAP MaxDB documentation documents listed in SAP Note
767598 .
To increase the number of targets to write to, there are two options that you can use,
possibly in combination, depending on your needs:
Striping a volume over multiple mounted disks has been discussed earlier in
Considerations for Azure Virtual Machines DBMS deployment for SAP workload.
Other considerations
All other general areas, such as Azure availability sets or SAP monitoring, also apply to
deployments of VMs with the SAP MaxDB database, as described in Considerations for
Azure Virtual Machines DBMS deployment for SAP workload. Other SAP MaxDB-specific
settings are transparent to Azure VMs and are described in different documents listed
in SAP Note 767598 and in these SAP Notes:
826037
1139904
1173395
It is highly recommended to use the newest version of the operating system Microsoft
Windows Server.
As SAP liveCache is an application that performs huge calculations, the amount and
speed of RAM and CPU have a major influence on SAP liveCache performance.
For the Azure VM types supported by SAP (SAP Note 1928533 ), all virtual CPU
resources allocated to the VM are backed by dedicated physical CPU resources of the
hypervisor. No overprovisioning (and therefore no competition for CPU resources) takes
place.
Similarly, for all Azure VM instance types supported by SAP, the VM memory is 100%
mapped to the physical memory - over-provisioning (over-commitment), for example, is
not used.
From this perspective, it is highly recommended to use the most recent Dv2, Dv3, Ev3,
and M-series VMs. The choice of the different VM types depends on the memory you
need for liveCache and the CPU resources you need. As with all other DBMS
deployments it is advisable to leverage Azure Premium Storage for performance critical
volumes.
As SAP liveCache is based on SAP MaxDB technology, all the Azure storage best practice
recommendations mentioned for SAP MaxDB described in this document are also valid
for SAP liveCache.
Backup and restore, including performance considerations, are already described in the
relevant SAP MaxDB chapters in this document.
Other considerations
All other general areas are already described in the relevant SAP MaxDB chapter.
It is highly recommended to use the newest version of SAP Content Server, and the
newest version of Microsoft IIS.
Check the latest supported versions of SAP Content Server and Microsoft IIS in the SAP
Product Availability Matrix (PAM) .
If you configure SAP Content Server to store files in the file system, it is recommended
to use a dedicated logical drive. Using Windows Storage Spaces enables you to also
increase logical disk size and IOPS throughput, as described in Considerations for Azure
Virtual Machines DBMS deployment for SAP workload.
Backup / Restore
If you configure the SAP Content Server to store files in the SAP MaxDB database, the
backup/restore procedure and performance considerations are already described in SAP
MaxDB chapters of this document.
If you configure the SAP Content Server to store files in the file system, one option is to
execute manual backup/restore of the whole file structure where the documents are
located. Similar to SAP MaxDB backup/restore, it is recommended to have a dedicated
disk volume for backup purpose.
Other
Other SAP Content Server-specific settings are transparent to Azure VMs and are
described in various documents and SAP Notes:
SAP NetWeaver
SAP Note 1619726
SAP BW NLS implementation guide with
SAP IQ on Azure
Article • 06/19/2023
Over the years, customers running the SAP Business Warehouse (BW) system see an
exponential growth in database size, which increases compute cost. To achieve the right
balance of cost and performance, customers can use near-line storage (NLS) to migrate
historical data.
The NLS implementation based on SAP IQ is the standard SAP method for moving
historical data out of a primary database (SAP HANA or AnyDB). The integration of SAP
IQ makes it possible to separate frequently accessed data from infrequently accessed
data, which reduces resource demand in the SAP BW system.
This guide provides guidelines for planning, deploying, and configuring SAP BW NLS
with SAP IQ on Azure. This guide covers common Azure services and features that are
relevant for SAP IQ NLS deployment and doesn't cover any NLS partner solutions.
This guide doesn't replace SAP's standard documentation on NLS deployment with SAP
IQ. Instead, it complements the official installation and administration documentation.
Solution overview
In an operative SAP BW system, the volume of data increases constantly because of
business and legal requirements. The large volume of data can affect the performance
of the system and increase the administration effort, which results in the need to
implement a data-aging strategy.
If you want to limit the amount of data in your SAP BW system without deleting it, you
can use data archiving. The data is first moved to archive or near-line storage and then
deleted from the SAP BW system. You can either access the data directly or load it back
as required, depending on how the data has been archived.
SAP BW users can use SAP IQ as a near-line storage solution. The adapter for SAP IQ as
a near-line solution is delivered with the SAP BW system. With NLS implemented,
frequently used data is stored in an SAP BW online database (SAP HANA or AnyDB).
Infrequently accessed data is stored in SAP IQ, which reduces the cost to manage data
and improves the performance of the SAP BW system. To ensure consistency between
online data and near-line data, the archived partitions are locked and are read-only.
SAP IQ supports two types of architecture: simplex and multiplex. In a simplex
architecture, a single instance of an SAP IQ server runs on a single virtual machine. Files
might be located on a host machine or on a network storage device.
Important
For the SAP NLS solution, only simplex architecture is available and evaluated by
SAP.
In Azure, the SAP IQ server must be implemented on a separate virtual machine (VM).
We don't recommend installing SAP IQ software on an existing server that already has
other database instances running, because SAP IQ uses complete CPU and memory for
its own usage. One SAP IQ server can be used for multiple SAP NLS implementations.
Support matrix
The support matrix for an SAP IQ NLS solution includes:
Operating system: SAP IQ is certified at the operating system level only. You can
run an SAP IQ certified operating system in an Azure environment as long as it's
compatible to run on Azure infrastructure. For more information, see SAP note
2133194 .
SAP BW compatibility: Near-line storage for SAP IQ is released only for SAP BW
systems that already run under Unicode. SAP note 1796393 contains information
about SAP BW.
Storage: In Azure, SAP IQ supports premium managed disks (Windows and Linux),
Azure shared disks (Windows only), and Azure NetApp Files (Linux only).
For more up-to-date information based on your SAP IQ release, see the Product
Availability Matrix .
Sizing
Sizing of SAP IQ is confined to CPU, memory, and storage. You can find general sizing
guidelines for SAP IQ on Azure in SAP note 1951789 . The sizing recommendation that
you get by following the guidelines needs to be mapped to certified Azure virtual
machine types for SAP. SAP note 1928533 provides the list of supported SAP products
and Azure VM types.
The SAP IQ sizing guide and sizing worksheet mentioned in SAP note 1951789 were
developed for native usage of an SAP IQ database. Because they don't reflect the
resource planning for an SAP IQ database used as near-line storage, you might end up
with unused resources for SAP NLS.
Azure resources
Regions
If you're already running your SAP systems on Azure, you've probably identified your
region. SAP IQ deployment must be in the same region as your SAP BW system for
which you're implementing the NLS solution.
To determine the architecture of SAP IQ, you need to ensure that the services required
by SAP IQ, like Azure NetApp Files (NFS for Linux only), are available in that region. To
check the service availability in your region, see the Products available by region
webpage.
Deployment options
To achieve redundancy of SAP systems in an Azure infrastructure, your application needs
to be deployed in either a flexible scale set, availability zones, or availability sets.
Although you can achieve SAP IQ high availability by using the SAP IQ multiplex
architecture, the multiplex architecture doesn't meet the requirements of the NLS
solution.
To achieve high availability for the SAP IQ simplex architecture, you need to configure a
two-node cluster with a custom solution. The two-node SAP IQ cluster can be deployed
in a flexible scale set with FD=1, availability zones, or availability sets. However, it's
advised to configure zone-redundant storage when setting up a highly available
solution across availability zones.
Virtual machines
Based on SAP IQ sizing, you need to map your requirements to Azure virtual machines.
This approach is supported in Azure for SAP products. SAP note 1928533 is a good
starting point that lists supported Azure VM types for SAP products on Windows and
Linux.
Beyond the selection of only supported VM types, you also need to check whether those
VM types are available in specific regions. You can check the availability of VM types on
the Products available by region webpage. To choose the pricing model, see Azure
virtual machines for SAP workload.
Tip
For production systems, we recommend that you use E-Series virtual machines
because of their core-to-memory ratio.
Storage
Azure Storage has various storage types available for customers. You can find details
about them in the article What disk types are available in Azure?.
Some of the storage types in Azure have limited use for SAP scenarios, but other types
are well suited or optimized for specific SAP workload scenarios. For more information,
see the Azure Storage types for SAP workload guide. It highlights the storage options
that are suited for SAP.
For SAP IQ on Azure, you can use the following Azure storage types. The choice
depends on your operating system (Windows or Linux) and deployment method
(standalone or highly available).
Shared disks are a feature of Azure managed disks that allows you to attach a
managed disk to multiple VMs simultaneously. Shared managed disks don't
natively offer a fully managed file system that can be accessed through SMB or
NFS. You need to use a cluster manager, like a Windows Server failover cluster
(WSFC), which handles cluster node communication and write locking.
SAP IQ deployment on Linux can use Azure NetApp Files as a file system (NFS
protocol) to install a standalone or a highly available solution. This storage offering
isn't available in all regions. For up-to-date information, see the Products available
by region webpage. SAP IQ deployment architecture with Azure NetApp Files is
discussed in the article Deploy SAP IQ-NLS HA solution using Azure NetApp Files
on SUSE Linux Enterprise Server .
The following table lists the recommendations for each storage type based on the
operating system:
Networking
Azure provides a network infrastructure that allows the mapping of all scenarios that can
be realized for an SAP BW system that uses SAP IQ as near-line storage. These scenarios
include connecting to on-premises systems, connecting to systems in different virtual
networks, and others. For more information, see Microsoft Azure networking for SAP
workloads .
Technically, you can achieve SAP IQ high availability by using a multiplex server
architecture, but the multiplex architecture doesn't meet the requirements of the
NLS solution. For simplex server architecture, SAP doesn't provide any features or
procedures to run SAP IQ in a high-availability configuration.
To set up SAP IQ high availability on Windows for simplex server architecture, you
need to set up a custom solution that requires extra configuration, like a Windows
Server failover cluster and shared disks. One such custom solution for SAP IQ on
Windows is described in detail in Deploy SAP IQ NLS HA solution using Azure
shared disk on Windows Server .
Depending on your SAP IQ database size, you can schedule your database backup
from any of the backup scenarios. But if you're using SAP IQ with the NLS interface
delivered by SAP, you might want to automate the backup process for an SAP IQ
database. Automation ensures that the SAP IQ database can always be recovered to
a consistent state without loss of data that's moved between the primary database
and the SAP IQ database. For details on setting up automation for SAP IQ near-line
storage, see SAP note 2741824 - How to setup backup automation for SAP IQ Cold
Store/Near-line Storage .
For a large SAP IQ database, you can use virtual backups. For more information, see
Virtual Backups , Introduction Virtual Backup in SAP Sybase IQ . Also see SAP
note 2461985 - How to Backup Large SAP IQ Database .
If you're using a network drive (SMB protocol) to back up and restore an SAP IQ
server on Windows, be sure to use the UNC path for backup. Three backslashes
( \\\ ) are required when you're using a UNC path for backup and restore:
SQL
Disaster recovery
This section explains the strategy for providing disaster recovery (DR) protection for the
SAP IQ NLS solution. It complements the Set up disaster recovery for SAP article, which
represents the primary resource for an overall SAP DR approach. The process described
in that article is presented at an abstract level. You need to validate the exact steps and
thoroughly test your DR strategy.
For SAP IQ, see SAP note 2566083 , which describes methods to implement a DR
environment safely. In Azure, you can also use Azure Site Recovery for an SAP IQ DR
strategy. The strategy for SAP IQ DR depends on the way it's deployed in Azure, and it
should also be in line with your SAP BW system.
You can use Azure Site Recovery to replicate a standalone SAP IQ virtual machine in the
secondary region. It replicates the servers and all the attached managed disks to the
secondary region so that if a disaster or an outage occurs, you can easily fail over to
your replicated environment and continue working. To start replicating the SAP IQ VMs
to the Azure DR region, follow the guidance in Replicate a virtual machine to Azure.
When planning the DR deployment, decide:
Whether you need the same highly available SAP IQ system on the DR site.
Whether a standalone SAP IQ instance will suffice for your business requirements.
If you need a standalone SAP IQ instance on a DR site, you can use Azure Site Recovery
to replicate a primary SAP IQ virtual machine in the secondary region. It replicates the
servers and all the local attached managed disks to the secondary region, but it won't
replicate an Azure shared disk or a network drive like Azure NetApp Files.
To copy data from an Azure shared disk or a network drive, you can use any file-based
copy tool to replicate data between Azure regions. For more information on how to
copy an Azure NetApp Files volume to another region, see FAQs about Azure NetApp
Files.
Next steps
Set up disaster recovery for a multi-tier SAP app deployment
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Set up Pacemaker on Red Hat Enterprise
Linux in Azure
Article • 10/12/2023
This article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise
Linux (RHEL). The instructions cover RHEL 7, RHEL 8, and RHEL 9.
Prerequisites
Read the following SAP Notes and papers first:
Cluster installation
Note
Red Hat doesn't support a software-emulated watchdog. Red Hat doesn't support
SBD on cloud platforms. For more information, see Support Policies for RHEL
High-Availability Clusters - sbd and fence_sbd .
The only supported fencing mechanism for Pacemaker RHEL clusters on Azure is an
Azure fence agent.
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9
are marked in the document.
1. [A] Register. This step is optional. If you're using RHEL SAP HA-enabled images,
this step isn't required.
Bash
2. [A] Enable RHEL for SAP repos. This step is optional. If you're using RHEL SAP HA-
enabled images, this step isn't required.
Bash
Bash
sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
Important
We recommend the following versions of the Azure fence agent (or later) for
customers to benefit from a faster failover time, if a resource stop fails or the
cluster nodes can't communicate with each other anymore:
RHEL 7.7 or higher use the latest available version of fence-agents package.
Important
We recommend the following versions of the Azure fence agent (or later) for
customers who want to use managed identities for Azure resources instead of
service principal names for the fence agent:
Important
fence-agents-4.10.0-20.el9_0.7
fence-agents-common-4.10.0-20.el9_0.6
ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm
Check the version of the Azure fence agent. If necessary, update it to the minimum
required version or later.
Bash
Important
If you need to update the Azure fence agent, and if you're using a custom
role, make sure to update the custom role to include the action powerOff. For
more information, see Create a custom role for the fence agent.
4. If you're deploying on RHEL 9, also install the resource agents for cloud
deployment.
Bash
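A sketch of that installation step could look like the following, assuming the cloud resource agents ship in the resource-agents-cloud package on RHEL 9:

```shell
# RHEL 9 only: the cloud resource agents are packaged separately.
sudo yum install -y resource-agents-cloud
```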
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.
Important
If you're using hostnames in the cluster configuration, it's vital to have reliable
hostname resolution. The cluster communication fails if the names aren't
available, which can lead to cluster failover delays.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
text
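As an illustration (the IP addresses are placeholders; the hostnames match the cluster node names used later in this article), appending the entries could look like:

```shell
# Placeholder IP addresses - replace with the addresses of your cluster nodes.
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.6 prod-cl1-0
10.0.0.7 prod-cl1-1
EOF
```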
Bash
Add the following firewall rules to all cluster communication between the cluster
nodes.
Bash
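One common approach (an assumption here, since the exact rules depend on your environment) is to allow firewalld's predefined high-availability service, which covers the Pacemaker and corosync ports:

```shell
# Allow cluster communication via firewalld's predefined service,
# both persistently and in the running configuration.
sudo firewall-cmd --add-service=high-availability --permanent
sudo firewall-cmd --add-service=high-availability
```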
Run the following commands to enable the Pacemaker service and start it.
Bash
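A minimal sketch of this step, assuming the service in question is pcsd (the pcs configuration daemon that the pcs commands below rely on):

```shell
# Start the pcs daemon now and enable it across reboots, on all nodes.
sudo systemctl start pcsd.service
sudo systemctl enable pcsd.service
```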
Run the following commands to authenticate the nodes and create the cluster. Set
the token to 30000 to allow memory-preserving maintenance. For more
information, see this article for Linux. If you're building a cluster on RHEL 7.x, use
the following commands:
Bash
sudo pcs cluster auth prod-cl1-0 prod-cl1-1 -u hacluster
sudo pcs cluster setup --name nw1-azr prod-cl1-0 prod-cl1-1 --token
30000
sudo pcs cluster start --all
If you're building a cluster on RHEL 8.x/RHEL 9.x, use the following commands:
Bash
Bash
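On RHEL 8.x/9.x, pcs uses a newer syntax; a sketch of the equivalent commands (same node and cluster names as in the RHEL 7 example, with the token again set to 30000) could look like:

```shell
# pcs 0.10+ (RHEL 8/9): 'host auth' replaces 'cluster auth', and totem
# options are passed as 'totem token=...' to 'pcs cluster setup'.
sudo pcs host auth prod-cl1-0 prod-cl1-1 -u hacluster
sudo pcs cluster setup nw1-azr prod-cl1-0 prod-cl1-1 totem token=30000
sudo pcs cluster start --all
```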
# Run the following command until the status of both nodes is online
sudo pcs status
Bash
Tip
If you're building a multinode cluster, that is, a cluster with more than two
nodes, don't set the votes to 2.
Bash
Managed identity
Use the following content for the input file. You need to adapt the content to your
subscriptions; that is, replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and yyyyyyyy-
yyyy-yyyy-yyyy-yyyyyyyyyyyy with the IDs of your subscriptions. If you only have one
subscription, keep a single entry under assignableScopes.
JSON
{
"Name": "Linux Fence Agent Role",
"description": "Allows to power-off and start virtual machines",
"assignableScopes": [
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
],
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"notActions": [],
"dataActions": [],
"notDataActions": []
}
Managed identity
Assign the custom role Linux Fence Agent Role that was created in the last section
to each managed identity of the cluster VMs. Each VM system-assigned managed
identity needs the role assigned for every cluster VM's resource. For more
information, see Assign a managed identity access to a resource by using the Azure
portal. Verify that each VM's managed identity role assignment contains all the
cluster VMs.
) Important
Bash
sudo pcs property set stonith-timeout=900
7 Note
The option pcmk_host_map is only required in the command if the RHEL hostnames
and the Azure VM names are not identical. Specify the mapping in the format
hostname:vm-name. For more information, see What format should I use to specify
node mappings to fencing devices in pcmk_host_map? .
Managed identity
For RHEL 7.x, use the following command to configure the fence device:
Bash
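A hedged sketch of the stonith resource for the managed-identity case (msi=true); the resource group and subscription ID placeholders must be replaced, and the exact fence_azure_arm options should be verified against your fence-agents version:

```shell
sudo pcs stonith create rsc_st_azure fence_azure_arm \
  msi=true \
  resourceGroup="<resource-group>" \
  subscriptionId="<subscription-id>" \
  pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
  power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 \
  pcmk_monitor_retries=4 pcmk_action_limit=3 \
  op monitor interval=3600
```

The pcmk_host_map entries are only needed if the RHEL hostnames differ from the Azure VM names.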
For RHEL 8.x/9.x, use the following command to configure the fence device:
Bash
If you're using a fencing device based on service principal configuration, read Change
from SPN to MSI for Pacemaker clusters by using Azure fencing and learn how to
convert to managed identity configuration.
Tip
To avoid fence races within a two-node pacemaker cluster, you can configure
the priority-fencing-delay cluster property. This property introduces
additional delay in fencing a node that has higher total resource priority when
a split-brain scenario occurs. For more information, see Can Pacemaker fence
the cluster node with the fewest running resources? .
The property priority-fencing-delay is applicable for Pacemaker version
2.0.4-6.el8 or higher and on a two-node cluster. If you configure the
priority-fencing-delay cluster property, you don't need to set the pcmk_delay_max
property.
The monitoring and fencing operations are deserialized. As a result, if there's a longer
running monitoring operation and simultaneous fencing event, there's no delay to the
cluster failover because the monitoring operation is already running.
Tip
The Azure fence agent requires outbound connectivity to public endpoints. For
more information along with possible solutions, see Public endpoint connectivity
for VMs using standard ILB.
Configure Pacemaker for Azure scheduled
events
Azure offers scheduled events. Scheduled events are sent via the metadata service and
allow time for the application to prepare for such events.
The Pacemaker resource agent azure-events-az monitors for scheduled Azure events. If
events are detected and the resource agent determines that another cluster node is
available, it sets a cluster health attribute.
When the cluster health attribute is set for a node, the location constraint triggers and
all resources with names that don't start with health- are migrated away from the node
with the scheduled event. After the affected cluster node is free of running cluster
resources, the scheduled event is acknowledged and can execute its action, such as a
restart.
1. [A] Make sure that the package for the azure-events-az agent is already installed
and up to date.
Bash
Bash
Bash
) Important
Don't define any other resources in the cluster starting with health- besides
the resources described in the next steps.
4. [1] Set the initial value of the cluster attributes. Run for each cluster node and for
scale-out environments including majority maker VM.
Bash
5. [1] Configure the resources in Pacemaker. Make sure the resources start with
health-azure .
Bash
Bash
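A sketch of the azure-events-az resource and its clone, with names starting with health- as required; the operation timeouts are illustrative and should be checked against your resource-agents documentation:

```shell
sudo pcs resource create health-azure-events \
  ocf:heartbeat:azure-events-az \
  op monitor interval=10s timeout=240s \
  op start timeout=10s start-delay=90s
sudo pcs resource clone health-azure-events allow-unhealthy-nodes=true failure-timeout=120s
```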
7. Clear any errors during enablement and verify that the health-azure-events
resources have started successfully on all cluster nodes.
Bash
First-time query execution for scheduled events can take up to two minutes.
Pacemaker testing with scheduled events can use reboot or redeploy actions for
the cluster VMs. For more information, see Scheduled events.
Optional fencing configuration
Tip
This section is only applicable if you want to configure the special fencing device
fence_kdump .
If you need to collect diagnostic information within the VM, it might be useful to
configure another fencing device based on the fence agent fence_kdump . The
fence_kdump agent can detect that a node entered kdump crash recovery and can allow
the crash recovery service to complete before other fencing methods are invoked. Note
that fence_kdump isn't a replacement for traditional fence mechanisms, like the Azure
fence agent, when you're using Azure VMs.
) Important
If a crash dump is successfully detected, the fencing is delayed until the crash
recovery service completes. If the failed node is unreachable or doesn't
respond, the fencing is delayed by a time determined by the configured number of
iterations and the fence_kdump timeout. For more information, see How do I
configure fence_kdump in a Red Hat Pacemaker cluster? .
The following Red Hat KB articles contain important information about configuring
fence_kdump fencing:
Bash
Bash
Bash
Bash
5. [A] Allow the required ports for fence_kdump through the firewall.
Bash
firewall-cmd --add-port=7410/udp
firewall-cmd --add-port=7410/udp --permanent
6. [A] Ensure that the initramfs image file contains the fence_kdump and hosts files.
For more information, see How do I configure fence_kdump in a Red Hat
Pacemaker cluster? .
Bash
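A sketch of how to verify the image, assuming the default kdump initramfs naming on RHEL:

```shell
# List the kdump initramfs contents and confirm fence_kdump and the hosts file are included
lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts"
```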
Bash
vi /etc/kdump.conf
# On node prod-cl1-0 make sure the following line is added
fence_kdump_nodes prod-cl1-1
# On node prod-cl1-1 make sure the following line is added
fence_kdump_nodes prod-cl1-0
) Important
If the cluster is already in productive use, plan the test accordingly because
crashing a node has an impact on the application.
Bash
Next steps
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High Availability of SAP HANA on Azure Virtual Machines.
High availability of SAP HANA on Azure
VMs on Red Hat Enterprise Linux
Article • 04/08/2024
For on-premises development, you can use either HANA System Replication or shared
storage to establish high availability (HA) for SAP HANA. On Azure Virtual Machines,
HANA System Replication is currently the only supported HA function.
SAP HANA Replication consists of one primary node and at least one secondary node.
Changes to the data on the primary node are replicated to the secondary node
synchronously or asynchronously.
This article describes how to deploy and configure virtual machines (VMs), install the
cluster framework, and install and configure SAP HANA System Replication.
In the example configurations and installation commands, instance number 03 and
HANA System ID HN1 are used.
Prerequisites
Read the following SAP Notes and papers first:
Overview
To achieve HA, SAP HANA is installed on two VMs. The data is replicated by using HANA
System Replication.
The SAP HANA System Replication setup uses a dedicated virtual hostname and virtual
IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
presented configuration shows a load balancer with:
Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.
) Important
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.
Azure portal
Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:
7 Note
For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .
) Important
7 Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
We recommend that you use LVM for volumes that store data and log files. The
following example assumes that the VMs have four data disks attached that are
used to create two volumes.
Bash
ls /dev/disk/azure/scsi1/lun*
Example output:
Output
/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2 /dev/disk/azure/scsi1/lun3
Create physical volumes for all the disks that you want to use:
Bash
Bash
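A sketch of the physical volume and volume group creation, assuming the four LUNs listed above and hypothetical volume group names:

```shell
# Initialize each attached data disk as an LVM physical volume
sudo pvcreate /dev/disk/azure/scsi1/lun0
sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
sudo pvcreate /dev/disk/azure/scsi1/lun3

# Group two disks for data, one for log, and one for /hana/shared
sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3
```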
Create the logical volumes. A linear volume is created when you use lvcreate
without the -i switch. We suggest that you create a striped volume for better I/O
performance. Align the stripe sizes to the values documented in SAP HANA VM
storage configurations. The -i argument should be the number of the underlying
physical volumes, and the -I argument is the stripe size.
In this document, two physical volumes are used for the data volume, so the -i
switch argument is set to 2. The stripe size for the data volume is 256KiB. One
physical volume is used for the log volume, so no -i or -I switches are explicitly
used for the log volume commands.
) Important
Use the -i switch and set it to the number of the underlying physical volume
when you use more than one physical volume for each data, log, or shared
volumes. Use the -I switch to specify the stripe size when you're creating a
striped volume. See SAP HANA VM storage configurations for recommended
storage configurations, including stripe sizes and number of disks. The
following layout examples don't necessarily meet the performance guidelines
for a particular system size. They're for illustration only.
Bash
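Following the guidance above, a sketch with -i 2 and a 256 KiB stripe for the two-disk data volume, linear volumes for log and shared, and XFS file systems; the volume group names vg_hana_data_HN1, vg_hana_log_HN1, and vg_hana_shared_HN1 are hypothetical:

```shell
# Striped data volume across two physical volumes, 256 KiB stripe size
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
# Linear volumes: single physical volume each, so no -i or -I switches
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_HN1

sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared
```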
Don't mount the directories by issuing mount commands. Instead, enter the
configurations into the fstab and issue a final mount -a to validate the syntax.
Start by creating the mount directories for each volume:
Bash
Next, create fstab entries for the three logical volumes by inserting the following
lines in the /etc/fstab file:
Bash
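Assuming hypothetical logical volume names vg_hana_data_HN1/hana_data, vg_hana_log_HN1/hana_log, and vg_hana_shared_HN1/hana_shared, the fstab entries would look like:

```text
/dev/vg_hana_data_HN1/hana_data /hana/data xfs defaults,nofail 0 2
/dev/vg_hana_log_HN1/hana_log /hana/log xfs defaults,nofail 0 2
/dev/vg_hana_shared_HN1/hana_shared /hana/shared xfs defaults,nofail 0 2
```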
sudo mount -a
You can either use a DNS server or modify the /etc/hosts file on all nodes by
creating entries for all nodes like this in /etc/hosts :
To install SAP HANA System Replication, see Automating SAP HANA Scale-Up
System Replication using the RHEL HA Add-On .
Run the hdblcm program from the HANA DVD. Enter the following values at the
prompt:
a. Choose installation: Enter 1.
b. Select additional components for installation: Enter 1.
c. Enter Installation Path [/hana/shared]: Select Enter.
d. Enter Local Host Name [..]: Select Enter.
e. Do you want to add additional hosts to the system? (y/n) [n]: Select Enter.
f. Enter SAP HANA System ID: Enter the SID of HANA, for example: HN1.
g. Enter Instance Number [00]: Enter the HANA Instance number. Enter 03 if you
used the Azure template or followed the manual deployment section of this
article.
h. Select Database Mode / Enter Index [1]: Select Enter.
i. Select System Usage / Enter Index [4]: Select the system usage value.
j. Enter Location of Data Volumes [/hana/data]: Select Enter.
k. Enter Location of Log Volumes [/hana/log]: Select Enter.
l. Restrict maximum memory allocation? [n]: Select Enter.
m. Enter Certificate Host Name For Host '...' [...]: Select Enter.
n. Enter SAP Host Agent User (sapadm) Password: Enter the host agent user
password.
o. Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user
password again to confirm.
p. Enter System Administrator (hdbadm) Password: Enter the system
administrator password.
q. Confirm System Administrator (hdbadm) Password: Enter the system
administrator password again to confirm.
r. Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select
Enter.
s. Enter System Administrator Login Shell [/bin/sh]: Select Enter.
t. Enter System Administrator User ID [1001]: Select Enter.
u. Enter ID of User Group (sapsys) [79]: Select Enter.
v. Enter Database User (SYSTEM) Password: Enter the database user password.
w. Confirm Database User (SYSTEM) Password: Enter the database user password
again to confirm.
x. Restart system after machine reboot? [n]: Select Enter.
y. Do you want to continue? (y/n): Validate the summary. Enter y to continue.
Download the latest SAP Host Agent archive from the SAP Software Center and
run the following command to upgrade the agent. Replace the path to the archive
to point to the file that you downloaded:
Bash
Create the firewall rule for the Azure Load Balancer probe port.
Bash
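A sketch, assuming the conventional probe port 625&lt;instance number&gt;, which is 62503 for instance number 03:

```shell
sudo firewall-cmd --zone=public --add-port=62503/tcp
sudo firewall-cmd --zone=public --add-port=62503/tcp --permanent
```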
Create firewall rules to allow HANA System Replication and client traffic. The
required ports are listed on TCP/IP Ports of All SAP Products . The following
commands are just an example to allow HANA 2.0 System Replication and client
traffic to database SYSTEMDB, HN1, and NW1.
Bash
If you're using SAP HANA 2.0 or MDC, create a tenant database for your SAP
NetWeaver system. Replace NW1 with the SID of your SAP system.
Bash
hdbsql -u SYSTEM -p "[passwd]" -i 03 -d SYSTEMDB 'CREATE DATABASE NW1
SYSTEM USER PASSWORD "<passwd>"'
Bash
Bash
Bash
Register the second node to start the system replication. Run the following
command as <hanasid>adm:
Bash
Bash
Bash
Run the following command as root. Make sure to replace the values for HANA
System ID (for example, HN1), instance number (03), and any usernames, with the
values of your SAP HANA installation:
Bash
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'
Bash
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd
Bash
PATH="$PATH:/usr/sap/HN1/HDB03/exe"
hdbsql -d SYSTEMDB -u system -i 03 "BACKUP DATA USING FILE
('initialbackup')"
Bash
hdbsql -d HN1 -u system -i 03 "BACKUP DATA USING FILE
('initialbackup')"
Bash
su - hn1adm
hdbnsutil -sr_enable --name=SITE1
Bash
HDB stop
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --
replicationMode=sync --name=SITE2
HDB start
) Important
With the systemd-based SAP Startup Framework, SAP HANA instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As outlined in SAP Note 3189534 , for any new installation
of SAP HANA SPS07 revision 70 or later, or update of a HANA system to HANA
2.0 SPS07 revision 70 or later, the SAP Startup Framework is automatically
registered with systemd.
1. [A] Install the SAP HANA resource agents on all nodes. Make sure to enable a
repository that contains the package. If you're using an RHEL 8.x HA-enabled
image, you don't need to enable additional repositories.
Bash
2. [A] Install the HANA system replication hook . The hook needs to be installed on
both HANA DB nodes.
Tip
Bash
mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks
Bash
sapcontrol -nr 03 -function StopSystem
Output
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
3. [A] The cluster requires sudoers configuration on each cluster node for <sid>adm.
In this example, that's achieved by creating a new file. Use the visudo command to
edit the 20-saphana drop-in file as root .
Bash
Output
Bash
Bash
cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
Output
For more information on the implementation of the SAP HANA System Replication
hook, see Enable the SAP HA/DR provider hook .
Bash
7 Note
This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.
Bash
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200
AUTOMATED_REGISTER=false \
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 op demote timeout=3600 \
master notify=true clone-max=2 clone-node-max=1 interleave=true
Bash
7 Note
Bash
) Important
Make sure that the cluster status is okay and that all of the resources are started. Which
node the resources are running on isn't important.
7 Note
The timeouts in the preceding configuration are only examples and might need to
be adapted to the specific HANA setup. For instance, you might need to increase
the start timeout, if it takes longer to start the SAP HANA database.
Use the command sudo pcs status to check the state of the cluster resources created:
Output
To support such a setup in a cluster, a second virtual IP address is required, which allows
clients to access the secondary read-enabled SAP HANA database. To ensure that the
secondary replication site can still be accessed after a takeover has occurred, the cluster
needs to move the virtual IP address around with the secondary SAPHana resource.
This section describes the other steps that are required to manage HANA active/read-
enabled system replication in a Red Hat HA cluster with a second virtual IP.
Before you proceed further, make sure that you've fully configured the Red Hat HA
cluster managing an SAP HANA database, as described in preceding segments of the
documentation.
Additional setup in Azure Load Balancer for active/read-
enabled setup
To proceed with more steps on provisioning a second virtual IP, make sure that you've
configured Azure Load Balancer as described in the Deploy Linux VMs manually via
Azure portal section.
1. For a standard load balancer, follow these steps on the same load balancer that
you created in an earlier section.
Open the load balancer, select frontend IP pool, and select Add.
Enter the name of the second front-end IP pool (for example, hana-
secondaryIP).
Set Assignment to Static and enter the IP address (for example, 10.0.0.14).
Select OK.
After the new front-end IP pool is created, note the pool IP address.
Open the load balancer, select health probes, and select Add.
Enter the name of the new health probe (for example, hana-secondaryhp).
Select TCP as the protocol and port 62603. Keep the Interval value set to 5
and the Unhealthy threshold value set to 2.
Select OK.
Open the load balancer, select load balancing rules, and select Add.
Enter the name of the new load balancer rule (for example, hana-
secondarylb).
Select the front-end IP address, the back-end pool, and the health probe that
you created earlier (for example, hana-secondaryIP, hana-backend, and
hana-secondaryhp).
Select HA Ports.
Make sure to enable Floating IP.
Select OK.
Bash
Bash
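A hedged sketch of the additional cluster resources, assuming the front-end IP 10.0.0.14 and probe port 62603 configured above, with resource names matching the ones referenced later in this article:

```shell
# Second virtual IP and the matching Azure load-balancer health-probe responder
sudo pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.0.0.14"
sudo pcs resource create secnc_HN1_03 ocf:heartbeat:azure-lb port=62603
sudo pcs resource group add g_secip_HN1_03 secnc_HN1_03 secvip_HN1_03
```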
Make sure that the cluster status is okay and that all the resources are started. The
second virtual IP runs on the secondary site along with the SAPHana secondary
resource.
Output
In the next section, you can find the typical set of failover tests to run.
Be aware of the second virtual IP behavior while you're testing a HANA cluster
configured with read-enabled secondary:
1. When you migrate the SAPHana_HN1_03 cluster resource to the secondary site
hn1-db-1, the second virtual IP continues to run on the same site hn1-db-1. If
you've set AUTOMATED_REGISTER="true" for the resource and HANA system
replication is registered automatically on hn1-db-0, your second virtual IP also
moves to hn1-db-0.
2. When you test a server crash, the second virtual IP resources (secvip_HN1_03) and the
Azure Load Balancer port resource (secnc_HN1_03) run on the primary server
alongside the primary virtual IP resources. While the secondary
server is down, applications that are connected to the read-enabled HANA
database connect to the primary HANA database. This behavior is expected
because you don't want applications that are connected to the read-enabled
HANA database to be inaccessible while the secondary server is
unavailable.
3. During failover and fallback of the second virtual IP address, the existing
connections on applications that use the second virtual IP to connect to the HANA
database might get interrupted.
The setup maximizes the time that the second virtual IP resource is assigned to a node
where a healthy SAP HANA instance is running.
Bash
Output
You can migrate the SAP HANA master node by running the following command as
root:
Bash
# On RHEL 7.x
pcs resource move SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource move SAPHana_HN1_03-clone --master
The cluster migrates the SAP HANA master node and the group containing the virtual
IP address to hn1-db-1 .
After the migration is done, the sudo pcs status output looks like:
Output
With AUTOMATED_REGISTER="false" , the cluster would not restart the failed HANA
database or register it against the new primary on hn1-db-0 . In this case, configure the
HANA instance as secondary by running these commands, as hn1adm:
Bash
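A sketch of the re-registration, run as hn1adm on hn1-db-0, assuming hn1-db-1 is now the primary; the site names follow the sr_enable and sr_register examples earlier in this article:

```shell
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
```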
The migration creates location constraints that need to be deleted again. Run the
following command as root, or via sudo :
Bash
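A sketch; the resource name depends on the RHEL release, matching the move commands shown earlier:

```shell
# On RHEL 7.x
pcs resource clear SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource clear SAPHana_HN1_03-clone
```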
Monitor the state of the HANA resource by using pcs status . After HANA is started on
hn1-db-0 , the output should look like:
Output
Output
Create a firewall rule to block the communication on one of the nodes.
Bash
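A sketch of the blocking rules, mirroring the removal commands shown later in this test; 10.0.0.5 stands in for the peer node's IP address:

```shell
# Drop all traffic to and from the other cluster node to simulate a network split
iptables -A INPUT -s 10.0.0.5 -j DROP
iptables -A OUTPUT -d 10.0.0.5 -j DROP
```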
When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, cluster nodes try to simultaneously fence each other,
resulting in a fence race. To avoid such a situation, we recommend that you set the
priority-fencing-delay property in cluster configuration (applicable only for pacemaker-
2.0.4-6.el8 or higher).
Bash
# If the iptables rule set on the server gets reset after a reboot, the
# rules will be cleared out. If they have not been reset, remove the
# iptables rule by using the following command.
iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP
This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.
Output
You can test the setup of the Azure fencing agent by disabling the network interface on
the node where SAP HANA is running as Master. For a description on how to simulate a
network failure, see Red Hat Knowledge Base article 79523 .
In this example, we use the net_breaker script as root to block all access to the network:
Bash
The VM should now restart or stop depending on your cluster configuration. If you set
the stonith-action setting to off , the VM is stopped and the resources are migrated to
the running VM.
After you start the VM again, the SAP HANA resource fails to start as secondary if you
set AUTOMATED_REGISTER="false" . In this case, configure the HANA instance as secondary
by running this command as the hn1adm user:
Bash
# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource
needs to be cleaned>
Output
Output
You can test a manual failover by stopping the cluster on the hn1-db-0 node, as root:
Bash
After the failover, you can start the cluster again. If you set AUTOMATED_REGISTER="false" ,
the SAP HANA resource on the hn1-db-0 node fails to start as secondary. In this case,
configure the HANA instance as secondary by running this command as root:
Bash
Bash
Then as root:
Bash
# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource
needs to be cleaned>
Output
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
SAP HANA VM storage configurations
High availability of SAP HANA scale-up
with Azure NetApp Files on RHEL
Article • 01/17/2024
This article describes how to configure SAP HANA System Replication in scale-up
deployment, when the HANA file systems are mounted via NFS, by using Azure NetApp
Files. In the example configurations and installation commands, instance number 03 and
HANA System ID HN1 are used. SAP HANA System Replication consists of one primary
node and at least one secondary node.
When steps in this document are marked with the following prefixes, the meaning is as
follows:
Prerequisites
Read the following SAP Notes and papers first:
Overview
Traditionally in a scale-up environment, all file systems for SAP HANA are mounted from
local storage. Setting up high availability (HA) of SAP HANA System Replication on Red
Hat Enterprise Linux is published in Set up SAP HANA System Replication on RHEL.
To achieve SAP HANA HA of a scale-up system on Azure NetApp Files NFS shares, we
need additional resource configuration in the cluster so that HANA resources can
recover when one node loses access to the NFS shares on Azure NetApp Files. The
cluster manages the NFS mounts, allowing it to monitor the health of the resources. The
dependencies between the file system mounts and the SAP HANA resources are
enforced.
SAP HANA file systems are mounted on NFS shares by using Azure NetApp Files on
each node. File systems /hana/data , /hana/log , and /hana/shared are unique to each
node.
On hanadb1:
10.32.2.4:/hanadb1-data-mnt00001 on /hana/data
10.32.2.4:/hanadb1-log-mnt00001 on /hana/log
10.32.2.4:/hanadb1-shared-mnt00001 on /hana/shared
On hanadb2:
10.32.2.4:/hanadb2-data-mnt00001 on /hana/data
10.32.2.4:/hanadb2-log-mnt00001 on /hana/log
10.32.2.4:/hanadb2-shared-mnt00001 on /hana/shared
7 Note
File systems /hana/shared , /hana/data , and /hana/log aren't shared between the
two nodes. Each cluster node has its own separate file systems.
The SAP HANA System Replication configuration uses a dedicated virtual hostname and
virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
configuration shown here has a load balancer with:
Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files availability by Azure region .
Important considerations
As you're creating your Azure NetApp Files volumes for SAP HANA scale-up systems, be
aware of the important considerations documented in NFS v4.1 volumes on Azure
NetApp Files for SAP HANA.
While you're designing the infrastructure for SAP HANA on Azure with Azure NetApp
Files, be aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for
SAP HANA.
The configuration in this article is presented with simple Azure NetApp Files volumes.
) Important
2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.
The HANA architecture shown in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service Level.
4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
hanadb1-data-mnt00001 and hanadb1-log-mnt00001 are the volume names, and
mnt00001 is the file path for the Azure NetApp Files volumes.
On hanadb1:
On hanadb2:
7 Note
All commands to mount /hana/shared in this article are presented for NFSv4.1
/hana/shared volumes. If you deployed the /hana/shared volumes as NFSv3
volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.
Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.
) Important
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.
Azure portal
Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
7 Note
For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .
) Important
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity, unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Standard
Azure Load Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes and SAP Note 2382421 .
Bash
2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and the
mapping is set to nobody.
Bash
Example output:
Output
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
) Important
Bash
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
Bash
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
5. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
Verify that the flag vers is set to 4.1. Example from hanadb1:
Output
Check nfs4_disable_idmapping .
Bash
Bash
Bash
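A sketch of checking the parameter and persistently disabling NFSv4 ID mapping:

```shell
# Check the current value (requires the nfs module to be loaded)
cat /sys/module/nfs/parameters/nfs4_disable_idmapping

# Persistently disable ID mapping and rebuild the initramfs
echo "options nfs nfs4_disable_idmapping=Y" | sudo tee /etc/modprobe.d/nfs.conf
sudo dracut -f
```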
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows you how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines in the /etc/hosts file. Change the IP address and
hostname to match your environment.
Output
10.32.0.4 hanadb1
10.32.0.5 hanadb2
2. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP Note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp
configuration settings.
Bash
sudo vi /etc/sysctl.d/91-NetApp-HANA.conf
Output
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
3. [A] Create the configuration file /etc/sysctl.d/ms-az.conf with more optimization
settings.
Bash
sudo vi /etc/sysctl.d/ms-az.conf
Output
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
Tip
4. [A] Adjust the sunrpc settings, as recommended in SAP Note 3024346 - Linux
Kernel Settings for NetApp NFS .
Bash
sudo vi /etc/modprobe.d/sunrpc.conf
Output
options sunrpc tcp_max_slot_table_entries=128
Configure the OS as described in the following SAP Notes based on your RHEL
version:
Starting with HANA 2.0 SPS 01, MDC is the default option. When you install the
HANA system, SYSTEMDB and a tenant with the same SID are created together. In
some cases, you don't want the default tenant. If you don't want to create an initial
tenant along with the installation, you can follow SAP Note 2629711 .
Run the hdblcm program from the HANA DVD. Enter the following values at the
prompt:
a. Choose installation: Enter 1 (for install).
b. Select more components for installation: Enter 1.
c. Enter Installation Path [/hana/shared]: Select Enter to accept the default.
d. Enter Local Host Name [..]: Select Enter to accept the default. Do you want to
add additional hosts to the system? (y/n) [n]: n.
e. Enter SAP HANA System ID: Enter HN1.
f. Enter Instance Number [00]: Enter 03.
g. Select Database Mode / Enter Index [1]: Select Enter to accept the default.
h. Select System Usage / Enter Index [4]: Enter 4 (for custom).
i. Enter Location of Data Volumes [/hana/data]: Select Enter to accept the default.
j. Enter Location of Log Volumes [/hana/log]: Select Enter to accept the default.
k. Restrict maximum memory allocation? [n]: Select Enter to accept the default.
l. Enter Certificate Host Name For Host '...' [...]: Select Enter to accept the default.
m. Enter SAP Host Agent User (sapadm) Password: Enter the host agent user
password.
n. Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user
password again to confirm.
o. Enter System Administrator (hn1adm) Password: Enter the system administrator
password.
p. Confirm System Administrator (hn1adm) Password: Enter the system
administrator password again to confirm.
q. Enter System Administrator Home Directory [/usr/sap/HN1/home]: Select Enter
to accept the default.
r. Enter System Administrator Login Shell [/bin/sh]: Select Enter to accept the
default.
s. Enter System Administrator User ID [1001]: Select Enter to accept the default.
t. Enter ID of User Group (sapsys) [79]: Select Enter to accept the default.
u. Enter Database User (SYSTEM) Password: Enter the database user password.
v. Confirm Database User (SYSTEM) Password: Enter the database user password
again to confirm.
w. Restart system after machine reboot? [n]: Select Enter to accept the default.
x. Do you want to continue? (y/n): Validate the summary. Enter y to continue.
Download the latest SAP Host Agent archive from the SAP Software Center and
run the following command to upgrade the agent. Replace the path to the archive
to point to the file that you downloaded:
Bash
Create the firewall rule for the Azure Load Balancer probe port.
Bash
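As a hedged sketch, the probe port can be derived from the instance number by using the 625&lt;InstanceNumber&gt; convention that these articles use elsewhere (62503 for instance 03); confirm the port against your load balancer health probe configuration. The commands are printed rather than executed here.

```shell
# Sketch: derive the health probe port from the instance number (assuming the
# 625<InstanceNumber> convention, giving 62503 for instance 03) and print the
# firewall-cmd calls instead of executing them.
instance=03
probe_port="625${instance}"
{
  echo "sudo firewall-cmd --zone=public --add-port=${probe_port}/tcp --permanent"
  echo "sudo firewall-cmd --zone=public --add-port=${probe_port}/tcp"
} > fw-cmds.txt
cat fw-cmds.txt
```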
Cluster configuration
This section describes the steps required for a cluster to operate seamlessly when SAP
HANA is installed on NFS shares by using Azure NetApp Files.
) Important
With the systemd-based SAP Startup Framework, SAP HANA instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As outlined in SAP Note 3189534 , for any new
installation of SAP HANA SPS07 revision 70 or above, or update of a HANA system
to HANA 2.0 SPS07 revision 70 or above, the SAP Startup Framework is
automatically registered with systemd.
Bash
2. [1] Create the file system resources for the hanadb1 mounts.
Bash
3. [2] Create the file system resources for the hanadb2 mounts.
Bash
The on-fail=fence attribute is also added to the monitor operation. With this
option, if the monitor operation fails on a node, that node is immediately fenced.
Without this option, the default behavior is to stop all resources that depend on
the failed resource, restart the failed resource, and then start all the resources that
depend on the failed resource.
Not only can this behavior take a long time when an SAPHana resource depends
on the failed resource, but it also can fail altogether. The SAPHana resource can't
stop successfully if the NFS server holding the HANA executables is inaccessible.
The suggested timeout values allow the cluster resources to withstand
protocol-specific pauses related to NFSv4.1 lease renewals. For more information,
see NFS in NetApp Best practice . The timeouts in the preceding configuration
might need to be adapted to the specific SAP setup.
For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.
Configure location constraints to ensure that the resources that manage hanadb1
unique mounts can never run on hanadb2, and vice versa.
Bash
The resource-discovery=never option is set because the unique mounts for each
node share the same mount point. For example, hana_data1 uses mount point
/hana/data , and hana_data2 also uses mount point /hana/data . Sharing the same
mount point can cause a false positive for a probe operation, when resource state
is checked at cluster startup, and it can in turn cause unnecessary recovery
behavior. To avoid this scenario, set resource-discovery=never .
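A sketch of what such constraints can look like with pcs follows. The resource names (hana_data1, hana_log1, hana_shared1) are illustrative, and the commands are printed rather than run; verify the exact rule syntax against your pcs version.

```shell
# Sketch: print pcs location constraints that keep node 1's unique mounts
# off hanadb2, with resource-discovery=never. Resource names are illustrative.
: > pcs-constraints.txt
for rsc in hana_data1 hana_log1 hana_shared1; do
  echo "sudo pcs constraint location ${rsc} rule resource-discovery=never score=-INFINITY #uname eq hanadb2" >> pcs-constraints.txt
done
cat pcs-constraints.txt
```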
Configure attribute resources. These attributes are set to true if all of a node's NFS
mounts ( /hana/data , /hana/log , and /hana/shared ) are mounted. Otherwise, they're
set to false.
Bash
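A sketch of such attribute resources using the ocf:pacemaker:attribute resource agent follows. The resource and attribute names are illustrative and should match the names used in your constraints; the commands are printed rather than run.

```shell
# Sketch: print the commands that would create the attribute resources with
# the ocf:pacemaker:attribute agent. Names are illustrative.
{
  echo "sudo pcs resource create hana_nfs1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs1_active"
  echo "sudo pcs resource create hana_nfs2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs2_active"
} > attr-cmds.txt
cat attr-cmds.txt
```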
Configure ordering constraints so that a node's attribute resources start only after
all of the node's NFS mounts are mounted.
Bash
Tip
This configuration intentionally omits ordering dependencies among the file
systems. All file systems must start before hana_nfs1_active , but they don't
need to start in any order relative to each other. For more information, see How
do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster
when the HANA file systems are on NFS shares
2. [1] Configure constraints between the SAP HANA resources and the NFS mounts.
Location rule constraints are set so that the SAP HANA resources can run on a
node only if all of the node's NFS mounts are mounted.
Bash
Bash
On RHEL 8.x/9.x:
Bash
Bash
7 Note
This article contains references to a term that Microsoft no longer uses. When
the term is removed from the software, we'll remove it from this article.
Bash
Example output:
Output
To ensure that the secondary replication site can still be accessed after a takeover has
occurred, the cluster needs to move the virtual IP address around with the secondary of
the SAPHana resource.
Before you proceed further, make sure that you've fully configured the Red Hat High
Availability cluster that manages the SAP HANA database, as described in the
preceding sections of this documentation.
Bash
2. Verify the cluster configuration for a failure scenario when a node loses access to
the NFS share ( /hana/shared ).
It's difficult to simulate a failure where one of the servers loses access to the NFS
share. As a test, you can remount the file system as read-only. This approach
validates that the cluster can fail over, if access to /hana/shared is lost on the
active node.
When the file system is read-only, the monitor operation, which performs
read/write operations on the file system, fails. HANA isn't able to write
anything to the file system and performs a resource failover. The same result is
expected when your HANA node loses access to the NFS shares.
Bash
Example output:
Output
You can place /hana/shared in read-only mode on the active cluster node by using
this command:
Bash
hanadb1 will either reboot or power off based on the action set on stonith ( pcs
property show stonith-action ). Once the server ( hanadb1 ) is down, the HANA
resource moves to hanadb2 . You can check the status of the cluster from hanadb2 .
Bash
Example output:
Output
We recommend that you thoroughly test the SAP HANA cluster configuration by
also performing the tests described in Set up SAP HANA System Replication on
RHEL.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Deploy a SAP HANA scale-out system
with standby node on Azure VMs by
using Azure NetApp Files on Red Hat
Enterprise Linux
Article • 07/11/2023
This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with a standby node on Azure Red Hat Enterprise Linux virtual machines
(VMs), by using Azure NetApp Files for the shared storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1. The examples are based on HANA 2.0 SP4 and Red
Hat Enterprise Linux for SAP 7.6.
7 Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we’ll remove them from this article.
Before you begin, refer to the following SAP notes and papers:
Overview
One method for achieving HANA high availability is by configuring host auto failover. To
configure host auto failover, you add one or more virtual machines to the HANA system
and configure them as standby nodes. When an active node fails, a standby node
automatically takes over. In the presented configuration with Azure virtual machines,
you achieve auto failover by using NFS on Azure NetApp Files.
7 Note
The standby node needs access to all database volumes. The HANA volumes must
be mounted as NFSv4 volumes. The improved file lease-based locking mechanism
in the NFSv4 protocol is used for I/O fencing.
) Important
To build the supported configuration, you must deploy the HANA data and log
volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The
HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three
subnets are represented within one Azure virtual network:
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
client 10.9.1.0/26
storage 10.9.3.0/26
hana 10.9.2.0/26
Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files Availability by Azure Region .
Important considerations
As you're creating your Azure NetApp Files volumes for the SAP HANA scale-out with
standby nodes scenario, be aware of the important considerations documented in NFS
v4.1 volumes on Azure NetApp Files for SAP HANA.
While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.
The configuration in this article is presented with simple Azure NetApp Files Volumes.
) Important
The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service level.
4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
HN1-data-mnt00001, HN1-log-mnt00001, and so on, are the volume names and
nfs://10.9.0.4/HN1-data-mnt00001, nfs://10.9.0.4/HN1-log-mnt00001, and so on,
are the file paths for the Azure NetApp Files volumes.
In this example, we used a separate Azure NetApp Files volume for each HANA
data and log volume. For a more cost-optimized configuration on smaller or non-
productive systems, it's possible to place all data mounts on a single volume and
all logs mounts on a different single volume.
1. Create the Azure virtual network subnets in your Azure virtual network.
Each virtual machine has three network interfaces, which correspond to the three
Azure virtual network subnets ( client , storage and hana ).
For more information, see Create a Linux virtual machine in Azure with multiple
network interface cards.
) Important
For SAP HANA workloads, low latency is critical. To achieve low latency, work with
your Microsoft representative to ensure that the virtual machines and the Azure
NetApp Files volumes are deployed in close proximity. When you're onboarding a
new SAP HANA system that uses Azure NetApp Files, submit the necessary
information.
The next instructions assume that you've already created the resource group, the Azure
virtual network, and the three Azure virtual network subnets: client , storage and hana .
When you deploy the VMs, select the client subnet, so that the client network interface
is the primary interface on the VMs. You will also need to configure an explicit route to
the Azure NetApp Files delegated subnet via the storage subnet gateway.
) Important
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM
types you're using. For a list of SAP HANA certified VM types and OS releases for
those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS
releases for that type.
1. Create an availability set for SAP HANA. Make sure to set the max update domain.
a. Use a Red Hat Enterprise Linux image in the Azure gallery that's supported for
SAP HANA. We used a RHEL-SAP-HA 7.6 image in this example.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface names are automatically
generated. In these instructions, for simplicity, we refer to the automatically
generated network interfaces, which are attached to the client Azure virtual
network subnet, as hanadb1-client, hanadb2-client, and hanadb3-client.
3. Create three network interfaces, one for each virtual machine, for the storage
virtual network subnet (in this example, hanadb1-storage, hanadb2-storage, and
hanadb3-storage).
4. Create three network interfaces, one for each virtual machine, for the hana virtual
network subnet (in this example, hanadb1-hana, hanadb2-hana, and hanadb3-
hana).
5. Attach the newly created virtual network interfaces to the corresponding virtual
machines by doing the following steps:
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces for
the storage and hana subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hanadb2 and hanadb3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the
storage and hana subnets by doing the following steps:
a. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select it.
# Storage
10.9.3.4 hanadb1-storage
10.9.3.5 hanadb2-storage
10.9.3.6 hanadb3-storage
# Client
10.9.1.5 hanadb1
10.9.1.6 hanadb2
10.9.1.7 hanadb3
# Hana
10.9.2.4 hanadb1-hana
10.9.2.5 hanadb2-hana
10.9.2.6 hanadb3-hana
2. [A] Add a network route, so that the communication to the Azure NetApp Files
goes via the storage network interface.
In this example, we use NetworkManager to configure the additional network route.
The following instructions assume that the storage network interface is eth1 .
First, determine the connection name for the device eth1 . In this example, the
connection name for device eth1 is Wired connection 1 .
# Execute as root
nmcli connection
# Result
#NAME UUID TYPE
DEVICE
#System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet
eth0
#Wired connection 1 4b0789d1-6146-32eb-83a1-94d61f8d60a7 ethernet
eth1
Then configure additional route to the Azure NetApp Files delegated network via
eth1 .
# Add the following route
# ANFDelegatedSubnet/cidr via StorageSubnetGW dev
StorageNetworkInterfaceDevice
nmcli connection modify "Wired connection 1" +ipv4.routes "10.9.0.0/26
10.9.3.1"
4. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.
vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
vi /etc/sysctl.d/ms-az.conf
# Add the following entries in the configuration file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
Tip
5. [A] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux
Kernel Settings for NetApp NFS .
vi /etc/modprobe.d/sunrpc.conf
# Insert the following line
options sunrpc tcp_max_slot_table_entries=128
7 Note
If you're installing HANA 2.0 SP04, you must install the compat-sap-c++-7
package, as described in SAP note 2593824 , before you can install SAP HANA.
mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1
3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and the
mapping is set to nobody.
) Important
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >>
/etc/modprobe.d/nfs.conf
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.9.0.4:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
10.9.0.4:/HN1-shared/shared /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount all volumes
sudo mount -a
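Before running `mount -a`, the new entries can be sanity-checked. The following sketch validates a local copy of the entries (a stand-in for /etc/fstab) and counts NFS entries that don't request NFSv4.1, which should be zero.

```shell
# Sketch: verify that every NFS entry in an fstab-style file requests
# NFSv4.1. fstab.sample stands in for /etc/fstab.
cat > fstab.sample <<'EOF'
10.9.0.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.9.0.4:/HN1-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
EOF
bad=$(awk '$3 == "nfs" && $4 !~ /nfsvers=4\.1/' fstab.sample | wc -l)
echo "entries without nfsvers=4.1: $bad"
```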
For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount the volume
sudo mount -a
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount the volume
sudo mount -a
sudo vi /etc/fstab
# Add the following entries
10.9.0.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount the volume
sudo mount -a
9. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from hanadb1
/hana/data/HN1/mnt00001 from 10.9.0.4:/HN1-data-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
/hana/log/HN1/mnt00002 from 10.9.0.4:/HN1-log-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
/hana/data/HN1/mnt00002 from 10.9.0.4:/HN1-data-mnt00002
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
/hana/log/HN1/mnt00001 from 10.9.0.4:/HN1-log-mnt00001
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
/usr/sap/HN1 from 10.9.0.4:/HN1-shared/usr-sap-hanadb1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
/hana/shared from 10.9.0.4:/HN1-shared/shared
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.9.3.4,local_lock=none,addr=1
0.9.0.4
Installation
In this example for deploying SAP HANA in a scale-out configuration with a standby
node on Azure, we've used HANA 2.0 SP4.
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3, without being
prompted for a password.
ssh root@hanadb2
ssh root@hanadb3
3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more
information, see SAP Note 2593824 .
4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1
5. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA
installation. You can re-enable it, after the HANA installation is done.
# Execute as root
systemctl stop firewalld
systemctl disable firewalld
HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation
and Update guide . In this example, we install SAP HANA scale-out with master,
one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use
the internal_network parameter and pass the address space of the subnet that's
used for the internal HANA inter-node communication.
./hdblcm --internal_network=10.9.2.0/26
Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the hana subnet, and listeninterface should be set to
.internal . Verify the internal_hostname_resolution section. It should have the IP
addresses for the HANA virtual machines that belong to the hana subnet.
3. [1] Add host mapping to ensure that the client IP addresses are used for client
communication. Add the section public_hostname_resolution , and add the
corresponding IP addresses from the client subnet.
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[public_hostname_resolution]
map_hanadb1 = 10.9.1.5
map_hanadb2 = 10.9.1.6
map_hanadb3 = 10.9.1.7
5. [1] Verify that the client interface will be using the IP addresses from the client
subnet for communication.
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d
SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result
"hanadb3","net_publicname","10.9.1.7"
"hanadb2","net_publicname","10.9.1.6"
"hanadb1","net_publicname","10.9.1.5"
For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .
Stop HANA
) Important
Create firewall rules to allow HANA inter-node communication and client
traffic. The required ports are listed on TCP/IP Ports of All SAP
Products . The following commands are just an example. In this
scenario, system number 03 is used.
# Execute as root
sudo firewall-cmd --zone=public --add-port=
{30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,
1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp --
permanent
sudo firewall-cmd --zone=public --add-port=
{30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,
1129,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp
Start HANA
7. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the
following SAP HANA parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see I/O stack configuration for SAP HANA .
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini .
For more information, see SAP Note 1999930 .
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set
during the installation, as described in SAP Note 2267798 .
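As a sketch, for HANA 2.0 systems these parameters can be placed in the fileio section of global.ini. The section name is taken from SAP Note 1999930; verify it for your HANA revision.

```ini
# Illustrative global.ini fragment (section name per SAP Note 1999930;
# verify for your HANA revision)
[fileio]
max_parallel_io_requests = 128
async_read_submit = on
async_write_submit_active = on
async_write_submit_blocks = all
```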
8. The storage that's used by Azure NetApp Files has a file size limitation of 16
terabytes (TB). SAP HANA isn't implicitly aware of this storage limitation, and it
won't automatically create a new data file when the 16-TB file size limit is
reached. If SAP HANA attempts to grow a file beyond 16 TB, the attempt results
in errors and, eventually, in an index server crash.
) Important
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of
the storage subsystem, set the following parameters in global.ini .
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005 . Be aware of SAP Note 2631285 .
a. Before you simulate the node crash, run the following commands as hn1adm to
capture the status of the environment:
b. To simulate a node crash, run the following command as root on the worker
node, which is hanadb2 in this case:
c. Monitor the system for failover completion. When the failover has been
completed, capture the status, which should look like the following:
) Important
When a node experiences kernel panic, avoid delays with SAP HANA failover
by setting kernel.panic to 20 seconds on all HANA virtual machines. The
configuration is done in /etc/sysctl . Reboot the virtual machines to activate
the change. If this change isn't performed, failover can take 10 or more
minutes when a node is experiencing kernel panic.
a. Prior to the test, check the status of the environment by running the following
commands as hn1adm:
#Landscape status
python
/usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py
| Host | Host | Host | Failover | Remove | Storage | Storage
| Failover | Failover | NameServer | NameServer | IndexServer |
IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual
| Config | Actual | Config | Actual | Config | Actual
| Config | Actual | Config | Actual |
| | | | | | Partition |
Partition | Group | Group | Role | Role | Role
| Role | Roles | Roles | Groups | Groups |
| ------- | ------ | ------ | -------- | ------ | --------- | --------
- | -------- | -------- | ---------- | ---------- | ----------- | -----
------ | ------- | ------- | ------- | ------- |
| hanadb1 | yes | ok | | | 1 |
1 | default | default | master 1 | master | worker |
master | worker | worker | default | default |
| hanadb2 | yes | ok | | | 2 |
2 | default | default | master 2 | slave | worker | slave
| worker | worker | default | default |
| hanadb3 | yes | ignore | | | 0 |
0 | default | default | master 3 | slave | standby |
standby | standby | standby | default | - |
# Check the instance status
sapcontrol -nr 03 -function GetSystemInstanceList
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN
hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN
b. Run the following commands as hn1adm on the active master node, which is
hanadb1 in this case:
The standby node hanadb3 will take over as master node. Here is the resource
state after the failover test is completed:
c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine,
where the name server was killed). The hanadb1 node will rejoin the environment
and will keep its standby role.
After SAP HANA has started on hanadb1, expect the following status:
d. Again, kill the name server on the currently active master node (that is, on node
hanadb3).
Node hanadb1 will resume the role of master node. After the failover test has been
completed, the status will look like this:
e. Start SAP HANA on hanadb3, which will be ready to serve as a standby node.
After SAP HANA has started on hanadb3, the status looks like the following:
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability of SAP HANA scale-out
system on Red Hat Enterprise Linux
Article • 01/17/2024
This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration. Specifically, the configuration uses HANA system replication (HSR) and
Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file
systems in the presented architecture are NFS mounted and are provided by Azure
NetApp Files or NFS share on Azure Files.
In the example configurations and installation commands, the HANA instance is 03 and
the HANA system ID is HN1 .
Prerequisites
Before you proceed with the topics in this article, consult the following SAP notes
and resources:
Overview
To achieve HANA high availability for HANA scale-out installations, you can configure
HANA system replication, and protect the solution with a Pacemaker cluster to allow
automatic failover. When an active node fails, the cluster fails over the HANA resources
to the other site.
In the following diagram, there are three HANA nodes on each site, and a majority
maker node to prevent a "split-brain" scenario. The instructions can be adapted to
include more VMs as HANA DB nodes.
The HANA shared file system /hana/shared in the presented architecture can be
provided by Azure NetApp Files or NFS share on Azure Files. The HANA shared file
system is NFS mounted on each HANA node in the same HANA system replication site.
File systems /hana/data and /hana/log are local file systems and aren't shared between
the HANA DB nodes. SAP HANA will be installed in non-shared mode.
For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage
configurations.
) Important
If you deploy all HANA file systems on Azure NetApp Files, then for production
systems, where performance is key, we recommend that you evaluate and consider
using Azure NetApp Files application volume group for SAP HANA.
The preceding diagram shows three subnets represented within one Azure virtual
network, following the SAP HANA network recommendations:
Because /hana/data and /hana/log are deployed on local disks, it isn't necessary to
deploy a separate subnet and separate virtual network cards for communication to the
storage.
If you're using Azure NetApp Files, the NFS volumes for /hana/shared , are deployed in a
separate subnet, delegated to Azure NetApp Files: anf 10.23.1.0/26.
Three virtual machines to serve as HANA DB nodes for HANA replication site
1: hana-s1-db1, hana-s1-db2 and hana-s1-db3.
Three virtual machines to serve as HANA DB nodes for HANA replication site
2: hana-s2-db1, hana-s2-db2 and hana-s2-db3.
A small virtual machine to serve as majority maker: hana-s-mm.
The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA,
as published in the SAP HANA hardware directory . When you're deploying the
HANA DB nodes, make sure to select accelerated networking.
For the majority maker node, you can deploy a small VM, because this VM doesn't
run any of the SAP HANA resources. The majority maker VM is used in the cluster
configuration to achieve an odd number of cluster nodes in a split-brain scenario.
The majority maker VM only needs one virtual network interface in the client
subnet in this example.
Deploy local managed disks for /hana/data and /hana/log . The minimum
recommended storage configuration for /hana/data and /hana/log is described in
SAP HANA Azure VMs storage configurations.
Deploy the primary network interface for each VM in the client virtual network
subnet. When the VM is deployed via Azure portal, the network interface name is
automatically generated. In this article, we'll refer to the automatically generated,
primary network interfaces as hana-s1-db1-client, hana-s1-db2-client, hana-s1-
db3-client, and so on. These network interfaces are attached to the client Azure
virtual network subnet.
) Important
Make sure that the operating system you select is SAP-certified for SAP HANA
on the specific VM types that you're using. For a list of SAP HANA certified
VM types and operating system releases for those types, see SAP HANA
certified IaaS platforms . Drill into the details of the listed VM type to get
the complete list of SAP HANA-supported operating system releases for that
type.
2. Create six network interfaces, one for each HANA DB virtual machine, in the inter
virtual network subnet (in this example, hana-s1-db1-inter, hana-s1-db2-inter,
hana-s1-db3-inter, hana-s2-db1-inter, hana-s2-db2-inter, and hana-s2-db3-
inter).
3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr
virtual network subnet (in this example, hana-s1-db1-hsr, hana-s1-db2-hsr, hana-
s1-db3-hsr, hana-s2-db1-hsr, hana-s2-db2-hsr, and hana-s2-db3-hsr).
4. Attach the newly created virtual network interfaces to the corresponding virtual
machines:
a. Go to the virtual machine in the Azure portal .
b. On the left pane, select Virtual Machines. Filter on the virtual machine name
(for example, hana-s1-db1), and then select the virtual machine.
c. On the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface dropdown list, select the already created network interfaces
for the inter and hsr subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hana-s1-db2, hana-s1-db3, hana-s2-db1, hana-s2-db2 and hana-s2-db3)
g. Leave the virtual machines in the stopped state for now.
5. Enable accelerated networking for the additional network interfaces for the inter
and hsr subnets by doing the following:
Azure CLI
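The Azure CLI commands for this step weren't included above. A sketch, assuming the example NIC names from this article and a hypothetical resource group name hana-rg; run it while the VMs are still deallocated:

```shell
# Sketch: enable accelerated networking on the additional inter and hsr NICs.
# "hana-rg" is a hypothetical resource group name - replace with your own.
az network nic update --resource-group hana-rg --name hana-s1-db1-inter --accelerated-networking true
az network nic update --resource-group hana-rg --name hana-s1-db1-hsr --accelerated-networking true
# Repeat for the -inter and -hsr NICs of the remaining HANA DB VMs.
```

After the NICs are updated, start the virtual machines again.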
7 Note
For HANA scale-out, select the NIC for the client subnet when adding the
virtual machines to the backend pool.
The full set of commands in the Azure CLI and PowerShell adds the VMs with the
primary NIC to the backend pool.
Azure Portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
1. Frontend IP Configuration: Create a frontend IP. Select the same virtual
network and subnet as your DB virtual machines.
2. Backend Pool: Create a backend pool and add the DB VMs.
3. Inbound rules: Create a load balancing rule. Follow the same steps for both
load balancing rules.
7 Note
When you're using the standard load balancer, you should be aware of the
following limitation. When you place VMs without public IP addresses in the back-
end pool of an internal load balancer, there's no outbound internet connectivity. To
allow routing to public end points, you need to perform additional configuration.
For more information, see Public endpoint connectivity for Virtual Machines using
Azure Standard Load Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For details, see Load Balancer health probes and
SAP note 2382421 .
Deploy NFS
There are two options for deploying Azure native NFS for /hana/shared . You can deploy
an NFS volume on Azure NetApp Files or an NFS share on Azure Files. Azure Files
supports the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and NFSv3.
The next sections describe the steps to deploy NFS - you'll need to select only one of
the options.
Tip
Choose whether to deploy /hana/shared on an NFS share on Azure Files or on an
NFS volume on Azure NetApp Files; the following sections cover both options.
In this example, you use the following Azure NetApp Files volumes:
In this example, the following Azure Files NFS shares were used:
1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries are added to /etc/hosts for this example.
Bash
# Client subnet
10.23.0.11 hana-s1-db1
10.23.0.12 hana-s1-db2
10.23.0.13 hana-s1-db3
10.23.0.14 hana-s2-db1
10.23.0.15 hana-s2-db2
10.23.0.16 hana-s2-db3
10.23.0.17 hana-s-mm
# Internode subnet
10.23.1.138 hana-s1-db1-inter
10.23.1.139 hana-s1-db2-inter
10.23.1.140 hana-s1-db3-inter
10.23.1.141 hana-s2-db1-inter
10.23.1.142 hana-s2-db2-inter
10.23.1.143 hana-s2-db3-inter
# HSR subnet
10.23.1.202 hana-s1-db1-hsr
10.23.1.203 hana-s1-db2-hsr
10.23.1.204 hana-s1-db3-hsr
10.23.1.205 hana-s2-db1-hsr
10.23.1.206 hana-s2-db2-hsr
10.23.1.207 hana-s2-db3-hsr
Bash
vi /etc/sysctl.d/ms-az.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
Tip
Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports
explicitly in the sysctl configuration files, to allow the SAP host agent to manage
the port ranges. For more details, see SAP note 2382421 .
Bash
Configure RHEL, as described in the Red Hat customer portal and in the
following SAP notes:
1. [AH] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.
Bash
vi /etc/sysctl.d/91-NetApp-HANA.conf
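The contents of the configuration file aren't reproduced above. A sketch of typical values from SAP note 3024346; always verify the values against the current version of the note before use:

```
# Sketch based on SAP note 3024346 - verify against the note before use
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
```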
2. [AH] Adjust the sunrpc settings, as recommended in SAP note 3024346 - Linux
Kernel Settings for NetApp NFS .
Bash
vi /etc/modprobe.d/sunrpc.conf
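The file content isn't shown above. Per SAP note 3024346, the setting is typically the following single line:

```
# Sketch: sunrpc slot table setting recommended in SAP note 3024346
options sunrpc tcp_max_slot_table_entries=128
```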
Bash
mkdir -p /hana/shared
4. [AH] Verify the NFS domain setting. Make sure that the domain is configured as
the default Azure NetApp Files domain: defaultv4iddomain.com . Make sure the
mapping is set to nobody .
(This step is only needed if you're using Azure NetApp Files NFSv4.1.)
) Important
If there's a mismatch between the NFS domain configuration on the NFS client
and the NFS server, the permissions for files on Azure NetApp volumes that are
mounted on the VMs will be displayed as nobody .
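The verification commands aren't reproduced above. As a sketch, the NFS domain and the nobody mapping can be checked in /etc/idmapd.conf on each VM, which is expected to look like this:

```
[General]
Domain = defaultv4iddomain.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
```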
Bash
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.9.0.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
For more information on how to change the nfs4_disable_idmapping parameter,
see the Red Hat customer portal .
6. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
Bash
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared
7. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
Bash
sudo mount -o
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 10.23.1.7:/HN1-shared-s2 /hana/shared
8. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs, with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr
=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr
=10.23.1.7
mkdir -p /hana/shared
2. [AH1] Mount the Azure Files NFS shares on the SITE1 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entry
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount all volumes
sudo mount -a
3. [AH2] Mount the Azure Files NFS shares on the SITE2 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entries
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount the volume
sudo mount -a
4. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,a
ddr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,a
ddr=10.23.0.35
Set up the disk layout with Logical Volume Manager (LVM). The following example
assumes that each HANA virtual machine has three data disks attached, and that these
disks are used to create two volumes.
Bash
ls /dev/disk/azure/scsi1/lun*
Example output:
Bash
/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2
2. [AH] Create physical volumes for all of the disks that you want to use:
Bash
3. [AH] Create a volume group for the data files. Use one volume group for the log
files and one for the shared directory of SAP HANA:
Bash
4. [AH] Create the logical volumes. A linear volume is created when you use lvcreate
without the -i switch. We suggest that you create a striped volume for better I/O
performance. Align the stripe sizes to the values documented in SAP HANA VM
storage configurations. The -i argument should be the number of the underlying
physical volumes and the -I argument is the stripe size. In this article, two physical
volumes are used for the data volume, so the -i switch argument is set to 2 . The
stripe size for the data volume is 256 KiB . One physical volume is used for the log
volume, so you don't need to use explicit -i or -I switches for the log volume
commands.
) Important
Use the -i switch, and set it to the number of the underlying physical
volumes, when you use more than one physical volume for each data or log
volume. Use the -I switch to specify the stripe size when you're creating a
striped volume. See SAP HANA VM storage configurations for recommended
storage configurations, including stripe sizes and number of disks.
Bash
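The commands for steps 2 through 4 weren't reproduced above. A sketch using the example LUNs and volume group names from this article (two data disks striped at 256 KiB, one log disk); adapt the disk layout and stripe sizes to your own storage configuration:

```shell
# Step 2 (sketch): create physical volumes on all attached data disks
pvcreate /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2

# Step 3 (sketch): one volume group for data, one for log
vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2

# Step 4 (sketch): striped data volume (-i 2 physical volumes, -I 256 KiB stripe),
# linear log volume on a single physical volume
lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
mkfs.xfs /dev/vg_hana_data_HN1/hana_data
mkfs.xfs /dev/vg_hana_log_HN1/hana_log
```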
5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:
Bash
6. [AH] Create fstab entries for the logical volumes and mount:
Bash
sudo vi /etc/fstab
Bash
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_data_HN1-hana_data
/hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_log_HN1-hana_log
/hana/log/HN1 xfs defaults,nofail 0 2
Bash
sudo mount -a
Installation
In this example for deploying SAP HANA in a scale-out configuration with HSR on Azure
VMs, you're using HANA 2.0 SP4.
Bash
3. [1] Verify that you can sign in to hana-s1-db2 and hana-s1-db3 via secure shell (SSH),
without being prompted for a password. If that isn't the case, exchange SSH keys,
as documented in Using key-based authentication .
Bash
ssh root@hana-s1-db2
ssh root@hana-s1-db3
4. [2] Verify that you can sign in to hana-s2-db2 and hana-s2-db3 via SSH, without
being prompted for a password. If that isn't the case, exchange SSH keys, as
documented in Using key-based authentication .
Bash
ssh root@hana-s2-db2
ssh root@hana-s2-db3
5. [AH] Install additional packages, which are required for HANA 2.0 SP4. For more
information, see SAP Note 2593824 for RHEL 7.
Bash
# If using RHEL 7
yum install libgcc_s1 libstdc++6 compat-sap-c++-7 libatomic1
# If using RHEL 8
yum install libatomic libtool-ltdl.x86_64
6. [A] Disable the firewall temporarily, so that it doesn't interfere with the HANA
installation. You can re-enable it after the HANA installation is done.
Bash
# Execute as root
systemctl stop firewalld
systemctl disable firewalld
a. Start the hdblcm program as root from the HANA installation software
directory. Use the internal_network parameter and pass the address space of the
subnet that's used for the internal HANA inter-node communication.
Bash
./hdblcm --internal_network=10.23.1.128/26
2. [2] Repeat the preceding step to install SAP HANA on the first node on SITE 2.
Display global.ini, and ensure that the configuration for the internal SAP HANA
internode communication is in place. Verify the communication section. It should
have the address space for the inter subnet, and listeninterface should be set
to .internal . Verify the internal_hostname_resolution section. It should have the
IP addresses for the HANA virtual machines that belong to the inter subnet.
Bash
sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
# Example from SITE1
[communication]
internal_network = 10.23.1.128/26
listeninterface = .internal
[internal_hostname_resolution]
10.23.1.138 = hana-s1-db1
10.23.1.139 = hana-s1-db2
10.23.1.140 = hana-s1-db3
Bash
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no
Bash
6. [1,2] Verify that the client interface uses the IP addresses from the client subnet
for communication.
Bash
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB
'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.14"
For information about how to verify the configuration, see SAP note 2183363 -
Configuration of SAP HANA internal network .
7. [AH] Change permissions on the data and log directories to avoid a HANA
installation error.
Bash
sudo chmod o+w -R /hana/data /hana/log
8. [1] Install the secondary HANA nodes. The example instructions in this step are for
SITE 1.
Bash
cd /hana/shared/HN1/hdblcm
./hdblcm
9. [2] Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.
Bash
Bash
Bash
Register the second site to start the system replication. Run the following
command as <hanasid>adm:
Bash
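The registration command wasn't included above. A sketch, using this article's example names (SID HN1, instance 03, site names HANA_S1/HANA_S2); run as hn1adm on the first node of SITE 2 and verify the exact options against your HANA version:

```shell
# Sketch: register SITE 2 as the system replication secondary
sapcontrol -nr 03 -function StopSystem
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 \
    --replicationMode=sync --name=HANA_S2
sapcontrol -nr 03 -function StartSystem
```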
3. [1] Check the replication status and wait until all databases are in sync.
Bash
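The status command wasn't included above. As a sketch, run the following as hn1adm on the primary site and repeat until all services report an active, in-sync replication state:

```shell
# Sketch: check HANA system replication state and per-service status
hdbnsutil -sr_state
cdpy
python systemReplicationStatus.py
```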
4. [1,2] Change the HANA configuration so that communication for HANA system
replication is directed through the HANA system replication virtual network
interfaces.
Bash
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function
StopSystem HDB
b. Edit global.ini to add the host mapping for HANA system replication. Use the IP
addresses from the hsr subnet.
Bash
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.202 = hana-s1-db1
10.23.1.203 = hana-s1-db2
10.23.1.204 = hana-s1-db3
10.23.1.205 = hana-s2-db1
10.23.1.206 = hana-s2-db2
10.23.1.207 = hana-s2-db3
Bash
For more information, see Host name resolution for system replication .
Bash
# Execute as root
systemctl start firewalld
systemctl enable firewalld
b. Open the necessary firewall ports. You will need to adjust the ports for your
HANA instance number.
Bash
# Execute as root
sudo firewall-cmd --zone=public --add-port=
{30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,11
29,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp --
permanent
sudo firewall-cmd --zone=public --add-port=
{30301,30303,30306,30307,30313,30315,30317,30340,30341,30342,1128,11
29,40302,40301,40307,40303,40340,50313,50314,30310,30302}/tcp
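The port list above follows SAP HANA's instance-number scheme: most HANA ports embed the two-digit instance number (03 in this example). A quick sketch of the pattern, so you can adapt the firewall rules to a different instance number:

```shell
# SAP HANA ports embed the instance number NN: 3<NN>xx for database services,
# 4<NN>xx for system replication. Instance 03 is used in this article.
NN=03
echo "nameserver:          3${NN}01"
echo "indexserver:         3${NN}03"
echo "HSR (first channel): 4${NN}01"
```

With NN=03 this prints 30301, 30303, and 40301, matching the ports opened above.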
) Important
Don't set quorum expected-votes to 2. This isn't a two-node cluster. Make sure that
the cluster property concurrent-fencing is enabled, so that node fencing is
deserialized.
Bash
2. [AH] Unmount file system /hana/shared , which was temporarily mounted for the
installation on all HANA DB VMs. Before you can unmount it, you need to stop any
processes and sessions that are using the file system.
Bash
umount /hana/shared
3. [1] Create the file system cluster resources for /hana/shared in the disabled state.
You use --disabled because you have to define the location constraints before the
mounts are enabled.
Depending on whether you deployed /hana/shared on an NFS share on Azure Files or
on an NFS volume on Azure NetApp Files, follow the matching steps.
Bash
# clone the /hana/shared file system resources for both site1 and
site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1
interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1
interleave=true
The suggested timeout values allow the cluster resources to withstand protocol-
specific pauses related to NFSv4.1 lease renewals on Azure NetApp Files. For more
information, see NFS in NetApp Best practice .
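The resource creation commands referenced in step 3 weren't reproduced above. A sketch for the Azure NetApp Files variant, reusing this article's mount options; the resource names, timeouts, and options are examples and may need adapting to your setup:

```shell
# Sketch: /hana/shared file system resources (ANF, NFSv4.1), created disabled
pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem \
  device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared fstype=nfs \
  options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys,nfsvers=4.1,lock' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem \
  device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared fstype=nfs \
  options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys,nfsvers=4.1,lock' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120
```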
In this example, the '/hana/shared' file system is deployed on NFS on Azure
Files. Follow the steps in this section, only if you're using NFS on Azure Files.
Bash
# clone the /hana/shared file system resources for both site1 and
site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1
interleave=true
pcs resource clone fs_hana_shared_s2 meta clone-node-max=1
interleave=true
The on-fail=fence attribute is also added to the monitor operation. With this
option, if the monitor operation fails on a node, that node is immediately
fenced. Without this option, the default behavior is to stop all resources that
depend on the failed resource, then restart the failed resource, and then start
all the resources that depend on the failed resource. Not only can this
behavior take a long time when an SAP HANA resource depends on the failed
resource, but it also can fail altogether. The SAP HANA resource can't stop
successfully if the NFS share holding the HANA binaries is inaccessible.
The timeouts in the above configurations may need to be adapted to the
specific SAP setup.
4. [1] Configure and verify the node attributes. All SAP HANA DB nodes on replication
site 1 are assigned attribute S1 , and all SAP HANA DB nodes on replication site 2
are assigned attribute S2 .
Bash
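The attribute commands weren't included above. A sketch; the attribute name NFS_HN1_SITE is an example derived from the SID and must match the name used in your location constraints:

```shell
# Sketch: tag each HANA DB node with its replication site
pcs node attribute hana-s1-db1 NFS_HN1_SITE=S1
pcs node attribute hana-s1-db2 NFS_HN1_SITE=S1
pcs node attribute hana-s1-db3 NFS_HN1_SITE=S1
pcs node attribute hana-s2-db1 NFS_HN1_SITE=S2
pcs node attribute hana-s2-db2 NFS_HN1_SITE=S2
pcs node attribute hana-s2-db3 NFS_HN1_SITE=S2
```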
5. [1] Configure the constraints that determine where the NFS file systems will be
mounted, and enable the file system resources.
Bash
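The constraint and enable commands weren't included above. A sketch that pins each file system clone to its own site by using a node attribute (NFS_HN1_SITE is a hypothetical name), and then enables the resources; all names are examples:

```shell
# Sketch: mount each /hana/shared clone only on its own replication site
pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never \
  score=-INFINITY NFS_HN1_SITE ne S1
pcs constraint location fs_hana_shared_s2-clone rule resource-discovery=never \
  score=-INFINITY NFS_HN1_SITE ne S2
# Enable the file system resources
pcs resource enable fs_hana_shared_s1
pcs resource enable fs_hana_shared_s2
```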
When you enable the file system resources, the cluster will mount the
/hana/shared file systems.
6. [AH] Verify that the Azure NetApp Files volumes are mounted under /hana/shared ,
on all HANA DB VMs on both sites.
Bash
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,prot
o=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock
=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,prot
o=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock
=none,addr=10.23.1.7
Bash
sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,p
roto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_l
ock=none,addr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,p
roto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_l
ock=none,addr=10.23.0.35
7. [1] Configure and clone the attribute resources, and configure the constraints, as
follows:
Bash
If your configuration includes file systems other than /hana/shared , and these
file systems are NFS mounted, then include the sequential=false option. This
option ensures that there are no ordering dependencies among the file
systems. All NFS mounted file systems must start before the corresponding
attribute resource, but they don't need to start in any order relative to each
other. For more information, see How do I configure SAP HANA scale-out
HSR in a Pacemaker cluster when the HANA file systems are NFS shares .
8. [1] Place Pacemaker in maintenance mode, in preparation for the creation of the
HANA cluster resources.
Bash
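The command wasn't included above; with pcs, this is typically:

```shell
# Sketch: put the cluster into maintenance mode
pcs property set maintenance-mode=true
```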
1. [A] Install the HANA scale-out resource agent on all cluster nodes, including the
majority maker.
Bash
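The install command wasn't included above. A sketch; verify the package name for your RHEL release:

```shell
# Sketch: install the SAP HANA scale-out resource agents on all cluster nodes
yum install -y resource-agents-sap-hana-scaleout
```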
7 Note
2. [1,2] Install the HANA system replication hook on one HANA DB node on each
system replication site. SAP HANA should still be down.
Bash
mkdir -p /hana/shared/myHooks
cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
chown -R hn1adm:sapsys /hana/shared/myHooks
b. Adjust global.ini .
Bash
# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
3. [AH] The cluster requires sudoers configuration on the cluster nodes for <sid>adm.
In this example, you achieve that by creating a new file. Run the commands as
root .
Bash
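The sudoers file content wasn't included above. A sketch of a file such as /etc/sudoers.d/20-saphana, assuming SID HN1 and the SAPHanaSR hook; verify the exact attribute name (hana_hn1_glob_srHook here) against your installation:

```
# Sketch: allow hn1adm to set the system replication state attribute without a password
Cmnd_Alias SOK   = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK   -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
```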
Bash
5. [1] Verify the hook installation. Run as <sid>adm on the active HANA system
replication site.
Bash
cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example entries
# 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
6. [1] Create the HANA cluster resources. Run the following commands as root .
Bash
If you're building a RHEL >= 8.x cluster, use the following commands:
Bash
Bash
If you're building a RHEL >= 8.x cluster, use the following commands:
Bash
) Important
resume automatically.
Bash
Bash
If you're building a RHEL >= 8.x cluster, use the following commands:
Bash
7. [1] Place the cluster out of maintenance mode. Make sure that the cluster status is
ok , and that all of the resources are started.
Bash
The timeouts in the preceding configuration are just examples, and might
need to be adapted to the specific HANA setup. For instance, you might need
to increase the start timeout, if it takes longer to start the SAP HANA
database.
This section describes the additional steps you must take to manage this type of system
replication in a Red Hat high availability cluster, with a second virtual IP address.
Before proceeding further, make sure you have fully configured a Red Hat high
availability cluster, managing an SAP HANA database, as described earlier in this article.
Bash
Bash
# RHEL 8.x:
pcs constraint location g_ip_HN1_03 rule score=500 role=master
hana_hn1_roles eq "master1:master:worker:master" and hana_hn1_clone_state eq
PROMOTED
pcs constraint location g_secip_HN1_03 rule score=50 hana_hn1_roles eq
'master1:master:worker:master'
pcs constraint order promote SAPHana_HN1_HDB03-clone then start g_ip_HN1_03
pcs constraint order start g_ip_HN1_03 then start g_secip_HN1_03
pcs constraint colocation add g_secip_HN1_03 with Slave SAPHana_HN1_HDB03-
clone 5
# RHEL 7.x:
pcs constraint location g_ip_HN1_03 rule score=500 role=master
hana_hn1_roles eq "master1:master:worker:master" and hana_hn1_clone_state eq
PROMOTED
pcs constraint location g_secip_HN1_03 rule score=50 hana_hn1_roles eq
'master1:master:worker:master'
pcs constraint order promote msl_SAPHana_HN1_HDB03 then start g_ip_HN1_03
pcs constraint order start g_ip_HN1_03 then start g_secip_HN1_03
pcs constraint colocation add g_secip_HN1_03 with Slave
msl_SAPHana_HN1_HDB03 5
Make sure that the cluster status is ok , and that all of the resources are started. The
second virtual IP will run on the secondary site, along with the SAP HANA secondary
resource.
Bash
In the next section, you can find the typical set of failover tests to run.
When you're testing server crash, the second virtual IP resources (secvip_HN1_03)
and the Azure Load Balancer port resource (secnc_HN1_03) run on the primary
server, alongside the primary virtual IP resources. While the secondary server is
down, the applications that are connected to the read-enabled HANA database will
connect to the primary HANA database. This behavior is expected. It allows
applications that are connected to the read-enabled HANA database to operate
while a secondary server is unavailable.
During failover and fallback, the existing connections for applications that are
using the second virtual IP to connect to the HANA database might be interrupted.
Bash
Bash
#mode: PRIMARY
#site id: 1
#site name: HANA_S1
2. Verify the cluster configuration for a failure scenario, when a node loses access to
the NFS share ( /hana/shared ).
Expected result: When you block access to the /hana/shared NFS-mounted file
system on one of the primary site VMs, the monitoring operation that performs
read/write operations on the file system fails, because it can't access the file
system, and triggers a HANA resource failover. The same result is expected when
your HANA node loses access to the NFS share.
You can check the state of the cluster resources by running crm_mon or pcs status .
Resource state before starting the test:
Bash
# Output of crm_mon
#7 nodes configured
#45 resources configured
If using NFS on ANF, first confirm the IP address for the /hana/shared ANF
volume on the primary site. You can do that by running df -kh|grep
/hana/shared .
If using NFS on Azure Files, first determine the IP address of the private end
point for your storage account.
Then, set up a temporary firewall rule to block access to the IP address of the
/hana/shared NFS file system, by executing the following command on one of the
primary site VMs.
Bash
The HANA VM that lost access to /hana/shared should restart or stop, depending
on the cluster configuration. The cluster resources are migrated to the other HANA
system replication site.
If the cluster hasn't started on the VM that was restarted, start the cluster by
running the following:
Bash
When the cluster starts, file system /hana/shared is automatically mounted. If you
set AUTOMATED_REGISTER="false" , you will need to configure SAP HANA system
replication on the secondary site. In this case, you can run these commands to
reconfigure SAP HANA as secondary.
Bash
Bash
# Output of crm_mon
#7 nodes configured
#45 resources configured
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#Active resources:
It's a good idea to test the SAP HANA cluster configuration thoroughly, by also
performing the tests documented in HA for SAP HANA on Azure VMs on RHEL.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure VMs.
Set up Pacemaker on SUSE Linux
Enterprise Server in Azure
Article • 04/08/2024
This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server (SLES) in
Azure.
Overview
In Azure, you have two options for setting up fencing in the Pacemaker cluster for SLES.
You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you
can use an SBD device.
The SBD device requires at least one additional virtual machine (VM) that acts as
an Internet Small Computer System Interface (iSCSI) target server and provides an
SBD device. These iSCSI target servers can, however, be shared with other
Pacemaker clusters. The advantage of using an SBD device is that if you're already
using SBD devices on-premises, they don't require any changes to how you
operate the Pacemaker cluster.
You can use up to three SBD devices for a Pacemaker cluster to allow an SBD
device to become unavailable (for example, during OS patching of the iSCSI target
server). If you want to use more than one SBD device per Pacemaker cluster, be sure to
deploy multiple iSCSI target servers and connect one SBD from each iSCSI target
server. We recommend using either one SBD device or three. Pacemaker can't
automatically fence a cluster node if only two SBD devices are configured and one
of them is unavailable. If you want to be able to fence when one iSCSI target server
is down, you have to use three SBD devices and, therefore, three iSCSI target
servers. That's the most resilient configuration when you're using SBDs.
) Important
When you're planning and deploying Linux Pacemaker clustered nodes and
SBD devices, do not allow the routing between your virtual machines and the
VMs that are hosting the SBD devices to pass through any other devices, such
as a network virtual appliance (NVA) .
Maintenance events and other issues with the NVA can have a negative
impact on the stability and reliability of the overall cluster configuration. For
more information, see User-defined routing rules.
To configure an SBD device, you need to attach at least one Azure shared disk to
all virtual machines that are part of the Pacemaker cluster. The advantage of an
SBD device that uses an Azure shared disk is that you don't need to deploy
additional virtual machines.
Here are some important considerations about SBD devices when you're using an
Azure shared disk:
An Azure shared disk with Premium SSD is supported as an SBD device.
SBD devices that use an Azure shared disk are supported on SLES High
Availability 15 SP01 and later.
SBD devices that use an Azure premium shared disk are supported on locally
redundant storage (LRS) and zone-redundant storage (ZRS).
Depending on the type of your deployment, choose the appropriate redundant
storage for an Azure shared disk as your SBD device.
An SBD device that uses LRS for an Azure premium shared disk (skuName -
Premium_LRS) is only supported with deployment in an availability set.
An SBD device that uses ZRS for an Azure premium shared disk (skuName -
Premium_ZRS) is recommended with deployment in availability zones.
ZRS for managed disks isn't currently available in all regions that have availability
zones. For more information, review the ZRS "Limitations" section in
Redundancy options for managed disks.
The Azure shared disk that you use for SBD devices doesn't need to be large.
The maxShares value determines how many cluster nodes can use the shared
disk. For example, you can use P1 or P2 disk sizes for your SBD device on a two-
node cluster, such as SAP ASCS/ERS or SAP HANA scale-up.
For HANA scale-out with HANA system replication (HSR) and Pacemaker, you
can use an Azure shared disk for SBD devices in clusters with up to four nodes
per replication site because of the current limit of maxShares.
We do not recommend attaching an Azure shared disk SBD device across
Pacemaker clusters.
If you use multiple Azure shared disk SBD devices, check on the limit for a
maximum number of data disks that can be attached to a VM.
For more information about limitations for Azure shared disks, carefully review
the "Limitations" section of Azure shared disk documentation.
1. Deploy new SLES 12 SP3 or higher virtual machines and connect to them via SSH.
The machines don't need to be large. Virtual machine sizes Standard_E2s_v3 or
Standard_D2s_v3 are sufficient. Be sure to use Premium storage for the OS disk.
a. Update SLES.
Bash
7 Note
You might need to reboot the OS after you upgrade or update the OS.
b. Remove packages.
To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following
packages. You can ignore errors about packages that can't be found.
Bash
Bash
Bash
In the following instructions, adjust the hostnames of your cluster nodes and the
SID of your SAP system.
Bash
Bash
3. Create the SBD device for the ASCS server of SAP System NW1.
Bash
4. Create the SBD device for the database cluster of SAP System NW1.
Bash
Bash
Bash
sudo targetcli ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block ............................................. [Storage Objects: 0]
  | o- fileio ............................................ [Storage Objects: 3]
  | | o- sbdascsnw1 .......... [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
  | | | o- alua .............................................. [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .................. [ALUA state: Active/optimized]
  | | o- sbddbnw1 .............. [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
  | | | o- alua .............................................. [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .................. [ALUA state: Active/optimized]
  | | o- sbdnfs .................. [/sbd/sbdnfs (50.0MiB) write-thru activated]
  | |   o- alua .............................................. [ALUA Groups: 1]
  | |     o- default_tg_pt_gp .................. [ALUA state: Active/optimized]
  | o- pscsi ............................................. [Storage Objects: 0]
  | o- ramdisk ........................................... [Storage Objects: 0]
  o- iscsi ....................................................... [Targets: 3]
  | o- iqn.2006-04.ascsnw1.local:ascsnw1 ............................ [TPGs: 1]
  | | o- tpg1 ........................................ [no-gen-acls, no-auth]
  | |   o- acls ................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 ....... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ................... [lun0 fileio/sbdascsnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 ....... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ................... [lun0 fileio/sbdascsnw1 (rw)]
  | |   o- luns ................................................... [LUNs: 1]
  | |   | o- lun0 ....... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
  | |   o- portals ............................................. [Portals: 1]
  | |     o- 0.0.0.0:3260 .............................................. [OK]
  | o- iqn.2006-04.dbnw1.local:dbnw1 ................................ [TPGs: 1]
  | | o- tpg1 ........................................ [no-gen-acls, no-auth]
  | |   o- acls ................................................... [ACLs: 2]
  | |   | o- iqn.2006-04.nw1-db-0.local:nw1-db-0 ........... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 ..................... [lun0 fileio/sbddbnw1 (rw)]
  | |   | o- iqn.2006-04.nw1-db-1.local:nw1-db-1 ........... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ..................... [lun0 fileio/sbddbnw1 (rw)]
  | |   o- luns ................................................... [LUNs: 1]
  | |   | o- lun0 ........... [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
  | |   o- portals ............................................. [Portals: 1]
  | |     o- 0.0.0.0:3260 .............................................. [OK]
  | o- iqn.2006-04.nfs.local:nfs .................................... [TPGs: 1]
  |   o- tpg1 ........................................ [no-gen-acls, no-auth]
  |     o- acls ................................................... [ACLs: 2]
  |     | o- iqn.2006-04.nfs-0.local:nfs-0 ................. [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ....................... [lun0 fileio/sbdnfs (rw)]
  |     | o- iqn.2006-04.nfs-1.local:nfs-1 ................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ....................... [lun0 fileio/sbdnfs (rw)]
  |     o- luns ................................................... [LUNs: 1]
  |     | o- lun0 ............... [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
  |     o- portals ............................................. [Portals: 1]
  |       o- 0.0.0.0:3260 .............................................. [OK]
  o- loopback .................................................... [Targets: 0]
  o- vhost ....................................................... [Targets: 0]
  o- xen-pvscsi .................................................. [Targets: 0]
7 Note
[A]: Applies to all nodes.
[1]: Applies only to node 1.
[2]: Applies only to node 2.
Bash
2. [A] Connect to the iSCSI devices. First, enable the iSCSI and SBD services.
Bash
Bash
sudo vi /etc/iscsi/initiatorname.iscsi
4. [1] Change the contents of the file to match the access control lists (ACLs) you used
when you created the iSCSI device on the iSCSI target server (for example, for the
NFS server).
Bash
InitiatorName=iqn.2006-04.nfs-0.local:nfs-0
Bash
sudo vi /etc/iscsi/initiatorname.iscsi
6. [2] Change the contents of the file to match the ACLs you used when you created
the iSCSI device on the iSCSI target server.
Bash
InitiatorName=iqn.2006-04.nfs-1.local:nfs-1
Bash
8. [A] Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address
of the iSCSI target server, and 3260 is the default port. iqn.2006-04.nfs.local:nfs is
one of the target names that's listed when you run the first command, iscsiadm -m
discovery .
Bash
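As a sketch of that login sequence, the following script uses the portal address and target name from this article. The commands are echoed rather than executed, so you can dry-run it before using it on a cluster node (where you'd replace `echo` with direct execution under `sudo`):

```shell
portal=10.0.0.17:3260
target=iqn.2006-04.nfs.local:nfs

# On a real node, replace 'echo' with direct execution (and add sudo).
run() { echo "+ $*"; }

# Discover the targets, log in, and make the session persist across reboots.
run iscsiadm -m discovery --type=st --portal="$portal"
run iscsiadm -m node -T "$target" --login --portal="$portal"
run iscsiadm -m node -T "$target" --op=update --name=node.startup --value=automatic
```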
9. [A] If you want to use multiple SBD devices, also connect to the second iSCSI
target server.
Bash
10. [A] If you want to use multiple SBD devices, also connect to the third iSCSI target
server.
Bash
Bash
lsscsi
Bash
The command lists three device IDs for every SBD device. We recommend using
the ID that starts with scsi-3. In the preceding example, the IDs are:
/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03
/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df
/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf
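If you script that selection, you can filter the by-id listing for the persistent names that start with scsi-3. The following is a sketch using simulated input; on a real node the list would come from `ls /dev/disk/by-id`:

```shell
# Simulated /dev/disk/by-id entries for one SBD LUN (assumed example values).
ids="scsi-36001405afb0ba8d3a3c413b8cc2cca03
scsi-14d534654202020204c756e30
wwn-0x6001405afb0ba8d3"

# Keep only the scsi-3* names, which stay stable across reboots.
printf '%s\n' "$ids" | grep '^scsi-3'
```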
a. Use the device ID of the iSCSI devices to create the new SBD devices on the first
cluster node.
Bash
b. Also create the second and third SBD devices if you want to use more than one.
Bash
Bash
sudo vi /etc/sysconfig/sbd
b. Change the property of the SBD device, enable the Pacemaker integration, and
change the start mode of SBD.
Bash
[...]
SBD_DEVICE="/dev/disk/by-id/scsi-
36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-
360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-
36001405f88f30e7c9684678bc87fe7bf"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]
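If you generate /etc/sysconfig/sbd from a script, the semicolon-separated SBD_DEVICE value can be assembled from the individual device paths. This sketch reuses the example device IDs above:

```shell
dev1=/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03
dev2=/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df
dev3=/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf

# SBD expects the devices as one semicolon-separated string.
echo "SBD_DEVICE=\"$dev1;$dev2;$dev3\""
```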
7 Note
Bash
Bash
PowerShell
$ResourceGroup = "MyResourceGroup"
$Location = "MyAzureRegion"
2. Define the disk size based on the available disk sizes for Premium SSDs. In this
example, a P1 disk size of 4 GiB is used.
PowerShell
$DiskSizeInGB = 4
$DiskName = "SBD-disk1"
3. With the -MaxSharesCount parameter, define the maximum number of cluster
nodes that can attach the shared disk for the SBD device.
PowerShell
$ShareNodes = 2
4. For an SBD device that uses LRS for an Azure premium shared disk, use the
following storage SkuName:
PowerShell
$SkuName = "Premium_LRS"
5. For an SBD device that uses ZRS for an Azure premium shared disk, use the
following storage SkuName:
PowerShell
$SkuName = "Premium_ZRS"
PowerShell
PowerShell
$VM1 = "prod-cl1-0"
$VM2 = "prod-cl1-1"
PowerShell
PowerShell
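If you prefer the Azure CLI over PowerShell, a roughly equivalent deployment might look like the following sketch. The commands are echoed rather than executed so the script can be dry-run; the parameter names follow `az disk create` and `az vm disk attach`, and the values mirror the variables defined above:

```shell
rg=MyResourceGroup
disk=SBD-disk1

# On a real subscription, replace 'echo' with direct execution.
run() { echo "+ $*"; }

# Create the shared Premium SSD; --max-shares corresponds to -MaxSharesCount.
run az disk create --resource-group "$rg" --name "$disk" \
    --size-gb 4 --sku Premium_LRS --max-shares 2

# Attach the shared disk to both cluster nodes.
for vm in prod-cl1-0 prod-cl1-1; do
  run az vm disk attach --resource-group "$rg" --vm-name "$vm" --name "$disk"
done
```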
If you want to deploy resources by using the Azure CLI or the Azure portal, you can also
refer to Deploy a ZRS disk.
Bash
Bash
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 2M 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 1G 0 part /boot
├─sda4 8:4 0 28.5G 0 part /
sdb 8:16 0 256G 0 disk
├─sdb1 8:17 0 256G 0 part /mnt
sdc 8:32 0 4G 0 disk
sr0 11:0 1 1024M 0 rom
# lsscsi
[1:0:0:0] cd/dvd Msft Virtual CD/ROM 1.0 /dev/sr0
[2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
[3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb
[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
The commands list device IDs for the SBD device. We recommend using the ID that
starts with scsi-3. In the preceding example, the ID is /dev/disk/by-id/scsi-
3600224804208a67da8073b2a9728af19.
Use the device ID from step 2 to create the new SBD devices on the first cluster
node.
Bash
Bash
sudo vi /etc/sysconfig/sbd
b. Change the property of the SBD device, enable the Pacemaker integration, and
change the start mode of the SBD device.
Bash
[...]
SBD_DEVICE="/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]
7 Note
If the SBD_DELAY_START property value is set to "no", change the value to
"yes". You must also check the SBD service file to ensure that the value of
TimeoutStartSec is greater than the value of SBD_DELAY_START. For more
information, see SBD file configuration.
Bash
Bash
Managed identity
Use the following content for the input file. You need to adapt the content to your
subscriptions. That is, replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and yyyyyyyy-yyyy-
yyyy-yyyy-yyyyyyyyyyyy with your own subscription IDs. If you have only one
subscription, remove the second entry under AssignableScopes.
JSON
{
  "Name": "Linux fence agent Role",
  "description": "Allows to power off and start virtual machines",
  "assignableScopes": [
    "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
  ],
  "actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "notActions": [],
  "dataActions": [],
  "notDataActions": []
}
Managed identity
Assign the custom role "Linux Fence Agent Role" that you created in the previous
section to each cluster VM's system-assigned managed identity. Each managed
identity needs the role assigned on every cluster VM resource. For detailed steps,
see Assign a managed identity access to a resource by using the Azure portal.
Verify that each VM's managed identity has the role assignment for all cluster
VMs.
) Important
7 Note
Bash
7 Note
On SLES 15 SP4, check the versions of the crmsh and pacemaker packages, and
make sure that the minimum version requirements are met:
crmsh-4.4.0+20221028.3e41444-150400.3.9.1 or later
pacemaker-2.1.2+20211124.ada5c3b36-150400.4.6.1 or later
2. [A] Install the components that you need for the cluster resources.
Bash
3. [A] Install the azure-lb component, which you need for the cluster resources.
Bash
7 Note
Check the version of the resource-agents package, and make sure that the
minimum version requirements are met:
SLES 12 SP4/SP5: The version must be resource-agents-
4.3.018.a7fb5035-3.30.1 or later.
SLES 15/15 SP1: The version must be resource-agents-
4.3.0184.6ee15eb2-4.13.1 or later.
a. Pacemaker occasionally creates many processes, which can exhaust the allowed
number of processes. When that happens, a heartbeat between the cluster nodes
might fail and lead to a failover of your resources. We recommend increasing the
maximum number of allowed processes by setting the following parameter:
Bash
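One common way to raise that limit on a systemd-based SLES system is through DefaultTasksMax. This is a sketch: the file location is standard, but the value 4096 is an assumption that you should size for your own workload.

```text
# /etc/systemd/system.conf
# Raise the per-unit task limit so Pacemaker child processes aren't capped too low.
DefaultTasksMax=4096
```

After changing the file, re-execute the systemd manager or restart the node for the new limit to take effect.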
b. Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .
Bash
sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
c. Make sure vm.swappiness is set to 10 to reduce swap usage and favor memory.
Bash
sudo vi /etc/sysctl.conf
# Change/set the following setting
vm.swappiness = 10
5. [A] Check the cloud-netconfig-azure package version.
Tip
Bash
# Change CLOUD_NETCONFIG_MANAGE
# CLOUD_NETCONFIG_MANAGE="yes"
CLOUD_NETCONFIG_MANAGE="no"
Bash
sudo ssh-keygen
Bash
sudo ssh-keygen
# Insert the public key you copied in the last step into the authorized
keys file on the second server
sudo vi /root/.ssh/authorized_keys
Bash
# insert the public key you copied in the last step into the authorized
keys file on the first server
sudo vi /root/.ssh/authorized_keys
9. [A] Install the fence-agents package if you're using a fencing device, based on the
Azure fence agent.
Bash
) Important
) Important
Earlier versions will not work correctly with a managed identity configuration.
10. [A] Install the Azure Python SDK and Azure Identity Python module.
Bash
Bash
# You might need to activate the public cloud extension first.
# In this example, the SUSEConnect command is for SLES 15 SP1.
SUSEConnect -p sle-module-public-cloud/15.1/x86_64
sudo zypper install python3-azure-mgmt-compute
sudo zypper install python3-azure-identity
) Important
Depending on your version and image type, you might need to activate the
public cloud extension for your OS release before you can install the Azure
Python SDK. You can check the extension by running SUSEConnect --list-
extensions . To achieve faster failover times with the Azure fence agent:
) Important
Bash
sudo vi /etc/hosts
Insert the following lines in the /etc/hosts file. Change the IP addresses and
hostnames to match your environment.
text
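Scripted, the insertion might look like the following sketch. It works on a temporary copy, so it's safe to test before pointing it at the real /etc/hosts; the node names and addresses are example values used elsewhere in this article:

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts while testing

# Append one line per cluster node; adjust names and IPs to your environment.
cat >>"$hosts" <<'EOF'
10.0.0.6 prod-cl1-0
10.0.0.7 prod-cl1-1
EOF

grep -c 'prod-cl1' "$hosts"   # prints 2
```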
If you're using SBD devices for fencing (for either the iSCSI target server or
Azure shared disk):
Bash
Bash
Bash
Bash
Bash
sudo vi /etc/corosync/corosync.conf
a. Check the following section in the file and adjust the values if they're missing or
different. Be sure to change token to 30000 to allow memory-preserving
maintenance. For more information, see the "Maintenance for virtual machines in
Azure" article for Linux or Windows.
text
[...]
  token: 30000
  token_retransmits_before_loss_const: 10
  join: 60
  consensus: 36000
  max_messages: 20
  interface {
    [...]
  }
  transport: udpu
}
nodelist {
  node {
    ring0_addr: 10.0.0.6
  }
  node {
    ring0_addr: 10.0.0.7
  }
}
logging {
  [...]
}
quorum {
  # Enable and configure quorum subsystem (default: off)
  # See also corosync.conf.5 and votequorum.5
  provider: corosync_votequorum
  expected_votes: 2
  two_node: 1
}
Bash
Tip
To avoid fence races within a two-node Pacemaker cluster, you can configure
the additional priority-fencing-delay cluster property. This property introduces
an additional delay in fencing a node that has higher total resource priority
when a split-brain scenario occurs. For additional details, see the SUSE Linux
Enterprise Server high availability extension administration guide .
Instructions for setting the priority-fencing-delay cluster property can be
found in the respective SAP ASCS/ERS (applicable only to ENSA2) and SAP
HANA scale-up high-availability documents.
1. [1] If you're using an SBD device (iSCSI target server or Azure shared disk) as a
fencing device, run the following commands. Enable the use of a fencing device,
and set the fence delay.
Bash
2. [1] If you're using an Azure fence agent for fencing, run the following commands.
After you've assigned roles to both cluster nodes, you can configure the fencing
devices in the cluster.
Bash
7 Note
Managed identity
Bash
# Adjust the command with your subscription ID and resource group of the
VM
If you're using a fencing device based on a service principal configuration, read
Change from SPN to MSI for Pacemaker clusters using Azure fencing to learn how
to convert to a managed identity configuration.
) Important
Tip
The Azure fence agent requires outbound connectivity to the public endpoints, as
documented, along with possible solutions, in Public endpoint connectivity for
VMs using standard ILB.
) Important
Previously, this document described the use of the resource agent azure-events .
The new resource agent azure-events-az fully supports Azure environments
deployed in different availability zones. We recommend using the newer
azure-events-az agent for all SAP highly available systems with Pacemaker.
1. [A] Make sure that the package for the azure-events agent is already installed and
up to date.
Bash
Bash
3. [1] Set the Pacemaker cluster health node strategy and constraint.
Bash
) Important
Don't define any other resources in the cluster starting with "health-", besides
the resources described in the next steps of the documentation.
4. [1] Set the initial value of the cluster attributes. Run the command for each
cluster node, including the majority maker VM in scale-out environments.
Bash
5. [1] Configure the resources in Pacemaker. Important: the resource names must
start with health-azure.
Bash
7 Note
Bash
7. Clear any errors during enablement and verify that the health-azure-events
resources have started successfully on all cluster nodes.
Bash
The first query execution for scheduled events can take up to two minutes.
Pacemaker testing with scheduled events can use reboot or redeploy actions for
the cluster VMs. For more information, see the scheduled events documentation.
7 Note
After you've configured the Pacemaker resources for the azure-events agent,
if you place the cluster in or out of maintenance mode, you might get warning
messages such as:
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server
for SAP applications
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High availability of SAP HANA on Azure Virtual Machines
High availability for SAP HANA on
Azure VMs on SUSE Linux Enterprise
Server
Article • 04/08/2024
To establish high availability in an on-premises SAP HANA deployment, you can use
either SAP HANA system replication or shared storage.
Currently on Azure virtual machines (VMs), SAP HANA system replication is the
only supported high availability function.
SAP HANA system replication consists of one primary node and at least one secondary
node. Changes to the data on the primary node are replicated to the secondary node
synchronously or asynchronously.
This article describes how to deploy and configure the VMs, install the cluster
framework, and install and configure SAP HANA system replication.
Before you begin, read the following SAP Notes and papers:
The SAP HANA system replication setup uses a dedicated virtual host name and virtual
IP addresses. In Azure, you need a load balancer to deploy a virtual IP address.
The preceding figure shows an example load balancer that has these configurations:
Deploy virtual machines for SAP HANA. Choose a suitable SLES image that's
supported for the HANA system. You can deploy a VM in any of the availability
options: virtual machine scale set, availability zone, or availability set.
) Important
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.
Azure Portal
Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:
1. Frontend IP Configuration: Create a front-end IP. Select the same virtual
network and subnet name as your database virtual machines.
2. Backend Pool: Create a back-end pool and add database VMs.
3. Inbound rules: Create a load-balancing rule. Follow the same steps for both
load-balancing rules.
7 Note
For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .
) Important
7 Note
When VMs that don't have public IP addresses are placed in the back-end pool of
an internal (no public IP address) standard instance of Azure Load Balancer, the
default configuration is no outbound internet connectivity. You can take extra steps
to allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs by using Azure Standard
Load Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP timestamps on Azure VMs that are placed behind Azure
Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set
parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer
health probes or SAP note 2382421 .
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune version to 3.1.1 or higher. For more
details, see saptune 3.1.1 – Do I Need to Update? .
Replace <placeholders> with the values for your SAP HANA installation.
1. [A] Set up the disk layout by using Logical Volume Manager (LVM).
We recommend that you use LVM for volumes that store data and log files. The
following example assumes that the VMs have four attached data disks that are
used to create two volumes.
a. List the available disks:

ls /dev/disk/azure/scsi1/lun*
Example output:
Output
/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2 /dev/disk/azure/scsi1/lun3
b. Create physical volumes for all the disks that you want to use:
Bash
c. Create a volume group for the data files. Use one volume group for the log files
and one volume group for the shared directory of SAP HANA:
Bash
A linear volume is created when you use lvcreate without the -i switch. We
suggest that you create a striped volume for better I/O performance. Align the
stripe sizes to the values that are described in SAP HANA VM storage
configurations. The -i argument should be the number of underlying physical
volumes, and the -I argument is the stripe size.
For example, if two physical volumes are used for the data volume, the -i
switch argument is set to 2, and the stripe size for the data volume is 256 KiB.
One physical volume is used for the log volume, so no -i or -I switch is
explicitly used for the log volume commands.
) Important
When you use more than one physical volume for each data volume, log
volume, or shared volume, use the -i switch and set it to the number of
underlying physical volumes. When you create a striped volume, use the -
I switch to specify the stripe size.
Bash
sudo lvcreate <-i number of physical volumes> <-I stripe size for
the data volume> -l 100%FREE -n hana_data vg_hana_data_<HANA SID>
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_<HANA SID>
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared_<HANA SID>
sudo mkfs.xfs /dev/vg_hana_data_<HANA SID>/hana_data
sudo mkfs.xfs /dev/vg_hana_log_<HANA SID>/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_<HANA SID>/hana_shared
e. Create the mount directories and copy the universally unique identifier (UUID)
of all the logical volumes:
Bash
f. Edit the /etc/fstab file to create fstab entries for the three logical volumes:
Bash
sudo vi /etc/fstab
Bash
Bash
sudo mount -a
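The by-UUID fstab entries can also be generated in a script. The sketch below uses a placeholder UUID and a conventional /hana/data mount point (both assumptions); on a real node you'd read each UUID with `sudo blkid`:

```shell
# Placeholder UUID; obtain the real one with: sudo blkid -s UUID -o value <logical volume>
uuid_data=11111111-2222-3333-4444-555555555555

# Emit an fstab line in the /dev/disk/by-uuid format used for the HANA volumes.
echo "/dev/disk/by-uuid/$uuid_data /hana/data xfs defaults,nofail 0 2"
```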
For demo systems, you can place your HANA data and log files on one disk.
Bash
Bash
Bash
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows you how to use the /etc/hosts file. Replace the IP addresses and the
host names in the following commands.
sudo vi /etc/hosts
b. Insert the following lines in the /etc/hosts file. Change the IP addresses and host
names to match your environment.
Bash
10.0.0.5 hn1-db-0
10.0.0.6 hn1-db-1
Bash
To install SAP HANA system replication, review chapter 4 in the SAP HANA SR
Performance Optimized Scenario guide.
5. [A] Run the hdblcm program from the HANA installation media.
Download the latest SAP host agent archive from the SAP Software Center . Run
the following command to upgrade the agent. Replace the path to the archive to
point to the file that you downloaded.
Bash
Replace <placeholders> with the values for your SAP HANA installation.
1. [1] Create the tenant database.
If you're using SAP HANA 2.0 or SAP HANA MDC, create a tenant database for
your SAP NetWeaver system.
Bash
Bash
Then, copy the system public key infrastructure (PKI) files to the secondary site:
Bash
Bash
Replace <placeholders> with the values for your SAP HANA installation.
Bash
Bash
Bash
PATH="$PATH:/usr/sap/<HANA SID>/HDB<instance number>/exe"
hdbsql -d SYSTEMDB -u system -i <instance number> "BACKUP DATA USING
FILE ('<name of initial backup file>')"
Bash
Bash
su - hdbadm
hdbnsutil -sr_enable --name=<site 1>
Bash
The susChkSrv hook extends the functionality of the main SAPHanaSR HA provider. It
acts when the HANA process hdbindexserver crashes. If a single process crashes, HANA
typically tries to restart it. Restarting the indexserver process can take a long time,
during which the HANA database isn't responsive.
With susChkSrv implemented, an immediate and configurable action is executed. The
action triggers a failover in the configured timeout period instead of waiting for the
hdbindexserver process to restart on the same node.
1. [A] Install the HANA system replication hook. The hook must be installed on both
HANA database nodes.
Tip
The SAPHanaSR Python hook can be implemented only for HANA 2.0. The
SAPHanaSR package must be at least version 0.153.
The susChkSrv Python hook requires SAP HANA 2.0 SP5, and SAPHanaSR
version 0.161.1_BF or later must be installed.
Bash
b. Adjust global.ini on each cluster node. If the requirements for the susChkSrv
hook aren't met, remove the entire [ha_dr_provider_suschksrv] block from the
following configuration.
Bash
# add to global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1
[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR
execution_order = 3
action_on_lost = fence
[trace]
ha_dr_saphanasr = info
2. [A] The cluster requires sudoers configuration on each cluster node for <SAP
SID>adm. In this example, that's achieved by creating a new file.
Bash
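The file typically grants <SAP SID>adm passwordless access to the crm_attribute calls that the hooks make. The sketch below assumes SID HN1 and site names SITE1/SITE2, and the file path is an assumption; take the exact patterns for your release from the Set up HANA HA/DR providers documentation:

```text
# /etc/sudoers.d/20-saphana (assumed path; validate the file with visudo -c)
# Needed by the SAPHanaSR hook to report system replication status
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE1 -v *
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE2 -v *
# Needed by the susChkSrv hook to fence the node
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=HN1 --case=fenceMe
```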
For details about implementing the SAP HANA system replication hook, see Set up
HANA HA/DR providers .
Bash
Run the following command as <SAP SID>adm on the active HANA system
replication site:
Bash
cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example output
# 2021-04-08 22:18:15.877583 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:18:46.531564 ha_dr_SAPHanaSR SFAIL
# 2021-04-08 22:21:26.816573 ha_dr_SAPHanaSR SOK
Bash
cdtrace
egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)'
nameserver_suschksrv.trc
# Example output
# 2022-11-03 18:06:21.116728 susChkSrv.init() version 0.7.7,
parameter info: action_on_lost=fence stop_timeout=20 kill_signal=9
# 2022-11-03 18:06:27.613588 START: indexserver event looks like
graceful tenant start
# 2022-11-03 18:07:56.143766 START: indexserver event looks like
graceful tenant start (indexserver started)
Bash
) Important
In testing, netcat stopped responding to requests because of a backlog and its
limitation of handling only one connection. The netcat resource stops listening to
the Azure Load Balancer requests, and the floating IP becomes unavailable.
For existing Pacemaker clusters, if your configuration was already changed to use
socat as described in Azure Load Balancer Detection Hardening , you don't
need to immediately switch to the azure-lb resource agent.
7 Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.
Bash
# Replace <placeholders> with your instance number, HANA system ID, and the
front-end IP address of the Azure load balancer.
# Clean up the HANA resources. The HANA resources might have failed because
of a known issue.
sudo crm resource cleanup rsc_SAPHana_<HANA SID>_HDB<instance number>
) Important
We recommend that you set AUTOMATED_REGISTER to false only while you complete
thorough failover tests, to prevent a failed primary instance from automatically
registering as secondary. When the failover tests are successfully completed, set
AUTOMATED_REGISTER to true , so that after takeover, system replication
automatically resumes.
Make sure that the cluster status is OK and that all the resources started. It doesn't
matter which node the resources are running on.
Bash
sudo crm_mon -r
To support this setup in a cluster, a second virtual IP address is required so that clients
can access the secondary read-enabled SAP HANA database. To ensure that the
secondary replication site can still be accessed after a takeover, the cluster needs to
move the virtual IP address around with the secondary of the SAPHana resource.
This section describes the extra steps that are required to manage a HANA active/read-
enabled system replication in a SUSE high availability cluster that uses a second virtual
IP address.
Before you proceed, make sure that you have fully configured the SUSE high availability
cluster that manages SAP HANA database as described in earlier sections.
Set up the load balancer for active/read-enabled system
replication
To proceed with extra steps to provision the second virtual IP, make sure that you
configured Azure Load Balancer as described in Deploy Linux VMs manually via Azure
portal.
For the standard load balancer, complete these extra steps on the same load balancer
that you created earlier.
Bash
Bash
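A sketch of the resource configuration for the second virtual IP follows. This is a hedged reconstruction: the resource names follow the naming conventions used elsewhere in this article, and the IP address 10.0.0.14 and probe port 62603 are placeholders that you must replace with your second front-end IP and health probe port.

```shell
# Hedged sketch; adjust resource names, IP address, and probe port.
sudo crm configure primitive rsc_secip_HN1_HDB03 ocf:heartbeat:IPaddr2 \
  op monitor interval="10s" timeout="20s" \
  params ip="10.0.0.14"

sudo crm configure primitive rsc_secnc_HN1_HDB03 azure-lb port=62603 \
  op monitor timeout=20s interval=10 \
  meta resource-stickiness=0

sudo crm configure group g_secip_HN1_HDB03 rsc_secip_HN1_HDB03 rsc_secnc_HN1_HDB03

# Keep the second virtual IP with the HANA secondary (Slave) role:
sudo crm configure colocation col_saphana_secip_HN1_HDB03 4000: g_secip_HN1_HDB03:Started \
  msl_SAPHana_HN1_HDB03:Slave
```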
Make sure that the cluster status is OK and that all the resources started. The second
virtual IP runs on the secondary site along with the SAPHana secondary resource.
Bash
sudo crm_mon -r
The next section describes the typical set of failover tests to execute.
Considerations when you test a HANA cluster that's configured with a read-enabled
secondary:
When you test a server crash, the second virtual IP resources ( rsc_secip_<HANA
SID>_HDB<instance number> ) and the Azure load balancer port resource
temporarily move to the primary server until the secondary server is back online.
When the secondary server is available and the cluster services are online, the
second virtual IP and port resources automatically move to the secondary server,
even though HANA system replication might not be registered as secondary. Make
sure that you register the secondary HANA database as read-enabled before you
start cluster services on that server. You can configure the HANA instance cluster
resource to automatically register the secondary by setting the parameter
AUTOMATED_REGISTER="true" .
During failover and fallback, existing application connections that use the
second virtual IP to connect to the HANA database might be interrupted.
Before you start a test, make sure that Pacemaker doesn't have any failed
actions (via crm status) and no unexpected location constraints (for example,
leftovers of a migration test), and that HANA is in sync state, for example, by
running SAPHanaSR-showAttr .
Bash
hn1-db-0:~ # SAPHanaSR-showAttr
Sites srHook
----------------
SITE2 SOK

Global cib-time
--------------------------------
global Mon Aug 13 11:26:04 2018

Hosts    clone_state lpa_hn1_lpt node_state op_mode   remoteHost    roles                            score site  srmode sync_state version                vhost
--------------------------------------------------------------------------------------------------------------------------------------------------------------
hn1-db-0 PROMOTED    1534159564  online     logreplay nws-hana-vm-1 4:P:master1:master:worker:master 150   SITE1 sync   PRIM       2.00.030.00.1522209842 nws-hana-vm-0
hn1-db-1 DEMOTED     30          online     logreplay nws-hana-vm-0 4:S:master1:master:worker:master 100   SITE2 sync   SOK        2.00.030.00.1522209842 nws-hana-vm-1
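Before you run any failover test, you can script the sync-state check. The following is a minimal, hedged sketch: it greps sample SAPHanaSR-showAttr output that's inlined as a string here; in practice, substitute the live command's output.

```shell
# Sample srHook section of SAPHanaSR-showAttr output (placeholder data;
# in practice use: showattr_output="$(SAPHanaSR-showAttr)" ).
showattr_output='Sites srHook
----------------
SITE2 SOK'

# SOK means system replication is in sync; anything else (for example SFAIL)
# means you shouldn't start a failover test yet.
if echo "$showattr_output" | grep -q "SOK"; then
  sync_state=ok
else
  sync_state=not_ok
fi
echo "$sync_state"
```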
You can migrate the SAP HANA master node by running the following command:
Bash
The cluster would migrate the SAP HANA master node and the group containing virtual
IP address to hn1-db-1 .
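The migration itself can be sketched like this (a hedged reconstruction, assuming crmsh; the SID, instance number, and target node are placeholders, and the command is built into a variable so you can inspect it before running it as root):

```shell
# Hypothetical values; replace with your SID, instance number, and target node.
SID=HN1; INSTNR=03; TARGET=hn1-db-1

# Build the command that moves the SAP HANA master to the target node:
CMD="crm resource move msl_SAPHana_${SID}_HDB${INSTNR} ${TARGET} force"
echo "$CMD"

# After the migration, remove the location constraint that the move created:
# crm resource clear msl_SAPHana_${SID}_HDB${INSTNR}
```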
When the migration is finished, the crm_mon -r output looks like this example:
Bash
With AUTOMATED_REGISTER="false" , the cluster would not restart the failed HANA
database or register it against the new primary on hn1-db-0 . In this case, configure the
HANA instance as secondary by running this command:
Bash
su - <hana sid>adm
Bash
You also need to clean up the state of the secondary node resource:
Bash
Monitor the state of the HANA resource by using crm_mon -r . When HANA is started on
hn1-db-0 , the output looks like this example:
Bash
Bash
Bash
When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, the cluster nodes try to fence each other simultaneously,
resulting in a fence race.
To ensure that the node running the HANA master takes priority and wins the fence
race in a split-brain scenario, we recommend setting the priority-fencing-delay
property in the cluster configuration. Enabling the priority-fencing-delay property
lets the cluster introduce an additional delay in the fencing action specifically on
the node hosting the HANA master resource, so that the node wins the fence race.
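A hedged sketch of that configuration follows (assuming crmsh; the 30-second delay and priority value of 100 are example values, not prescriptions, and the resource name follows this article's conventions):

```shell
# Introduce a fencing delay that favors the node with the highest priority:
sudo crm configure property priority-fencing-delay=30

# Give the HANA multi-state resource a higher priority than other resources,
# so the node hosting the HANA master wins the fence race in a split brain:
sudo crm resource meta msl_SAPHana_HN1_HDB03 set priority 100
```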
Bash
# If the iptables rule set on the server gets reset after a reboot, the rules are cleared out. If they haven't been reset, remove the iptables rules by running the following command:
iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP
Bash
hn1-db-0:~ # ps aux | grep sbd
root  1912  0.0  0.0  85420 11740 ?     SL  12:25  0:00 sbd: inquisitor
root  1929  0.0  0.0  85456 11776 ?     SL  12:25  0:00 sbd: watcher: /dev/disk/by-id/scsi-360014056f268462316e4681b704a9f73 - slot: 0 - uuid: 7b862dba-e7f7-4800-92ed-f76a4e3978c8
root  1930  0.0  0.0  85456 11776 ?     SL  12:25  0:00 sbd: watcher: /dev/disk/by-id/scsi-360014059bc9ea4e4bac4b18808299aaf - slot: 0 - uuid: 5813ee04-b75c-482e-805e-3b1e22ba16cd
root  1931  0.0  0.0  85456 11776 ?     SL  12:25  0:00 sbd: watcher: /dev/disk/by-id/scsi-36001405b8dddd44eb3647908def6621c - slot: 0 - uuid: 986ed8f8-947d-4396-8aec-b933b75e904c
root  1932  0.0  0.0  90524 16656 ?     SL  12:25  0:00 sbd: watcher: Pacemaker
root  1933  0.0  0.0 102708 28260 ?     SL  12:25  0:00 sbd: watcher: Cluster
root 13877  0.0  0.0   9292  1572 pts/0 S+  12:27  0:00 grep sbd
The <HANA SID>-db-<database 1> cluster node reboots. The Pacemaker service might not
restart. Make sure that you start it again.
Bash
After the failover, you can start the service again. If you set AUTOMATED_REGISTER="false" ,
the SAP HANA resource on the hn1-db-0 node fails to start as secondary.
In this case, configure the HANA instance as secondary by running this command:
Bash
SUSE tests
) Important
Make sure that the OS that you select is SAP certified for SAP HANA on the specific
VM types you plan to use. You can look up SAP HANA-certified VM types and their
OS releases in SAP HANA Certified IaaS Platforms . Make sure that you look at
the details of the VM type you plan to use to get the complete list of SAP HANA-
supported OS releases for that VM type.
Run all test cases that are listed in the SAP HANA SR Performance Optimized Scenario
guide or SAP HANA SR Cost Optimized Scenario guide, depending on your scenario.
You can find the guides listed in SLES for SAP best practices .
The following tests are a copy of the test descriptions of the SAP HANA SR Performance
Optimized Scenario SUSE Linux Enterprise Server for SAP Applications 12 SP1 guide. For
an up-to-date version, also read the guide itself. Always make sure that HANA is in sync
before you start the test, and make sure that the Pacemaker configuration is correct.
7 Note
The following tests are designed to be run in sequence. Each test depends on the
exit state of the preceding test.
Output
Bash
Pacemaker detects the stopped HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-0 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.
Run the following commands to register the hn1-db-0 node as secondary and
clean up the failed resource:
Bash
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0
Output
Output
Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-1 ]
Slaves: [ hn1-db-0 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
Bash
Pacemaker detects the stopped HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-1 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.
Run the following commands to register the hn1-db-1 node as secondary and
clean up the failed resource:
Bash
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
Output
Output
Bash
Pacemaker detects the killed HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-0 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.
Run the following commands to register the hn1-db-0 node as secondary and
clean up the failed resource:
Bash
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0
Bash
Output
Bash
Pacemaker detects the killed HANA instance and fails over to the other node.
When the failover is finished, the HANA instance on the hn1-db-1 node is stopped
because Pacemaker doesn't automatically register the node as HANA secondary.
Run the following commands to register the hn1-db-1 node as secondary and
clean up the failed resource.
Bash
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
Output
Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
Started: [ hn1-db-0 hn1-db-1 ]
Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
Masters: [ hn1-db-0 ]
Slaves: [ hn1-db-1 ]
Resource Group: g_ip_HN1_HDB03
rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
Output
Bash
Pacemaker detects the killed cluster node and fences the node. When the node is
fenced, Pacemaker triggers a takeover of the HANA instance. When the fenced
node is rebooted, Pacemaker doesn't start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-0 node, register the hn1-db-0 node as secondary, and clean up the failed
resource:
Bash
# run as root
# list the SBD device(s)
hn1-db-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0
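The "clean the SBD messages" step above can be scripted. The following is a hedged sketch that splits the semicolon-separated SBD_DEVICE list and emits one clear command per device; the device paths shown are placeholders. Run the emitted commands as root, then start Pacemaker with systemctl start pacemaker.

```shell
# Placeholder device list; copy the real one from /etc/sysconfig/sbd.
SBD_DEVICE="/dev/disk/by-id/scsi-AAA;/dev/disk/by-id/scsi-BBB"
NODE=hn1-db-0

# Emit one 'sbd ... message ... clear' command per device:
clear_cmds="$(echo "$SBD_DEVICE" | tr ';' '\n' | sed "s|^|sbd -d |; s|\$| message $NODE clear|")"
echo "$clear_cmds"
```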
Output
Output
Pacemaker detects the killed cluster node and fences the node. When the node is
fenced, Pacemaker triggers a takeover of the HANA instance. When the fenced
node is rebooted, Pacemaker doesn't start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-1 node, register the hn1-db-1 node as secondary, and clean up the failed
resource:
Bash
# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
Output
Output
Bash
Pacemaker detects the stopped HANA instance and marks the resource as failed
on the hn1-db-1 node. Pacemaker automatically restarts the HANA instance.
Bash
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
Output
Output
Bash
Pacemaker detects the killed HANA instance and marks the resource as failed on
the hn1-db-1 node. Run the following command to clean up the failed state.
Pacemaker then automatically restarts the HANA instance.
Bash
# run as root
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
Output
9. Test 9: Crash the secondary site node (node 2) that's running the secondary HANA
database.
Output
Bash
Pacemaker detects the killed cluster node and fences the node. When the fenced
node is rebooted, Pacemaker doesn't start automatically.
Run the following commands to start Pacemaker, clean the SBD messages for the
hn1-db-1 node, and clean up the failed resource:
Bash
# run as root
# list the SBD device(s)
hn1-db-1:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
Output
This test is relevant only when you have set up the susChkSrv hook as outlined in
Implement HANA hooks SAPHanaSR and susChkSrv.
Output
Bash
When the indexserver is terminated, the susChkSrv hook detects the event and
triggers an action to fence the hn1-db-0 node and initiate a takeover process.
Run the following commands to register the hn1-db-0 node as secondary and clean
up the failed resource:
Bash
# run as root
hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0
Output
You can run a comparable test case by causing the indexserver on the secondary
node to crash. In the event of an indexserver crash, the susChkSrv hook
recognizes the event and initiates an action to fence the secondary node.
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High availability of SAP HANA scale-up
with Azure NetApp Files on SUSE
Enterprise Linux
Article • 02/27/2024
This article describes how to configure SAP HANA system replication in a scale-up
deployment when the HANA file systems are mounted via NFS by using Azure NetApp
Files. In the example configurations and installation commands, instance number 03 and
HANA System ID HN1 are used. SAP HANA replication consists of one primary node and
at least one secondary node.
When steps in this document are marked with the following prefixes, they mean:
7 Note
This article contains references to a term that Microsoft no longer uses. When the
term is removed from the software, we'll remove it from this article.
Overview
Traditionally, in a scale-up environment, all file systems for SAP HANA are mounted from
local storage. Setting up HA of SAP HANA system replication on SUSE Enterprise Linux is
published in Set up SAP HANA system replication on SLES.
To achieve SAP HANA HA of a scale-up system on Azure NetApp Files NFS shares, we
need extra resource configuration in the cluster. This configuration is needed so that
HANA resources can recover when one node loses access to the NFS shares on Azure
NetApp Files.
SAP HANA file systems are mounted on NFS shares by using Azure NetApp Files on
each node. The file systems /hana/data, /hana/log, and /hana/shared are unique to each
node.
On hanadb1:
10.3.1.4:/hanadb1-data-mnt00001 on /hana/data
10.3.1.4:/hanadb1-log-mnt00001 on /hana/log
10.3.1.4:/hanadb1-shared-mnt00001 on /hana/shared
On hanadb2:
10.3.1.4:/hanadb2-data-mnt00001 on /hana/data
10.3.1.4:/hanadb2-log-mnt00001 on /hana/log
10.3.1.4:/hanadb2-shared-mnt00001 on /hana/shared
7 Note
The file systems /hana/shared, /hana/data, and /hana/log aren't shared between
the two nodes. Each cluster node has its own separate file systems.
SAP HA HANA system replication configuration uses a dedicated virtual hostname and
virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
presented configuration shows a load balancer with:
Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files availability by Azure region .
Important considerations
As you create your Azure NetApp Files for SAP HANA scale-up systems, be aware of the
important considerations documented in NFS v4.1 volumes on Azure NetApp Files for
SAP HANA.
While you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.
The configuration in this article is presented with simple Azure NetApp Files volumes.
) Important
All commands to mount /hana/shared in this article are presented for NFSv4.1
/hana/shared volumes. If you deployed the /hana/shared volumes as NFSv3 volumes,
don't forget to adjust the mount commands for /hana/shared for NFSv3.
2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra service level. For HANA workloads on Azure, we
recommend using the Azure NetApp Files Ultra or Premium service Level.
4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.
As you deploy the volumes, be sure to select the NFSv4.1 version. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp Files volumes are assigned automatically.
The Azure NetApp Files resources and the Azure VMs must be in the same Azure
virtual network or in peered Azure virtual networks. For example, hanadb1-data-
mnt00001, hanadb1-log-mnt00001, and so on are the volume names, and
nfs://10.3.1.4/hanadb1-data-mnt00001, nfs://10.3.1.4/hanadb1-log-mnt00001, and
so on are the file paths for the Azure NetApp Files volumes.
On hanadb1:
On hanadb2:
Deploy VMs for SAP HANA. Choose a suitable SLES image that's supported for the
HANA system. You can deploy a VM in any one of the availability options: virtual
machine scale set, availability zone, or availability set.
) Important
Make sure that the OS you select is SAP certified for SAP HANA on the specific VM
types that you plan to use in your deployment. You can look up SAP HANA-
certified VM types and their OS releases in SAP HANA Certified IaaS Platforms .
Make sure that you look at the details of the VM type to get the complete list of
SAP HANA-supported OS releases for the specific VM type.
Azure portal
Follow the steps in Create load balancer to set up a standard load balancer for a
high-availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points:
7 Note
For more information about the required ports for SAP HANA, read the chapter
Connections to Tenant Databases in the SAP HANA Tenant Databases guide or SAP
Note 2388694 .
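While you configure the load-balancing rules, one convention worth noting: throughout these documents, the HANA health probe port is derived as 625<instance number>. This is an assumption based on that convention; verify it against your probe configuration.

```shell
# Instance number 03 (placeholder) yields health probe port 62503:
INSTNR=03
PROBE_PORT="625${INSTNR}"
echo "$PROBE_PORT"
```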
) Important
Don't enable TCP timestamps on Azure VMs placed behind Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health probes.
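To make the setting persistent across reboots, you can add it to a sysctl configuration file (a sketch; the file name shown is an example of a sysctl drop-in, not a requirement):

```
# /etc/sysctl.d/ms-az.conf (or another sysctl drop-in file)
net.ipv4.tcp_timestamps = 0
```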
Bash
2. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and the
mapping is set to nobody.
Bash
Example output:
Bash
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
) Important
3. [A] Edit /etc/fstab on both nodes to permanently mount the volumes relevant to
each node. The following example shows how you mount the volumes
permanently.
Bash
sudo vi /etc/fstab
example
example
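The fstab entries can be sketched as follows for hanadb1. This is a hedged example: the IP address, volume names, and mount options are illustrative, so align them with the guidance in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.

```
# /etc/fstab on hanadb1; on hanadb2, use the hanadb2-* volume paths instead.
10.3.1.4:/hanadb1-data-mnt00001 /hana/data nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb1-log-mnt00001 /hana/log nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
```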
Bash
sudo mount -a
For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check if nconnect is supported by Azure NetApp Files on your Linux release.
4. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.
Bash
sudo nfsstat -m
example
Bash
#Check nfs4_disable_idmapping
sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping
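If the reported value isn't Y, you can persist the setting with a modprobe option. This is a hedged sketch; the file name follows a common convention and isn't mandated. Remount the NFS volumes (or reboot) for the module option to take effect.

```
# /etc/modprobe.d/nfs.conf
options nfs nfs4_disable_idmapping=Y
```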
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
host name in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines in the /etc/hosts file. Change the IP address and host
name to match your environment.
example
10.3.0.4 hanadb1
10.3.0.5 hanadb2
2. [A] Prepare the OS for running SAP HANA on Azure NetApp with NFS, as
described in SAP Note 3024346 - Linux Kernel Settings for NetApp NFS . Create
the configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp
configuration settings.
Bash
sudo vi /etc/sysctl.d/91-NetApp-HANA.conf
Add the following entries in the configuration file:
parameters
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
Bash
sudo vi /etc/sysctl.d/ms-az.conf
parameters
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
Tip
allow the SAP Host Agent to manage the port ranges. For more information,
see SAP Note 2382421 .
4. [A] Adjust the sunrpc settings, as recommended in SAP Note 3024346 - Linux
Kernel Settings for NetApp NFS .
Bash
sudo vi /etc/modprobe.d/sunrpc.conf
parameter
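The parameter referenced above, per SAP Note 3024346, is the sunrpc slot table limit. Shown here as a sketch; confirm the value against the note for your release.

```
# /etc/modprobe.d/sunrpc.conf
options sunrpc tcp_max_slot_table_entries=128
```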
Configure SLES as described in the following SAP Notes based on your SLES
version:
Starting with HANA 2.0 SPS 01, Multitenant Database Containers (MDC) is the
default option. When you install the HANA system, SYSTEMDB and a tenant with
the same SID are created together. In some cases, you don't want the default
tenant. If you don't want to create the initial tenant along with the installation,
follow the instructions in SAP Note 2629711 .
a. Start the hdblcm program from the HANA installation software directory.
Bash
./hdblcm
Bash
Cluster configuration
This section describes the necessary steps that are required for the cluster to operate
seamlessly when SAP HANA is installed on NFS shares by using Azure NetApp Files.
Bash
sudo crm_mon -r
Example output:
Output
Bash
2. [1] Configure the cluster to add the directory structure for monitoring.
Bash
3. [1] Clone and check the newly configured volume in the cluster.
Bash
Example output:
Bash
# Cluster Summary:
# Stack: corosync
# Current DC: hanadb1 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Tue Nov 2 17:57:39 2021
# Last change: Tue Nov 2 17:57:38 2021 by root via crm_attribute on
hanadb1
# 2 nodes configured
# 11 resource instances configured
# Node List:
# Online: [ hanadb1 hanadb2 ]
The on-fail=fence attribute is also added to the monitor operation. With this
option, if the monitor operation fails on a node, that node is immediately fenced.
) Important
1. Before you start a test, make sure that Pacemaker doesn't have any failed action
(via crm status) and no unexpected location constraints (for example, leftovers of a
migration test). Also, ensure that HANA system replication is in sync state, for
example, with systemReplicationStatus .
Bash
Bash
SAPHanaSR-showAttr
3. Verify the cluster configuration for a failure scenario when a node is shut down.
The following example shows shutting down node 1:
Bash
Example output:
Bash
#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Mon Nov 8 23:25:36 2021
# Last change: Mon Nov 8 23:25:19 2021 by root via crm_attribute on
hanadb2
# 2 nodes configured
# 11 resource instances configured
# Node List:
# Online: [ hanadb1 hanadb2 ]
# Full List of Resources:
# Clone Set: cln_azure-events [rsc_azure-events]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
# Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
(promotable):
# Masters: [ hanadb2 ]
# Stopped: [ hanadb1 ]
# Resource Group: g_ip_HN1_HDB03:
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb2
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2
# rsc_st_azure (stonith:fence_azure_arm): Started hanadb2
# Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]:
# Started: [ hanadb1 hanadb2 ]
Bash
sudo su - hn1adm
sapcontrol -nr 03 -function StopWait 600 10
Bash
Example output:
example
Bash
Bash
sudo SAPHanaSR-showAttr
4. Verify the cluster configuration for a failure scenario when a node loses access to
the NFS share (/hana/shared).
The SAP HANA resource agents depend on binaries stored on /hana/shared to
perform operations during failover. File system /hana/shared is mounted over NFS
in the presented scenario.
It's difficult to simulate a failure, where one of the servers loses access to the NFS
share. As a test, you can remount the file system as read-only. This approach
validates that the cluster can fail over if access to /hana/shared is lost on the active
node.
The node that performs read/write operations on the file system fails, because
it can't write anything to the file system, and a HANA resource failover is
performed. The same result is expected when your HANA node loses access to the
NFS shares.
Bash
#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Mon Nov 8 23:01:27 2021
# Last change: Mon Nov 8 23:00:46 2021 by root via crm_attribute on
hanadb1
# 2 nodes configured
# 11 resource instances configured
#Node List:
# Online: [ hanadb1 hanadb2 ]
You can place /hana/shared in read-only mode on the active cluster node by using
this command:
Bash
The server hanadb1 either reboots or powers off based on the action set. After the
server hanadb1 is down, the HANA resource moves to hanadb2 . You can check the
status of the cluster from hanadb2 .
Bash
#Cluster Summary:
# Stack: corosync
# Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-
2.0.5+20201202.ba59be712) - partition with quorum
# Last updated: Wed Nov 10 22:00:27 2021
# Last change: Wed Nov 10 21:59:47 2021 by root via crm_attribute on
hanadb2
# 2 nodes configured
# 11 resource instances configured
#Node List:
# Online: [ hanadb1 hanadb2 ]
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
Deploy a SAP HANA scale-out system
with standby node on Azure VMs by
using Azure NetApp Files on SUSE Linux
Enterprise Server
Article • 07/11/2023
This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with standby on Azure virtual machines (VMs) by using Azure NetApp
Files for the shared storage volumes.
In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1. The examples are based on HANA 2.0 SP4 and SUSE
Linux Enterprise Server for SAP 12 SP4.
Before you begin, refer to the following SAP notes and papers:
Overview
One method for achieving HANA high availability is by configuring host auto failover. To
configure host auto failover, you add one or more virtual machines to the HANA system
and configure them as standby nodes. When an active node fails, a standby node
automatically takes over. In the presented configuration with Azure virtual machines,
you achieve auto failover by using NFS on Azure NetApp Files.
7 Note
The standby node needs access to all database volumes. The HANA volumes must
be mounted as NFSv4 volumes. The improved file lease-based locking mechanism
in the NFSv4 protocol is used for I/O fencing.
) Important
To build the supported configuration, you must deploy the HANA data and log
volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The
HANA host auto-failover configuration with standby node is not supported with
NFSv3.
In the preceding diagram, which follows SAP HANA network recommendations, three
subnets are represented within one Azure virtual network:
The Azure NetApp Files volumes are in a separate subnet, delegated to Azure NetApp Files.
client 10.23.0.0/24
storage 10.23.2.0/24
hana 10.23.3.0/24
anf 10.23.1.0/26
Set up the Azure NetApp Files infrastructure
Before you proceed with the set up for Azure NetApp Files infrastructure, familiarize
yourself with the Azure NetApp Files documentation.
Azure NetApp Files is available in several Azure regions . Check to see whether your
selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see Azure
NetApp Files Availability by Azure Region .
Important considerations
As you're creating your Azure NetApp Files for SAP NetWeaver on SUSE High Availability
architecture, be aware of the important considerations documented in NFS v4.1 volumes
on Azure NetApp Files for SAP HANA.
As you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be
aware of the recommendations in NFS v4.1 volumes on Azure NetApp Files for SAP
HANA.
The configuration in this article is presented with simple Azure NetApp Files Volumes.
) Important
2. Set up an Azure NetApp Files capacity pool by following the instructions in Set up
an Azure NetApp Files capacity pool.
The HANA architecture presented in this article uses a single Azure NetApp Files
capacity pool at the Ultra Service level. For HANA workloads on Azure, we
recommend using an Azure NetApp Files Ultra or Premium service Level.
4. Deploy Azure NetApp Files volumes by following the instructions in Create an NFS
volume for Azure NetApp Files.
As you're deploying the volumes, be sure to select the NFSv4.1 version. Currently,
access to NFSv4.1 requires being added to an allowlist. Deploy the volumes in the
designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp
volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. For example,
HN1-data-mnt00001, HN1-log-mnt00001, and so on, are the volume names and
nfs://10.23.1.5/HN1-data-mnt00001, nfs://10.23.1.4/HN1-log-mnt00001, and so on,
are the file paths for the Azure NetApp Files volumes.
In this example, we used a separate Azure NetApp Files volume for each HANA
data and log volume. For a more cost-optimized configuration on smaller or non-
productive systems, it's possible to place all data mounts and all logs mounts on a
single volume.
3. Create the additional network interfaces, and attach the network interfaces to the
corresponding VMs.
Each virtual machine has three network interfaces, which correspond to the three
Azure virtual network subnets ( client , storage and hana ).
For more information, see Create a Linux virtual machine in Azure with multiple
network interface cards.
) Important
For SAP HANA workloads, low latency is critical. To achieve low latency, work with
your Microsoft representative to ensure that the virtual machines and the Azure
NetApp Files volumes are deployed in close proximity. When you're onboarding a
new SAP HANA system that uses SAP HANA on Azure NetApp Files, submit the
necessary information.
The next instructions assume that you've already created the resource group, the Azure
virtual network, and the three Azure virtual network subnets: client , storage and hana .
When you deploy the VMs, select the client subnet, so that the client network interface
is the primary interface on the VMs. You will also need to configure an explicit route to
the Azure NetApp Files delegated subnet via the storage subnet gateway.
) Important
Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM
types you're using. For a list of SAP HANA certified VM types and OS releases for
those types, go to the SAP HANA certified IaaS platforms site. Click into the
details of the listed VM type to get the complete list of SAP HANA-supported OS
releases for that type.
1. Create an availability set for SAP HANA. Make sure to set the max update domain.
a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA.
b. Select the availability set that you created earlier for SAP HANA.
c. Select the client Azure virtual network subnet. Select Accelerated Network.
When you deploy the virtual machines, the network interface names are automatically
generated. In these instructions, for simplicity, we refer to the automatically
generated network interfaces attached to the client Azure virtual network subnet
as hanadb1-client, hanadb2-client, and hanadb3-client.
3. Create three network interfaces, one for each virtual machine, for the storage
virtual network subnet (in this example, hanadb1-storage, hanadb2-storage, and
hanadb3-storage).
4. Create three network interfaces, one for each virtual machine, for the hana virtual
network subnet (in this example, hanadb1-hana, hanadb2-hana, and hanadb3-
hana).
5. Attach the newly created virtual network interfaces to the corresponding virtual
machines by doing the following steps:
a. Go to the virtual machine in the Azure portal .
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hanadb1), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces
for the storage and hana subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hanadb2 and hanadb3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.
6. Enable accelerated networking for the additional network interfaces for the
storage and hana subnets by doing the following steps:
Bash
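The commands for this step can be sketched with the Azure CLI. The resource group name hana-rg below is a placeholder for your own resource group; the NIC names follow the naming used in this example:

```bash
# Enable accelerated networking on the additional NICs (the VMs must be deallocated)
az network nic update --resource-group hana-rg --name hanadb1-storage --accelerated-networking true
az network nic update --resource-group hana-rg --name hanadb1-hana --accelerated-networking true
# Repeat for the NICs of hanadb2 and hanadb3, then start the virtual machines
az vm start --resource-group hana-rg --name hanadb1
az vm start --resource-group hana-rg --name hanadb2
az vm start --resource-group hana-rg --name hanadb3
```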
1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries were added to /etc/hosts for this example.
Bash
# Storage
10.23.2.4 hanadb1-storage
10.23.2.5 hanadb2-storage
10.23.2.6 hanadb3-storage
# Client
10.23.0.5 hanadb1
10.23.0.6 hanadb2
10.23.0.7 hanadb3
# Hana
10.23.3.4 hanadb1-hana
10.23.3.5 hanadb2-hana
10.23.3.6 hanadb3-hana
2. [A] Change DHCP and cloud config settings for the network interface for storage
to avoid unintended hostname changes.
The following instructions assume that the storage network interface is eth1 .
Bash
vi /etc/sysconfig/network/dhcp
# Change the following DHCP setting to "no"
DHCLIENT_SET_HOSTNAME="no"
vi /etc/sysconfig/network/ifcfg-eth1
# Edit ifcfg-eth1
#Change CLOUD_NETCONFIG_MANAGE='yes' to "no"
CLOUD_NETCONFIG_MANAGE='no'
3. [A] Add a network route, so that the communication to the Azure NetApp Files
goes via the storage network interface.
The following instructions assume that the storage network interface is eth1 .
Bash
vi /etc/sysconfig/network/ifroute-eth1
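As an illustration, assuming the Azure NetApp Files delegated subnet is 10.23.1.0/26 and the storage subnet gateway is 10.23.2.1 (both placeholder values for this example), the route file might contain:

```bash
# ifroute-eth1: send traffic for the ANF delegated subnet via the storage subnet gateway
10.23.1.0/26  10.23.2.1  -  -
```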
4. [A] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.
Bash
vi /etc/sysctl.d/91-NetApp-HANA.conf
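A sketch of the configuration file, based on the kernel settings recommended in SAP note 3024346; verify the current values against the note before applying them:

```bash
# NetApp-recommended kernel settings for SAP HANA on NFS (verify against SAP note 3024346)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
```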
Bash
vi /etc/sysctl.d/ms-az.conf
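A sketch of the Azure-specific settings commonly placed in this file; treat the values as assumptions and confirm them against current Microsoft guidance:

```bash
# Azure-recommended settings (confirm against current Microsoft guidance)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness = 10
```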
6. [A] Adjust the sunrpc settings for NFSv3 volumes, as recommended in SAP note
3024346 - Linux Kernel Settings for NetApp NFS .
Bash
vi /etc/modprobe.d/sunrpc.conf
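Per SAP note 3024346, this file pins the maximum number of sunrpc slot table entries; a sketch:

```bash
# Set the maximum number of sunrpc slot table entries for NFSv3
options sunrpc tcp_max_slot_table_entries=128
```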
Bash
mkdir -p /hana/data/HN1/mnt00001
mkdir -p /hana/data/HN1/mnt00002
mkdir -p /hana/log/HN1/mnt00001
mkdir -p /hana/log/HN1/mnt00002
mkdir -p /hana/shared
mkdir -p /usr/sap/HN1
Bash
# if using NFSv3 for this volume, mount with the following command
mount 10.23.1.4:/HN1-shared /mnt/tmp
# if using NFSv4.1 for this volume, mount with the following command
mount -t nfs -o sec=sys,nfsvers=4.1 10.23.1.4:/HN1-shared /mnt/tmp
cd /mnt/tmp
mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3
# unmount /hana/shared
cd
umount /mnt/tmp
3. [A] Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, i.e. defaultv4iddomain.com and the mapping is
set to nobody.
) Important
If there's a mismatch between the NFS domain configuration on the NFS client
(that is, the VM) and the NFS server, that is, the Azure NetApp Files
configuration, then the permissions for files on Azure NetApp Files
volumes that are mounted on the VMs will be displayed as nobody .
Bash
# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
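If the parameter is set to N, you can set nfs4_disable_idmapping to Y as follows; the volume path matches this example's /hana/shared volume:

```bash
# Set nfs4_disable_idmapping to Y; mount a volume once so the parameter can be changed
mkdir /mnt/tmp
mount 10.23.1.4:/HN1-shared /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
```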
5. [A] Create the SAP HANA group and user manually. The IDs for group sapsys and
user hn1adm must be set to the same IDs, which are provided during the
onboarding. (In this example, the IDs are set to 1001.) If the IDs aren't set correctly,
you won't be able to access the volumes. The IDs for group sapsys and user
accounts hn1adm and sapadm must be the same on all virtual machines.
Bash
# Create the sapsys group with the GID provided during onboarding
sudo groupadd -g 1001 sapsys
# Create users
sudo useradd hn1adm -u 1001 -g 1001 -d /usr/sap/HN1/home -c "SAP HANA Database System" -s /bin/sh
sudo useradd sapadm -u 1002 -g 1001 -d /home/sapadm -c "SAP Local Administrator" -s /bin/sh
# Set the password for both user ids
sudo passwd hn1adm
sudo passwd sapadm
Bash
sudo vi /etc/fstab
For workloads that require higher throughput, consider using the nconnect mount
option, as described in NFS v4.1 volumes on Azure NetApp Files for SAP HANA.
Check whether nconnect is supported by Azure NetApp Files on your Linux release.
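For illustration, /etc/fstab entries for the HANA volumes over NFSv4.1 might look like the following; the volume IP address and export paths are assumptions based on this example's naming:

```bash
# NFSv4.1 mounts for the HANA volumes (addresses and export paths are examples)
10.23.1.4:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
10.23.1.4:/HN1-shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
```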
Bash
sudo vi /etc/fstab
Bash
sudo vi /etc/fstab
Bash
sudo vi /etc/fstab
10. [A] Verify that all HANA volumes are mounted with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
Installation
In this example for deploying SAP HANA in a scale-out configuration with a standby
node on Azure, we used HANA 2.0 SP4.
2. [1] Verify that you can log in via SSH to hanadb2 and hanadb3, without being
prompted for a password.
Bash
ssh root@hanadb2
ssh root@hanadb3
3. [A] Install additional packages, which are required for HANA 2.0 SP4. For more
information, see SAP Note 2593824 .
Bash
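A sketch of the package installation on SLES; confirm the exact package list against SAP Note 2593824 for your release:

```bash
# Install runtime libraries required by HANA 2.0 SP4 (see SAP Note 2593824)
sudo zypper install libgcc_s1 libstdc++6 libatomic1
```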
4. [2], [3] Change ownership of SAP HANA data and log directories to hn1adm.
Bash
# Execute as root
sudo chown hn1adm:sapsys /hana/data/HN1
sudo chown hn1adm:sapsys /hana/log/HN1
HANA installation
1. [1] Install SAP HANA by following the instructions in the SAP HANA 2.0 Installation
and Update guide . In this example, we install SAP HANA scale-out with master,
one worker, and one standby node.
a. Start the hdblcm program from the HANA installation software directory. Use
the internal_network parameter and pass the address space for subnet, which
is used for the internal HANA inter-node communication.
Bash
./hdblcm --internal_network=10.23.3.0/24
Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the hana subnet, and listeninterface should be set to
.internal . Verify the internal_hostname_resolution section. It should have the IP
addresses for the HANA virtual machines that belong to the hana subnet.
Bash
# Example
#global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve
[communication]
internal_network = 10.23.3/24
listeninterface = .internal
[internal_hostname_resolution]
10.23.3.4 = hanadb1
10.23.3.5 = hanadb2
10.23.3.6 = hanadb3
3. [1] Add host mapping to ensure that the client IP addresses are used for client
communication. Add section public_host_resolution , and add the corresponding
IP addresses from the client subnet.
Bash
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
Bash
5. [1] Verify that the client interface uses the IP addresses from the client
subnet for communication.
Bash
# Expected result
"hanadb3","net_publicname","10.23.0.7"
"hanadb2","net_publicname","10.23.0.6"
"hanadb1","net_publicname","10.23.0.5"
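The check can be performed with hdbsql; the password below is a placeholder:

```bash
# Execute as hn1adm; list the public host names used for client communication
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "<password>" -i 03 -d SYSTEMDB \
  'select * from SYS.M_HOST_INFORMATION' | grep net_publicname
```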
For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .
6. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the
following SAP HANA parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For more information, see I/O stack configuration for SAP HANA .
Starting with SAP HANA 2.0 systems, you can set the parameters in global.ini .
For more information, see SAP Note 1999930 .
For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set
during the installation, as described in SAP Note 2267798 .
7. The storage that's used by Azure NetApp Files has a file size limitation of 16
terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it
won't automatically create a new data file when the file size limit of 16 TB is
reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will
result in errors and, eventually, in an index server crash.
) Important
To prevent SAP HANA from trying to grow data files beyond the 16-TB limit of
the storage subsystem, set the following parameters in global.ini .
datavolume_striping = true
datavolume_striping_size_gb = 15000
For more information, see SAP Note 2400005 . Be aware of SAP Note 2631285 .
7 Note
This article contains references to the terms master and slave, terms that Microsoft
no longer uses. When these terms are removed from the software, we’ll remove
them from this article.
a. Before you simulate the node crash, run the following commands as hn1adm to
capture the status of the environment:
Bash
b. To simulate a node crash, run the following command as root on the worker
node, which is hanadb2 in this case:
Bash
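One way to simulate a node crash is to trigger an immediate reboot through the sysrq interface. This is destructive and should only be run on a test system:

```bash
# Run as root on the worker node (hanadb2); reboots the VM immediately
echo b > /proc/sysrq-trigger
```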
c. Monitor the system for failover completion. When the failover has been
completed, capture the status, which should look like the following:
Bash
) Important
When a node experiences a kernel panic, avoid delays in SAP HANA
failover by setting kernel.panic to 20 seconds on all HANA virtual
machines. The configuration is done in /etc/sysctl.conf . Reboot the virtual
machines to activate the change. If this change isn't performed, failover can
take 10 minutes or more when a node experiences a kernel panic.
a. Prior to the test, check the status of the environment by running the following
commands as hn1adm:
Bash
#Landscape status
python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py

| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hanadb1 | yes    | ok     |          |        | 1         | 1         | default  | default  | master 1   | master     | worker      | master      | worker  | worker  | default | default |
| hanadb2 | yes    | ok     |          |        | 2         | 2         | default  | default  | master 2   | slave      | worker      | slave       | worker  | worker  | default | default |
| hanadb3 | no     | ignore |          |        | 0         | 0         | default  | default  | master 3   | slave      | standby     | standby     | standby | standby | default | -       |
b. Run the following commands as hn1adm on the active master node, which is
hanadb1 in this case:
Bash
The standby node hanadb3 will take over as master node. Here is the resource
state after the failover test is completed:
Bash
c. Restart the HANA instance on hanadb1 (that is, on the same virtual machine
where the name server was killed). The hanadb1 node will rejoin the
environment and will keep its standby role.
Bash
After SAP HANA has started on hanadb1, expect the following status:
Bash
d. Again, kill the name server on the currently active master node (that is, on node
hanadb3).
Bash
Node hanadb1 will resume the role of master node. After the failover test has
been completed, the status will look like this:
Bash
e. Start SAP HANA on hanadb3, which will be ready to serve as a standby node.
Bash
After SAP HANA has started on hanadb3, the status looks like the following:
Bash
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability for SAP HANA scale-
out system with HSR on SUSE Linux
Enterprise Server
Article • 01/17/2024
This article describes how to deploy a highly available SAP HANA system in a scale-out
configuration with HANA system replication (HSR) and Pacemaker on Azure SUSE Linux
Enterprise Server virtual machines (VMs). The shared file systems in the presented
architecture are NFS mounted and are provided by Azure NetApp Files or NFS share on
Azure Files.
In the example configurations, installation commands, and so on, the HANA instance is
03 and the HANA system ID is HN1.
Before you begin, refer to the following SAP notes and papers:
Overview
One method to achieve HANA high availability for HANA scale-out installations is to
configure HANA system replication and protect the solution with a Pacemaker cluster to
allow automatic failover. When an active node fails, the cluster fails over the HANA
resources to the other site.
The presented configuration shows three HANA nodes on each site, plus a majority maker
node to prevent a split-brain scenario. The instructions can be adapted to include more
VMs as HANA DB nodes.
The HANA shared file system /hana/shared in the presented architecture can be
provided by Azure NetApp Files or NFS share on Azure Files. The HANA shared file
system is NFS mounted on each HANA node in the same HANA system replication site.
File systems /hana/data and /hana/log are local file systems and aren't shared between
the HANA DB nodes. SAP HANA will be installed in non-shared mode.
For recommended SAP HANA storage configurations, see SAP HANA Azure VMs storage
configurations.
) Important
If you deploy all HANA file systems on Azure NetApp Files, for production systems
where performance is key, we recommend that you evaluate and consider using the Azure
NetApp Files application volume group for SAP HANA.
2 Warning
Deploying /hana/data and /hana/log on NFS on Azure Files is not supported.
In the preceding diagram, three subnets are represented within one Azure virtual
network, following the SAP HANA network recommendations:
As /hana/data and /hana/log are deployed on local disks, it isn't necessary to deploy a
separate subnet and separate virtual network cards for communication to the storage.
If you're using Azure NetApp Files, the NFS volumes for /hana/shared , are deployed in a
separate subnet, delegated to Azure NetApp Files: anf 10.23.1.0/26.
For the configuration presented in this document, deploy seven virtual machines:
three virtual machines to serve as HANA DB nodes for HANA replication site
1: hana-s1-db1, hana-s1-db2 and hana-s1-db3
three virtual machines to serve as HANA DB nodes for HANA replication site
2: hana-s2-db1, hana-s2-db2 and hana-s2-db3
a small virtual machine to serve as majority maker: hana-s-mm
The VMs deployed as SAP HANA DB nodes should be certified by SAP for HANA,
as published in the SAP HANA Hardware directory . When deploying the HANA
DB nodes, make sure that Accelerated Networking is selected.
For the majority maker node, you can deploy a small VM, as this VM doesn't run
any of the SAP HANA resources. The majority maker VM is used in the cluster
configuration to achieve an odd number of cluster nodes in a split-brain scenario. The
majority maker VM only needs one virtual network interface, in the client subnet
in this example.
Deploy local managed disks for /hana/data and /hana/log . The minimum
recommended storage configuration for /hana/data and /hana/log is described in
SAP HANA Azure VMs storage configurations.
Deploy the primary network interface for each VM in the client virtual network
subnet.
When a VM is deployed via the Azure portal, the network interface name is
automatically generated. In these instructions, for simplicity, we refer to the
automatically generated primary network interfaces, which are attached to the
client Azure virtual network subnet, as hana-s1-db1-client, hana-s1-db2-client,
) Important
Make sure that the OS you select is SAP-certified for SAP HANA on the
specific VM types you're using. For a list of SAP HANA certified VM types
and OS releases for those types, go to the SAP HANA certified IaaS
platforms site. Click into the details of the listed VM type to get the
complete list of SAP HANA-supported OS releases for that type.
If you choose to deploy /hana/shared on NFS on Azure Files, we
recommend deploying on SLES 15 SP2 or later.
2. Create six network interfaces, one for each HANA DB virtual machine, in the inter
virtual network subnet (in this example, hana-s1-db1-inter, hana-s1-db2-inter,
hana-s1-db3-inter, hana-s2-db1-inter, hana-s2-db2-inter, and hana-s2-db3-
inter).
3. Create six network interfaces, one for each HANA DB virtual machine, in the hsr
virtual network subnet (in this example, hana-s1-db1-hsr, hana-s1-db2-hsr, hana-
s1-db3-hsr, hana-s2-db1-hsr, hana-s2-db2-hsr, and hana-s2-db3-hsr).
4. Attach the newly created virtual network interfaces to the corresponding virtual
machines:
a. Go to the virtual machine in the Azure portal .
b. In the left pane, select Virtual Machines. Filter on the virtual machine name (for
example, hana-s1-db1), and then select the virtual machine.
c. In the Overview pane, select Stop to deallocate the virtual machine.
d. Select Networking, and then attach the network interface. In the Attach
network interface drop-down list, select the already created network interfaces
for the inter and hsr subnets.
e. Select Save.
f. Repeat steps b through e for the remaining virtual machines (in our example,
hana-s1-db2, hana-s1-db3, hana-s2-db1, hana-s2-db2 and hana-s2-db3).
g. Leave the virtual machines in stopped state for now. Next, we'll enable
accelerated networking for all newly attached network interfaces.
5. Enable accelerated networking for the additional network interfaces for the inter
and hsr subnets by doing the following steps:
Bash
7 Note
For HANA scale out, select the NIC for the client subnet when adding the
virtual machines in the backend pool.
The full set of commands in Azure CLI and PowerShell adds the VMs with the
primary NIC in the backend pool.
Azure Portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
7 Note
) Important
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Do not enable TCP timestamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set
parameter net.ipv4.tcp_timestamps to 0 . For details see Load Balancer
health probes and SAP note 2382421 .
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune version to 3.1.1 or higher. For more
details, see saptune 3.1.1 – Do I Need to Update? .
Deploy NFS
There are two options for deploying Azure native NFS for /hana/shared . You can deploy
an NFS volume on Azure NetApp Files or an NFS share on Azure Files. Azure Files
supports the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and
NFSv3. The next sections describe the steps to deploy NFS - you'll need to select only
one of the options.
Tip
You can choose to deploy /hana/shared on an NFS share on Azure Files or on an NFS
volume on Azure NetApp Files.
Deploy ANF volumes for the /hana/shared file system. You'll need a separate
/hana/shared volume for each HANA system replication site. For more information, see
In this example, the following Azure NetApp Files volumes were used:
volume HN1-shared-s1 (nfs://10.23.1.7/HN1-shared-s1)
volume HN1-shared-s2 (nfs://10.23.1.7/HN1-shared-s2)
Deploy Azure Files NFS shares for the /hana/shared file system. You'll need a separate
/hana/shared Azure Files NFS share for each HANA system replication site. For more
In this example, the following Azure Files NFS shares were used:
1. [A] Maintain the host files on the virtual machines. Include entries for all subnets.
The following entries were added to /etc/hosts for this example.
Bash
# Client subnet
10.23.0.19 hana-s1-db1
10.23.0.20 hana-s1-db2
10.23.0.21 hana-s1-db3
10.23.0.22 hana-s2-db1
10.23.0.23 hana-s2-db2
10.23.0.24 hana-s2-db3
10.23.0.25 hana-s-mm
# Internode subnet
10.23.1.132 hana-s1-db1-inter
10.23.1.133 hana-s1-db2-inter
10.23.1.134 hana-s1-db3-inter
10.23.1.135 hana-s2-db1-inter
10.23.1.136 hana-s2-db2-inter
10.23.1.137 hana-s2-db3-inter
# HSR subnet
10.23.1.196 hana-s1-db1-hsr
10.23.1.197 hana-s1-db2-hsr
10.23.1.198 hana-s1-db3-hsr
10.23.1.199 hana-s2-db1-hsr
10.23.1.200 hana-s2-db2-hsr
10.23.1.201 hana-s2-db3-hsr
Bash
vi /etc/sysctl.d/ms-az.conf
Tip
3. [A] SUSE delivers special resource agents for SAP HANA, and by default the agents
for SAP HANA scale-up are installed. Uninstall the scale-up packages, if they're
installed, and install the packages for the SAP HANA scale-out scenario. This step
needs to be performed on all cluster VMs, including the majority maker.
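A sketch of the package swap on SLES; the package names are the SUSE-delivered resource agent packages for HANA scale-up and scale-out:

```bash
# Remove the scale-up resource agent packages, if installed
sudo zypper remove SAPHanaSR SAPHanaSR-doc
# Install the scale-out resource agent packages
sudo zypper install SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc
```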
7 Note
4. [AH] Prepare the VMs - apply the recommended settings per SAP note 2205917
for SUSE Linux Enterprise Server for SAP Applications.
1. [AH] Prepare the OS for running SAP HANA on NetApp Systems with NFS, as
described in SAP note 3024346 - Linux Kernel Settings for NetApp NFS . Create
configuration file /etc/sysctl.d/91-NetApp-HANA.conf for the NetApp configuration
settings.
Bash
vi /etc/sysctl.d/91-NetApp-HANA.conf
Bash
vi /etc/modprobe.d/sunrpc.conf
Bash
mkdir -p /hana/shared
4. [AH] Verify the NFS domain setting. Make sure that the domain is configured as
the default Azure NetApp Files domain, that is, defaultv4iddomain.com and the
mapping is set to nobody.
This step is only needed if you're using Azure NetApp Files NFSv4.1.
) Important
Bash
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y
mkdir /mnt/tmp
mount 10.23.1.7:/HN1-share-s1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
6. [AH1] Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entry
10.23.1.7:/HN1-shared-s1 /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount all volumes
sudo mount -a
7. [AH2] Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entry
10.23.1.7:/HN1-shared-s2 /hana/shared nfs
rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_n
etdev,sec=sys 0 0
# Mount the volume
sudo mount -a
8. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
# Verify that flag vers is set to 4.1
# Example from SITE 1, hana-s1-db1
/hana/shared from 10.23.1.7:/HN1-shared-s1
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr
=10.23.1.7
# Example from SITE 2, hana-s2-db1
/hana/shared from 10.23.1.7:/HN1-shared-s2
Flags:
rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp
,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr
=10.23.1.7
Bash
mkdir -p /hana/shared
2. [AH1] Mount the shared Azure Files NFS shares on the SITE1 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entry
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount all volumes
sudo mount -a
3. [AH2] Mount the shared Azure Files NFS shares on the SITE2 HANA DB VMs.
Bash
sudo vi /etc/fstab
# Add the following entries
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared
nfs nfsvers=4.1,sec=sys 0 0
# Mount the volume
sudo mount -a
4. [AH] Verify that the corresponding /hana/shared/ file systems are mounted on all
HANA DB VMs with NFS protocol version NFSv4.1.
Bash
sudo nfsstat -m
# Example from SITE 1, hana-s1-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,a
ddr=10.23.0.35
# Example from SITE 2, hana-s2-db1
sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
Flags:
rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=
tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,a
ddr=10.23.0.35
Set up the disk layout with Logical Volume Manager (LVM). The following example
assumes that each HANA virtual machine has three data disks attached, which are used
to create two volumes.
Bash
ls /dev/disk/azure/scsi1/lun*
Example output:
Bash
/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
/dev/disk/azure/scsi1/lun2
2. [AH] Create physical volumes for all of the disks that you want to use:
Bash
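A sketch using the device paths listed in the previous step:

```bash
# Create LVM physical volumes on the attached data disks
sudo pvcreate /dev/disk/azure/scsi1/lun0
sudo pvcreate /dev/disk/azure/scsi1/lun1
sudo pvcreate /dev/disk/azure/scsi1/lun2
```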
Bash
A linear volume is created when you use lvcreate without the -i switch. We
suggest that you create a striped volume for better I/O performance, and align the
stripe sizes to the values documented in SAP HANA VM storage configurations.
The -i argument should be the number of the underlying physical volumes and
the -I argument is the stripe size. In this document, two physical volumes are
used for the data volume, so the -i switch argument is set to 2. The stripe size for
the data volume is 256 KiB. One physical volume is used for the log volume, so no
-i or -I switches are explicitly used for the log volume commands.
) Important
Use the -i switch and set it to the number of underlying physical volumes
when you use more than one physical volume for the data or log volume.
Use the -I switch to specify the stripe size when creating a striped volume.
See SAP HANA VM storage configurations for recommended storage
configurations, including stripe sizes and the number of disks.
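Putting the guidance above together, a sketch of the volume group and logical volume creation for this example: two physical volumes striped for data, one linear volume for log. The volume and group names match the fstab entries used later; treat the sizing as an assumption:

```bash
# Volume groups: two disks for data, one disk for log
sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1
sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2
# Striped data volume (-i 2 physical volumes, -I 256 KiB stripe size) and linear log volume
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
# Create XFS file systems on the logical volumes
sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
```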
Bash
5. [AH] Create the mount directories and copy the UUID of all of the logical volumes:
Bash
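A sketch of the directory creation and UUID lookup:

```bash
# Create the mount points
sudo mkdir -p /hana/data/HN1
sudo mkdir -p /hana/log/HN1
# Note the UUIDs of the hana_data and hana_log logical volumes for the fstab entries
sudo blkid /dev/mapper/vg_hana_data_HN1-hana_data /dev/mapper/vg_hana_log_HN1-hana_log
```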
6. [AH] Create fstab entries for the logical volumes and mount:
Bash
sudo vi /etc/fstab
Bash
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_data_HN1-hana_data
/hana/data/HN1 xfs defaults,nofail 0 2
/dev/disk/by-uuid/UUID of /dev/mapper/vg_hana_log_HN1-hana_log
/hana/log/HN1 xfs defaults,nofail 0 2
Bash
sudo mount -a
) Important
Installation
In this example for deploying SAP HANA in scale-out configuration with HSR on Azure
VMs, we've used HANA 2.0 SP5.
Prepare for HANA installation
1. [AH] Before the HANA installation, set the root password. You can disable the root
password after the installation has been completed. Run the passwd command
as root.
Bash
3. [1] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s1-db2
and hana-s1-db3, without being prompted for a password. If that isn't the case,
exchange ssh keys as described in Enable SSH Access via Public Key .
Bash
ssh root@hana-s1-db2
ssh root@hana-s1-db3
4. [2] Verify that you can log in via SSH to the HANA DB VMs in this site hana-s2-db2
and hana-s2-db3, without being prompted for a password.
If that isn't the case, exchange ssh keys.
Bash
ssh root@hana-s2-db2
ssh root@hana-s2-db3
5. [AH] Install additional packages, which are required for HANA 2.0 SP4 and above.
For more information, see SAP Note 2593824 for your SLES version.
Bash
Bash
./hdblcm --internal_network=10.23.1.128/26
Display global.ini, and ensure that the configuration for the internal SAP HANA
inter-node communication is in place. Verify the communication section. It should
have the address space for the inter subnet, and listeninterface should be set
to .internal . Verify the internal_hostname_resolution section. It should have the
IP addresses for the HANA virtual machines that belong to the inter subnet.
Bash
Bash
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
[persistence]
basepath_shared = no
Bash
6. [1,2] Verify that the client interface uses the IP addresses from the client
subnet for communication.
Bash
# Execute as hn1adm
/usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB
'select * from SYS.M_HOST_INFORMATION'|grep net_publicname
# Expected result - example from SITE 2
"hana-s2-db1","net_publicname","10.23.0.22"
For information about how to verify the configuration, see SAP Note 2183363 -
Configuration of SAP HANA internal network .
7. [AH] Change permissions on the data and log directories to avoid HANA
installation error.
Bash
8. [1] Install the secondary HANA nodes. The example instructions in this step are for
SITE 1.
Bash
cd /hana/shared/HN1/hdblcm
./hdblcm
9. [2] Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.
Bash
Bash
Bash
Register the second site to start the system replication. Run the following
command as <hanasid>adm:
Bash
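A sketch of the registration command, run as hn1adm on the primary node of the second site; the site name HANA_S2 and the sync replication mode are assumptions consistent with this example:

```bash
# Register SITE 2 against SITE 1 for HANA system replication (site name is an example)
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 \
  --replicationMode=sync --name=HANA_S2
```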
Check the replication status and wait until all databases are in sync.
Bash
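The replication status can be checked, for example, with the systemReplicationStatus.py support script, run as hn1adm:

```bash
# Check system replication status; repeat until all services report they are in sync
python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py
```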
4. [1,2] Change the HANA configuration so that communication for HANA system
replication is directed through the HANA system replication virtual network
interfaces.
Bash
Edit global.ini to add the host mapping for HANA system replication: use the
IP addresses from the hsr subnet.
Bash
sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
#Add the section
[system_replication_hostname_resolution]
10.23.1.196 = hana-s1-db1
10.23.1.197 = hana-s1-db2
10.23.1.198 = hana-s1-db3
10.23.1.199 = hana-s2-db1
10.23.1.200 = hana-s2-db2
10.23.1.201 = hana-s2-db3
Bash
For more information, see Host Name resolution for System Replication .
1. [1] Place pacemaker in maintenance mode, in preparation for the creation of the
HANA cluster resources.
Bash
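On SLES, maintenance mode can be enabled with crm:

```bash
# Place the Pacemaker cluster in maintenance mode
sudo crm configure property maintenance-mode=true
```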
2. [1,2] Create the directory on the NFS mounted file system /hana/shared, which will
be used in the special file system monitoring resource. The directories need to be
created on both sites.
Bash
mkdir -p /hana/shared/HN1/check
3. [AH] Create the directory, which will be used to mount the special file system
monitoring resource. The directory needs to be created on all HANA cluster nodes.
Bash
mkdir -p /hana/check
Bash
The OCF_CHECK_LEVEL=20 attribute is added to the monitor operation, so that monitor
operations perform a read/write test on the file system. Without this attribute, the
monitor operation only verifies that the file system is mounted. This can be a
problem because when connectivity is lost, the file system may remain mounted,
despite being inaccessible.
The on-fail=fence attribute is also added to the monitor operation. With this option,
if the monitor operation fails on a node, that node is immediately fenced.
7 Note
The steps provided for the SAPHanaSrMultiTarget hook are for a new installation.
Upgrading an existing environment from SAPHanaSR to the SAPHanaSrMultiTarget provider
requires several changes and is NOT described in this document. If the existing
environment uses no third site for disaster recovery and HANA multi-target system
replication isn't used, the SAPHanaSR HA provider can remain in use.
SUSE SLES 15 SP1 or higher is required for operation of both HANA HA hooks.
The following table shows other dependencies.
ノ Expand table
SAP HANA HA hook HANA version required SAPHanaSR-ScaleOut required
Bash
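A sketch of the global.ini block for the SAPHanaSrMultiTarget hook, mirroring the susChkSrv block shown in the next step; the execution_order value is an assumption:

```bash
[ha_dr_provider_saphanasrmultitarget]
provider = SAPHanaSrMultiTarget
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 1
```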
2. [1,2] Adjust global.ini on each cluster site. If the prerequisites for the
susChkSrv hook aren't met, the entire [ha_dr_provider_suschksrv] block shouldn't
be configured. You can adjust the behavior of susChkSrv with the action_on_lost
parameter. Valid values are [ ignore | stop | kill | fence ] .
Bash
[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 3
action_on_lost = kill
[trace]
ha_dr_saphanasrmultitarget = info
3. [AH] The cluster requires sudoers configuration on the cluster nodes for
<sid>adm. In this example, that is achieved by creating a new file. Execute the
commands as root, adapting the value hn1 to your correct lowercase SID.
Bash
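A hedged sketch of such a sudoers drop-in file (written to the current directory here for illustration; the real file would live under /etc/sudoers.d, and the exact crm_attribute argument patterns the hook needs are version-specific, so treat the rule below as an assumption and consult the SAPHanaSR-ScaleOut documentation):

```shell
# Illustrative only: allow hn1adm to run the crm_attribute calls issued by
# the SAPHanaSrMultiTarget hook. The argument pattern is an assumption.
cat << 'EOF' > ./20-saphana
# Adapt hn1 to your lowercase SID
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_*
EOF
grep -c 'NOPASSWD' ./20-saphana
```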
Bash
5. [A] Verify the hook installation is active on all cluster nodes. Execute as <sid>adm.
Bash
cdtrace
grep HADR.*load.*SAPHanaSrMultiTarget nameserver_*.trc | tail -3
# Example output
# nameserver_hana-s1-db1.31001.000.trc:[14162]{-1}[-1/-1] 2023-01-26
12:53:55.728027 i ha_dr_provider HADRProviderManager.cpp(00083) :
loading HA/DR Provider 'SAPHanaSrMultiTarget' from
/usr/share/SAPHanaSR-ScaleOut/
grep SAPHanaSr.*init nameserver_*.trc | tail -3
# Example output
# nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26
16:30:19.256705 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00080) :
SAPHanaSrMultiTarget.init() CALLING CRM: <sudo /usr/sbin/crm_attribute
-n hana_hn1_gsh -v 2.2 -l reboot> rc=0
# nameserver_hana-s1-db1.31001.000.trc:[17636]{-1}[-1/-1] 2023-01-26
16:30:19.256739 i ha_dr_SAPHanaSrM SAPHanaSrMultiTarget.py(00081) :
SAPHanaSrMultiTarget.init() Running srHookGeneration 2.2, see attribute
hana_hn1_gsh too
Bash
cdtrace
egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc
# Example output
# 2023-01-19 08:23:10.581529 [1674116590-10005] susChkSrv.init()
version 0.7.7, parameter info: action_on_lost=fence stop_timeout=20
kill_signal=9
# 2023-01-19 08:23:31.553566 [1674116611-14022] START: indexserver
event looks like graceful tenant start
# 2023-01-19 08:23:52.834813 [1674116632-15235] START: indexserver
event looks like graceful tenant start (indexserver started)
3. [1] Place the cluster out of maintenance mode. Make sure that the cluster status is
ok and that all of the resources are started.
Bash
4. [1] Verify the communication between the HANA HA hook and the cluster. The output
should show status SOK for the SID and both replication sites with status P(rimary) or S(econdary).
Bash
sudo /usr/sbin/SAPHanaSR-showAttr
# Expected result
# Global cib-time maintenance prim sec sync_state upd
# ---------------------------------------------------------------------
# HN1 Fri Jan 27 10:38:46 2023 false HANA_S1 - SOK ok
#
# Sites lpt lss mns srHook srr
# -----------------------------------------------
# HANA_S1 1674815869 4 hana-s1-db1 PRIM P
# HANA_S2 30 4 hana-s2-db1 SWAIT S
Note
The timeouts in the above configuration are just examples and may need to
be adapted to the specific HANA setup. For instance, you may need to
increase the start timeout, if it takes longer to start the SAP HANA database.
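The SOK check can also be scripted. A small sketch that pulls the global sync_state out of captured SAPHanaSR-showAttr output (on a cluster node you would pipe the live command output instead of the sample variable):

```shell
# Extract the global sync_state column from SAPHanaSR-showAttr output.
# SOK means system replication is in sync; SFAIL indicates a problem.
sample='Global cib-time                 maintenance prim    sec sync_state upd
---------------------------------------------------------------------
HN1    Fri Jan 27 10:38:46 2023 false HANA_S1 - SOK ok'
sync_state=$(printf '%s\n' "$sample" | awk '$1 == "HN1" {print $(NF-1)}')
echo "sync_state=$sync_state"
```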
Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we’ll remove them from this article.
1. Before you start a test, check the cluster and SAP HANA system replication status.
Bash
Bash
3. Verify the cluster configuration for a failure scenario, when a node loses access to
the NFS share ( /hana/shared ).
Expected result: When you block access to the /hana/shared NFS-mounted file
system on one of the primary site VMs, the monitoring operation that performs a
read/write test on the file system fails, because it can't access the file
system, and triggers a HANA resource failover. The same result is expected when
your HANA node loses access to the NFS share.
You can check the state of the cluster resources by executing crm_mon or crm
status . Resource state before starting the test:
Bash
# Output of crm_mon
#7 nodes configured
#24 resource instances configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
#
#stonith-sbd (stonith:external/sbd): Started hana-s-mm
# Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hana-s1-db1 ]
# Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-
s2-db3 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-
s2-db1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-
s2-db1
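When scripting this check, the promoted node can be read from the crm_mon text. A sketch over a captured sample (replace the sample with live crm_mon -1 output on a cluster node):

```shell
# Find the node that currently holds the HANA primary: the "Masters" entry
# of the master/slave set in crm_mon-style output.
sample='Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
     Masters: [ hana-s1-db1 ]
     Slaves: [ hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]'
primary=$(printf '%s\n' "$sample" | awk '$1 == "Masters:" {print $3}')
echo "primary=$primary"
```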
If using NFS on ANF, first confirm the IP address for the /hana/shared ANF
volume on the primary site. You can do that by running df -kh|grep
/hana/shared .
If using NFS on Azure Files, first determine the IP address of the private end
point for your storage account.
Then, set up a temporary firewall rule to block access to the IP address of the
/hana/shared NFS file system by executing the following command on one of the
primary site VMs.
Bash
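The blocking rule is typically an iptables DROP for the NFS endpoint. The sketch below only composes the command; the IP address is a placeholder, so substitute the address determined above and run the rule as root on the VM under test:

```shell
# Compose (but do not execute) a temporary firewall rule that blocks
# traffic to the /hana/shared NFS endpoint. ANF_IP is a placeholder value.
ANF_IP="10.23.1.250"
block_cmd="iptables -A INPUT -s ${ANF_IP} -j DROP; iptables -A OUTPUT -d ${ANF_IP} -j DROP"
echo "run as root on the VM under test: $block_cmd"
```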
The cluster resources will be migrated to the other HANA system replication site.
Bash
Bash
# Output of crm_mon
#7 nodes configured
#24 resource instances configured
#
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1
hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
#
#stonith-sbd (stonith:external/sbd): Started hana-s-mm
# Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Clone Set: cln_SAPHanaTopology_HN1_HDB03
[rsc_SAPHanaTopology_HN1_HDB03]
# Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-
s2-db2 hana-s2-db3 ]
# Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
# Masters: [ hana-s2-db1 ]
# Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-
s2-db3 ]
# Resource Group: g_ip_HN1_HDB03
# rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-
s2-db1
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-
s2-db1
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability of IBM Db2 LUW on
Azure VMs on Red Hat Enterprise Linux
Server
Article • 01/19/2024
IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery
(HADR) configuration consists of one node that runs a primary database instance and
at least one node that runs a secondary database instance. Changes to the primary
database instance are replicated to a secondary database instance synchronously or
asynchronously, depending on your configuration.
Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs),
install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP
software installation. To help you accomplish these tasks, we provide references to SAP
and IBM installation manuals. This article focuses on parts that are specific to the Azure
environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note
1928533 .
Before you begin an installation, see the following SAP notes and documentation:
2233094 DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows -
additional information
Documentation
SAP Community Wiki : Has all of the required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux guide
Azure Virtual Machines database management system (DBMS) deployment for SAP on Linux guide
Overview of the High Availability Add-On for Red Hat Enterprise Linux 7
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster
Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on
Microsoft Azure
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
Support Policy for RHEL High Availability Clusters - Management of IBM Db2 for Linux, Unix, and
Windows in a Cluster
Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure
virtual machines, which are deployed in a virtual machine scale set with flexible
orchestration across availability zones or in an availability set.
The following graphics display a setup of two database server Azure VMs. Both database
server Azure VMs have their own storage attached and are up and running. In HADR,
one database instance in one of the Azure VMs has the role of the primary instance. All
clients are connected to the primary instance. All changes in database transactions are
persisted locally in the Db2 transaction log. As the transaction log records are persisted
locally, the records are transferred via TCP/IP to the database instance on the second
database server, the standby server, or standby instance. The standby instance updates
the local database by rolling forward the transferred transaction log records. In this way,
the standby server is kept in sync with the primary server.
To have SAP application servers connect to the primary database, you need a virtual host
name and a virtual IP address. After a failover, the SAP application servers connect to the
new primary database instance. In an Azure environment, an Azure load balancer is
required to use a virtual IP address in the way that's required for HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a
highly available SAP system setup, the following image presents an overview of a highly
available setup of an SAP system based on IBM Db2 database. This article covers only
IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
Define Azure resource groups: Resource groups where you deploy the VMs, virtual network, Azure
Load Balancer, and other resources. Can be existing or new.
Virtual network / Subnet definition: Where the VMs for IBM Db2 and Azure Load Balancer are
deployed. Can be existing or newly created.
Virtual host name and virtual IP for IBM Db2 database: The virtual IP or host name used for
connection of SAP application servers. db-virt-hostname, db-virt-ip.
Azure Load Balancer: Usage of Standard (recommended), probe port for the Db2 database
(our recommendation 62500), probe-port.
Name resolution: How name resolution works in the environment. A DNS service is
highly recommended. A local hosts file can be used.
For more information about Linux Pacemaker in Azure, see Setting up Pacemaker on
Red Hat Enterprise Linux in Azure.
Important
For Db2 versions 11.5.6 and higher, we highly recommend the integrated solution using
Pacemaker from IBM.
Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of
supported OS versions for Azure VMs and Db2 releases is available in SAP note
1928533 . The list of OS releases by individual Db2 release is available in the SAP
Product Availability Matrix. We highly recommend a minimum of Red Hat Enterprise
Linux 7.4 for SAP because of Azure-related performance improvements in this or later
Red Hat Enterprise Linux versions.
Azure documentation.
SAP documentation.
IBM documentation.
Links to this documentation are provided in the introductory section of this article.
You can reduce the number of guides displayed in the portal by setting the following
filters:
Bash
Write down the "Database Communication port" that's set during installation. It
must be the same port number for both database instances.
Note
Specific to IBM Db2 with HADR configuration with normal startup: The secondary
or standby database instance must be up and running before you can start the
primary database instance.
Note
For installation and configuration that's specific to Azure and Pacemaker: During
the installation procedure through SAP Software Provisioning Manager, there is an
explicit question about high availability for IBM Db2 LUW:
To set up the Standby database server by using the SAP homogeneous system copy
procedure, execute these steps:
1. Select the System copy option > Target systems > Distributed > Database
instance.
2. As a copy method, select Homogeneous System so that you can use backup to
restore a backup on the standby server instance.
3. When you reach the exit step to restore the database for homogeneous system
copy, exit the installer. Restore the database from a backup of the primary host. All
subsequent installation phases have already been executed on the primary
database server.
Add firewall rules to allow traffic to DB2 and between DB2 for HADR to work:
Bash
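A sketch of such rules with firewalld; the port numbers are placeholders (use the database communication port noted during installation and your HADR service ports), and the commands are only composed here, not executed:

```shell
# Compose firewalld rules for the Db2 communication and HADR ports.
# 5912 and 51012 are placeholder ports - adapt them to your installation.
cmds=""
for p in 5912 51012; do
  cmds="${cmds}firewall-cmd --add-port=${p}/tcp --permanent; "
done
cmds="${cmds}firewall-cmd --reload"
echo "run as root on both database VMs: $cmds"
```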
After you've configured HADR and the status is PEER and CONNECTED on the primary
and standby nodes, perform the following check:
Bash
#Primary output:
Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date
2019-06-25-10.55.25.349375
HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.076494
(1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 5
HEARTBEAT_EXPECTED = 52
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 5
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 369280
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 132242668
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 300
PEER_WINDOW_END = 06/25/2019 11:12:03.000000
(1561461123)
READS_ON_STANDBY_ENABLED = N
#Secondary output:
Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date
2019-06-25-10.56.19.820474
HADR_ROLE = STANDBY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 0
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS =
PRIMARY_MEMBER_HOST = az-idb01
PRIMARY_INSTANCE = db2id2
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = az-idb02
STANDBY_INSTANCE = db2id2
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.078116
(1561460105)
HEARTBEAT_INTERVAL(seconds) = 7
HEARTBEAT_MISSED = 0
HEARTBEAT_EXPECTED = 10
HADR_TIMEOUT(seconds) = 30
TIME_SINCE_LAST_RECV(seconds) = 1
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 598.000027
LOG_HADR_WAIT_ACCUMULATED(seconds) = 598.000
LOG_HADR_WAIT_COUNT = 1
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 367360
PRIMARY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
HADR_LOG_GAP(bytes) = 0
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000012.LOG, 14151, 3685322855
STANDBY_RECV_REPLAY_GAP(bytes) = 0
PRIMARY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_REPLAY_LOG_TIME = 06/25/2019 10:45:42.000000
(1561459542)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 1000
PEER_WINDOW_END = 06/25/2019 11:12:59.000000
(1561461179)
READS_ON_STANDBY_ENABLED = N
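The healthy state shown above (HADR_STATE = PEER, HADR_CONNECT_STATUS = CONNECTED) can be verified in a script. A sketch over a captured sample; on the database server the text would come from Db2's HADR monitoring (for example db2pd -hadr -db <DBSID> run as db2<sid>, but verify the exact command for your Db2 version):

```shell
# Check for the healthy HADR state in captured monitoring output.
sample='HADR_ROLE = PRIMARY
HADR_STATE = PEER
HADR_CONNECT_STATUS = CONNECTED'
state=$(printf '%s\n' "$sample" | awk -F' = ' '$1 == "HADR_STATE" {print $2}')
conn=$(printf '%s\n' "$sample" | awk -F' = ' '$1 == "HADR_CONNECT_STATUS" {print $2}')
echo "state=$state connection=$conn"
```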
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load balancer,
consider the following points.
Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
Bash
sudo firewall-cmd --add-port=<probe-port>/tcp --permanent
sudo firewall-cmd --reload
Shut down the database on both database servers as user db2<sid> with db2stop.
Bash
Pacemaker configuration
1. [1] IBM Db2 HADR-specific Pacemaker configuration:
Bash
Bash
# Replace the values shown here with your instance name (db2id2), database
# SID (ID2), and virtual IP address/Azure Load Balancer group name.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' master meta notify=true resource-stickiness=5000
# Create colocation constraint - keep the Db2 HADR master and the IP group on the same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-master
Bash
# Replace the values shown here with your instance name (db2id2), database
# SID (ID2), and virtual IP address/Azure Load Balancer group name.
sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' promotable meta notify=true resource-stickiness=5000
# Configure resource stickiness and correct cluster notifications for the promotable resource
sudo pcs resource update Db2_HADR_ID2-clone meta notify=true resource-stickiness=5000
# Create colocation constraint - keep the Db2 HADR master and the IP group on the same node
sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-clone
Bash
4. [1] Make sure that the cluster status is OK and that all of the resources are started.
It's not important which node the resources are running on.
Bash
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
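To script the verification in this step, the promoted node can be read from the pcs status output. A sketch over a captured sample (the wording follows the RHEL 7.x "Masters" style; newer pcs releases print "Promoted" instead):

```shell
# Determine which node runs the promoted Db2 HADR resource from
# pcs-status-style output (RHEL 7.x "Masters" wording).
sample='Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
     Masters: [ az-idb01 ]
     Slaves: [ az-idb02 ]'
primary=$(printf '%s\n' "$sample" | awk '$1 == "Masters:" {print $3}')
echo "primary=$primary"
```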
Important
You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools.
If you use db2 commands such as db2stop, Pacemaker detects the action as a
failure of resource. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker suspends monitoring resources, and
you can then use normal db2 administration commands.
/sapmnt/<SID>/profile/DEFAULT.PFL
Bash
SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
/sapmnt/<SID>/global/db6/db2cli.ini
Bash
Hostname=db-virt-hostname
1. Sign in to the primary application server of the J2EE instance and execute:
Bash
sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
4. Change the host name in the JDBC URL to the virtual host name.
Bash
jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
5. Select Add.
6. To save your changes, select the disk icon at the upper left.
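The URL edited in step 4 is the standard Db2 JDBC form with the virtual host name substituted for the physical one; composed explicitly with the values from the example above:

```shell
# Compose the Db2 JDBC URL from its parts; only the host name changes
# when pointing SAP at the clustered database's virtual host name.
DB_HOST="db-virt-hostname"
DB_PORT=5912
DB_NAME="TSP"
JDBC_URL="jdbc:db2://${DB_HOST}:${DB_PORT}/${DB_NAME}:deferPrepares=0"
echo "$JDBC_URL"
```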
The log archiving is performed only by the primary database. If you change the HADR
roles of the database servers or if a failure occurs, the new primary database is
responsible for log archiving. If you've set up multiple log archive locations, your logs
might be archived twice. In the event of a local or remote catch-up, you might also have
to manually copy the archived logs from the old primary server to the active log location
of the new primary server.
We recommend configuring a common NFS share or GlusterFS, where logs are written
from both nodes. The NFS share or GlusterFS has to be highly available.
You can use existing highly available NFS shares or GlusterFS for transports or a profile
directory. For more information, see:
GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver.
High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux
with Azure NetApp Files for SAP Applications.
Azure NetApp Files (to create NFS shares).
The initial status for all test cases is explained here: (crm_mon -r or pcs status)
Bash
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
The original status in an SAP system is documented in Transaction DBACOCKPIT >
Configuration > Overview, as shown in the following image:
Important
The IBM Db2 HADR synchronization is working. Check with user db2<sid>.
Bash
Migrate the node that's running the primary Db2 database by executing the following
command:
Bash
# On RHEL 7.x
sudo pcs resource move Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource move Db2_HADR_ID2-clone --master
After the migration is done, the crm status output looks like:
Bash
2 nodes configured
5 resources configured
Remove the location constraint, and the standby node will be started on az-idb01.
Bash
# On RHEL 7.x
sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource clear Db2_HADR_ID2-clone
Bash
2 nodes configured
5 resources configured
Bash
# On RHEL 7.x
sudo pcs resource move Db2_HADR_ID2-master az-idb01
sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
sudo pcs resource move Db2_HADR_ID2-clone --master
sudo pcs resource clear Db2_HADR_ID2-clone
Bash
status on az-idb02
Bash
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
After the failover, you can start the service again on az-idb01.
Bash
Kill the Db2 process on the node that runs the HADR
primary database
Bash
The Db2 instance is going to fail, and Pacemaker will move the master resource and report
the following status:
Bash
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=49,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms
Pacemaker restarts the Db2 primary database instance on the same node, or it fails over
to the node that's running the secondary database instance and an error is reported.
Kill the Db2 process on the node that runs the secondary
database instance
Bash
Bash
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_monitor_20000 on az-idb02 'not running' (7): call=144,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms
The Db2 instance is restarted in the secondary role it previously had assigned.
Bash
Failure detected:
Bash
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
The Db2 HADR secondary database instance got promoted into the primary role.
Bash
2 nodes configured
5 resources configured
Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110,
status=complete, exitreason='none',
last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
In such a case, Pacemaker detects that the node that's running the primary database
instance isn't responding.
Bash
2 nodes configured
5 resources configured
The next step is to check for a split-brain situation. After the surviving node has
determined that the node that last ran the primary database instance is down, a failover
of resources is executed.
Bash
2 nodes configured
5 resources configured
Online: [ az-idb02 ]
OFFLINE: [ az-idb01 ]
In the event of a kernel panic, the failed node is restarted by the fencing agent. After
the failed node is back online, you must start the Pacemaker cluster with:
Bash
Bash
2 nodes configured
5 resources configured
Next steps
High-availability architecture and scenarios for SAP NetWeaver
Setting up Pacemaker on Red Hat Enterprise Linux in Azure
High availability of IBM Db2 LUW on
Azure VMs on SUSE Linux Enterprise
Server with Pacemaker
Article • 01/19/2024
IBM Db2 for Linux, UNIX, and Windows (LUW) in high availability and disaster recovery
(HADR) configuration consists of one node that runs a primary database instance and
at least one node that runs a secondary database instance. Changes to the primary
database instance are replicated to a secondary database instance synchronously or
asynchronously, depending on your configuration.
Note
This article contains references to terms that Microsoft no longer uses. When these
terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs),
install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP
software installation. To help you accomplish these tasks, we provide references to SAP
and IBM installation manuals. This article focuses on parts that are specific to the Azure
environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note
1928533 .
Before you begin an installation, see the following SAP notes and documentation:
2233094 DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows -
additional information
Documentation
SAP Community Wiki : Has all of the required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux guide
Azure Virtual Machines database management system (DBMS) deployment for SAP on Linux guide
SUSE Linux Enterprise Server for SAP Applications 12 SP4 best practices guides
IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
Overview
To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure
virtual machines, which are deployed in a virtual machine scale set with flexible
orchestration across availability zones or in an availability set.
The following graphics display a setup of two database server Azure VMs. Both database
server Azure VMs have their own storage attached and are up and running. In HADR,
one database instance in one of the Azure VMs has the role of the primary instance. All
clients are connected to this primary instance. All changes in database transactions are
persisted locally in the Db2 transaction log. As the transaction log records are persisted
locally, the records are transferred via TCP/IP to the database instance on the second
database server, the standby server, or standby instance. The standby instance updates
the local database by rolling forward the transferred transaction log records. In this way,
the standby server is kept in sync with the primary server.
To have SAP application servers connect to the primary database, you need a virtual host
name and a virtual IP address. After a failover, the SAP application servers connect to the
new primary database instance. In an Azure environment, an Azure load balancer is
required to use a virtual IP address in the way that's required for HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a
highly available SAP system setup, the following image presents an overview of a highly
available setup of an SAP system based on IBM Db2 database. This article covers only
IBM Db2, but it provides references to other articles about how to set up other
components of an SAP system.
High-level overview of the required steps
To deploy an IBM Db2 configuration, you need to follow these steps:
Define Azure resource groups: Resource groups where you deploy the VMs, virtual network, Azure
Load Balancer, and other resources. Can be existing or new.
Virtual network / Subnet definition: Where the VMs for IBM Db2 and Azure Load Balancer are
deployed. Can be existing or newly created.
Virtual host name and virtual IP for IBM Db2 database: The virtual IP or host name that's used for
connection of SAP application servers. db-virt-hostname, db-virt-ip.
Azure Load Balancer: Usage of Standard (recommended), probe port for the Db2 database
(our recommendation 62500), probe-port.
Name resolution: How name resolution works in the environment. A DNS service is
highly recommended. A local hosts file can be used.
For more information about Linux Pacemaker in Azure, see Set up Pacemaker on SUSE
Linux Enterprise Server in Azure.
Important
For Db2 versions 11.5.6 and higher, we highly recommend the integrated solution using
Pacemaker from IBM.
Manual deployment
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of
supported OS versions for Azure VMs and Db2 releases is available in SAP note
1928533 . The list of OS releases by individual Db2 release is available in the SAP
Product Availability Matrix. We highly recommend a minimum of SLES 12 SP4 because
of Azure-related performance improvements in this or later SUSE Linux versions.
Azure documentation
SAP documentation
IBM documentation
Links to this documentation are provided in the introductory section of this article.
You can find the guides on the SAP Help portal by using the SAP Installation Guide
Finder .
You can reduce the number of guides displayed in the portal by setting the following
filters:
Write down the "Database Communication port" that's set during installation. It
must be the same port number for both database instances.
To set up the Standby database server by using the SAP homogeneous system copy
procedure, execute these steps:
1. Select the System copy option > Target systems > Distributed > Database
instance.
2. As a copy method, select Homogeneous System so that you can use backup to
restore a backup on the standby server instance.
3. When you reach the exit step to restore the database for homogeneous system
copy, exit the installer. Restore the database from a backup of the primary host. All
subsequent installation phases have already been executed on the primary
database server.
Note
When you use an SBD device for Linux Pacemaker, set the following Db2 HADR
parameters:
When you use an Azure Pacemaker fencing agent, set the following parameters:
Important
Specific to IBM Db2 with HADR configuration with normal startup: The secondary
or standby database instance must be up and running before you can start the
primary database instance.
For demonstration purposes and the procedures described in this article, the database
SID is PTR.
Bash
#Primary output:
# Database Member 0 -- Database PTR -- Active -- Up 1 days 01:51:38 -- Date
2019-02-06-15.35.28.505451
#
# HADR_ROLE = PRIMARY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 1
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.170561
(1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6137
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 13
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000025
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223713
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 374400
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15571, 27902548040
# STANDBY_RECV_REPLAY_GAP(bytes) = 0
# PRIMARY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:34:39.000000
(1549467279)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:40:25.000000
(1549467625)
# READS_ON_STANDBY_ENABLED = N
#Secondary output:
# Database Member 0 -- Database PTR -- Standby -- Up 1 days 01:46:43 -- Date
2019-02-06-15.38.25.644168
#
# HADR_ROLE = STANDBY
# REPLAY_TYPE = PHYSICAL
# HADR_SYNCMODE = NEARSYNC
# STANDBY_ID = 0
# LOG_STREAM_ID = 0
# HADR_STATE = PEER
# HADR_FLAGS = TCP_PROTOCOL
# PRIMARY_MEMBER_HOST = azibmdb02
# PRIMARY_INSTANCE = db2ptr
# PRIMARY_MEMBER = 0
# STANDBY_MEMBER_HOST = azibmdb01
# STANDBY_INSTANCE = db2ptr
# STANDBY_MEMBER = 0
# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.205067
(1549374707)
# HEARTBEAT_INTERVAL(seconds) = 15
# HEARTBEAT_MISSED = 0
# HEARTBEAT_EXPECTED = 6186
# HADR_TIMEOUT(seconds) = 60
# TIME_SINCE_LAST_RECV(seconds) = 5
# PEER_WAIT_LIMIT(seconds) = 0
# LOG_HADR_WAIT_CUR(seconds) = 0.000
# LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.000023
# LOG_HADR_WAIT_ACCUMULATED(seconds) = 434.595
# LOG_HADR_WAIT_COUNT = 223725
# SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
# SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 372480
# PRIMARY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# HADR_LOG_GAP(bytes) = 0
# STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000280.LOG, 15574, 27902562173
# STANDBY_RECV_REPLAY_GAP(bytes) = 155
# PRIMARY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_REPLAY_LOG_TIME = 02/06/2019 15:37:34.000000
(1549467454)
# STANDBY_RECV_BUF_SIZE(pages) = 2048
# STANDBY_RECV_BUF_PERCENT = 0
# STANDBY_SPOOL_LIMIT(pages) = 0
# STANDBY_SPOOL_PERCENT = NULL
# STANDBY_ERROR_TIME = NULL
# PEER_WINDOW(seconds) = 300
# PEER_WINDOW_END = 02/06/2019 15:43:19.000000
(1549467799)
# READS_ON_STANDBY_ENABLED = N
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) instance of Standard Azure Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for VMs using Azure Standard Load
Balancer in SAP high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
Shut down Db2 on both database servers as user db2<sid> by running db2stop.
Change the login shell of the db2<sid> user to /bin/ksh. We recommend that
you use the YaST tool.
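A hedged sketch of these two steps, run as root on both servers (the instance user db2ptr matches this article's example SID):

```shell
# Stop Db2 as the instance user:
su - db2ptr -c "db2stop"
# Change the login shell to /bin/ksh (alternatively, use YaST):
usermod --shell /bin/ksh db2ptr
```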
Pacemaker configuration
Important
Recent testing revealed situations where netcat stops responding to requests
because of a backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load Balancer requests, and the floating IP
becomes unavailable. For existing Pacemaker clusters, we previously recommended
replacing netcat with socat. Currently, we recommend using the azure-lb resource
agent, which is part of the resource-agents package, with the following package
version requirements:
Bash
Bash
Bash
4. [1] Make sure that the cluster status is OK and that all of the resources are started.
It's not important which node the resources are running on.
Bash
# 2 nodes configured
# 5 resources configured
Important
You must manage the Pacemaker-clustered Db2 instance by using Pacemaker tools.
If you use db2 commands such as db2stop, Pacemaker detects the action as a
resource failure. If you're performing maintenance, you can put the nodes or
resources in maintenance mode. Pacemaker then suspends resource monitoring, and
you can use normal db2 administration commands.
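For example, on SLES the whole cluster can be put into maintenance mode with crm (a hedged sketch; resource-level maintenance is also possible):

```shell
# Suspend Pacemaker's monitoring of all resources:
crm configure property maintenance-mode=true
# ... perform db2 administration (db2stop, db2start, and so on) ...
# Hand control back to Pacemaker:
crm configure property maintenance-mode=false
```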
/sapmnt/<SID>/profile/DEFAULT.PFL
Bash
SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
/sapmnt/<SID>/global/db6/db2cli.ini
Bash
Hostname=db-virt-hostname
If you performed the installation before you created the Db2 HADR configuration, make
the changes as described in the preceding section and as follows for SAP Java stacks.
1. Sign in to the primary application server of the J2EE instance and execute:
Bash
sudo /usr/sap/<SID>/<Instance>/j2ee/configtool/configtool.sh
4. Change the host name in the JDBC URL to the virtual host name.
TEXT
jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
5. Select Add.
6. To save your changes, select the disk icon at the upper left.
The log archiving is performed only by the primary database. If you change the HADR
roles of the database servers or if a failure occurs, the new primary database is
responsible for log archiving. If you've set up multiple log archive locations, your logs
might be archived twice. In the event of a local or remote catch-up, you might also have
to manually copy the archived logs from the old primary server to the active log location
of the new primary server.
We recommend configuring a common NFS share where logs are written from both
nodes. The NFS share has to be highly available.
You can use existing highly available NFS shares for transports or a profile directory. For
more information, see:
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server.
High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server
with Azure NetApp Files for SAP Applications.
Azure NetApp Files (to create NFS shares).
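With such a share mounted on both nodes (the path /mnt/db2logs below is a hypothetical example), the archive destination can be pointed at it; a hedged sketch, run as the db2<sid> user:

```shell
# Point log archiving at the shared, highly available NFS location so that
# whichever node holds the primary role archives to the same place:
db2 update db cfg for PTR using LOGARCHMETH1 "DISK:/mnt/db2logs/"
```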
The initial status for all test cases is shown here (check with crm_mon -r or crm status):
Bash
2 nodes configured
5 resources configured
Important
The IBM Db2 HADR synchronization is working. Check with user db2<sid>
Bash
Migrate the node that's running the primary Db2 database by executing the following
command:
Bash
After the migration is done, the crm status output looks like:
Bash
2 nodes configured
5 resources configured
Online: [ azibmdb01 azibmdb02 ]
Resource migration with crm resource migrate creates location constraints, which
should be deleted afterward. If the location constraints aren't deleted, the resource
can't fail back, or you might experience unwanted takeovers.
Migrate the resource back to azibmdb01 and clear the location constraints
Bash
crm resource migrate <res_name> <host>: Creates location constraints and can
cause issues with takeover
crm resource clear <res_name>: Clears location constraints
crm resource cleanup <res_name>: Clears all errors of the resource
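Putting these together, a typical sequence looks like the following sketch (the resource name follows the naming visible in the status output in this article; substitute your own):

```shell
# Move the Db2 primary role back to azibmdb01:
crm resource migrate msl_Db2_db2ptr_PTR azibmdb01
# Remove the location constraint that the migration created:
crm resource clear msl_Db2_db2ptr_PTR
```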
Bash
Reboot cluster node azibmdb01. The IBM Db2 primary HADR role moves to
azibmdb02. When azibmdb01 is back online, the Db2 instance takes on the role of a
secondary database instance.
If the Pacemaker service doesn't start automatically on the rebooted former primary, be
sure to start it manually with:
Bash
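On a systemd-based SLES release, the service can typically be started like this (a sketch; your distribution's service name may differ):

```shell
sudo systemctl start pacemaker
# Then confirm that the node rejoined the cluster:
sudo crm status
```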
Bash
Check the cluster status on azibmdb02:
Bash
2 nodes configured
5 resources configured
Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]
After the failover, you can start the service again on azibmdb01.
Bash
Kill the Db2 process on the node that runs the HADR
primary database
Bash
The Db2 instance will fail, and Pacemaker will report the following status:
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
Pacemaker restarts the Db2 primary database instance on the same node, or it fails over
to the node that's running the secondary database instance and an error is reported.
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
Kill the Db2 process on the node that runs the secondary
database instance
Bash
azibmdb02:~ # kill -9
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
The Db2 instance is restarted in the secondary role that it had before.
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
2 nodes configured
5 resources configured
Bash
azibmdb01:~ # su - db2ptr
azibmdb01:db2ptr> db2stop force
Failure detected
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=201,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms
The Db2 HADR secondary database instance was promoted to the primary role.
Bash
2 nodes configured
5 resources configured
Failed Actions:
* rsc_Db2_db2ptr_PTR_start_0 on azibmdb01 'unknown error' (1): call=205,
status=complete, exitreason='',
last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms
Pacemaker promotes the secondary instance to the primary instance role. The old
primary instance moves into the secondary role after the VM reboots and all services
are fully restored.
Bash
2 nodes configured
5 resources configured
In such a case, Pacemaker detects that the node that's running the primary database
instance isn't responding.
Bash
2 nodes configured
5 resources configured
The next step is to check for a split-brain situation. After the surviving node
determines that the node that last ran the primary database instance is down, a
failover of resources is executed.
Bash
2 nodes configured
5 resources configured
Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]
Bash
2 nodes configured
5 resources configured
Next steps
High-availability architecture and scenarios for SAP NetWeaver
Set up Pacemaker on SUSE Linux Enterprise Server in Azure
High availability for SAP NetWeaver on
VMs on RHEL with NFS on Azure Files
Article • 02/05/2024
This article describes how to deploy and configure virtual machines (VMs), install the
cluster framework, and install a high-availability (HA) SAP NetWeaver system by using
NFS on Azure Files. The example configurations use VMs that run on Red Hat Enterprise
Linux (RHEL).
Prerequisites
Read the following documentation and SAP Notes first:
Azure Files documentation
SAP Note 1928533 , which has:
A list of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
7.x.
SAP Note 2772999 has recommended OS settings for Red Hat Enterprise Linux
8.x.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in Pacemaker cluster
General RHEL documentation:
High Availability Add-On Overview
High Availability Add-On Administration
High Availability Add-On Reference
Configuring ASCS/ERS for SAP NetWeaver with Standalone Resources in RHEL
7.5
Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2)
in Pacemaker on RHEL
Azure-specific RHEL documentation:
Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure
Overview
To deploy the SAP NetWeaver application layer, you need shared directories like
/sapmnt/SID and /usr/sap/trans in the environment. Additionally, when you deploy an
HA SAP system, you need to protect and make highly available file systems like
/sapmnt/SID and /usr/sap/SID/ASCS .
Now you can place these file systems on NFS on Azure Files. NFS on Azure Files is an HA
storage solution. This solution offers synchronous zone-redundant storage (ZRS) and is
suitable for SAP ASCS/ERS instances deployed across availability zones. You still need a
Pacemaker cluster to protect single point of failure components like SAP NetWeaver
central services (ASCS/SCS).
The example configurations and installation commands use the following instance
numbers:
Instance name    Instance number
ASCS             00
ERS              01
Deploy VMs for SAP ASCS, ERS and Application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.
Configure Azure load balancer
During VM configuration, you have the option to create or select an existing load
balancer in the networking section. Follow the steps below to configure a standard
load balancer for the high-availability setup of SAP ASCS and SAP ERS.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load
balancer, consider the following points.
1. Frontend IP configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load-balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load-balancing rules.
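The same setup can be sketched with the Azure CLI. All names here (MyResourceGroup, lb-sap, and so on) are placeholders, and only the ASCS side is shown; repeat the frontend IP, probe, and rule for ERS:

```shell
# Frontend IP for ASCS in the same virtual network/subnet as the VMs:
az network lb frontend-ip create -g MyResourceGroup --lb-name lb-sap \
  -n frontend-ascs --vnet-name vnet-sap --subnet subnet-sap
# One backend pool for both the ASCS and ERS VMs:
az network lb address-pool create -g MyResourceGroup --lb-name lb-sap -n bepool-sap
# Health probe on port 620<instance number>, here 62000 for ASCS instance 00:
az network lb probe create -g MyResourceGroup --lb-name lb-sap \
  -n probe-ascs --protocol Tcp --port 62000
# HA-ports rule with floating IP and extended idle timeout:
az network lb rule create -g MyResourceGroup --lb-name lb-sap -n rule-ascs \
  --frontend-ip-name frontend-ascs --backend-pool-name bepool-sap \
  --probe-name probe-ascs --protocol All --frontend-port 0 --backend-port 0 \
  --floating-ip true --idle-timeout 30
```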
Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard instance of Load Balancer, there's no
outbound internet connectivity unless more configuration is performed to allow
routing to public endpoints. For more information on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Load Balancer. Enabling
TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
Locally redundant storage (LRS), which offers local, in-zone synchronous data
replication.
Zone-redundant storage (ZRS), which replicates your data synchronously across
the three availability zones in the region.
Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region under Premium
Files Storage. If your scenario benefits from ZRS, verify that premium file shares with
ZRS are supported in your Azure region.
We recommend that you access your Azure Storage account through an Azure private
endpoint. Make sure to deploy the Azure Files storage account endpoint and the VMs,
where you need to mount the NFS shares, in the same Azure virtual network or peered
Azure virtual networks.
1. Deploy an Azure Files storage account named sapafsnfs . In this example, we use
ZRS. If you're not familiar with the process, see Create a storage account for the
Azure portal.
3. Select Next.
4. On the Advanced tab, clear Require secure transfer for REST API Operations. If
you don't clear this option, you can't mount the NFS share to your VM. The mount
operation will time out.
5. Select Next.
7. On the Create private endpoint pane, select your Subscription, Resource group,
and Location. For Name, enter sapafsnfs_pe . For Storage sub-resource, select file.
Under Networking, for Virtual network, select the virtual network and subnet to
use. Again, you can use the virtual network where your SAP VMs are or a peered
virtual network. Under Private DNS integration, accept the default option Yes for
Integrate with private DNS zone. Make sure to select your Private DNS Zone.
Select OK.
11. Wait for the validation to finish. Fix any issues before you continue.
Important
The preceding share size is only an example. Make sure to size your shares
appropriately. Size is based not only on the amount of data stored on the share
but also on the requirements for IOPS and throughput. For more information, see
Azure file share targets.
The SAP file systems that don't need to be mounted via NFS can also be deployed on
Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.
The minimum share size is 100 GiB. You only pay for the capacity of the
provisioned shares.
Size your NFS shares not only based on capacity requirements but also on IOPS
and throughput requirements. For more information, see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues with NFS
on Azure Files, see Troubleshoot Azure file share performance.
For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on NFS on
Azure Files.
If your SAP system has a heavy batch job load, you might have millions of job
logs. If the SAP batch job logs are stored in the file system, pay special attention to
the sizing of the sapmnt share. As of SAP_BASIS 7.52, the default behavior is to
store the batch job logs in the database. For more information, see Job log in
the database .
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces or saptrans .
Don't use the saptrans share for any other activity, such as interfaces or sapmnt .
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also storage account performance scale targets. Be careful not
to exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than five SAP systems in a single
storage account. This guideline helps avoid exceeding the storage account limits
and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for nonproduction and production SAP
systems in the same storage account.
We recommend that you deploy on RHEL 8.4 or higher to benefit from NFS client
improvements.
Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions
automatically redirect to a healthy zone. You don't have to remount the NFS shares
on your VMs.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Files doesn't currently support automatic cross-region replication for
disaster recovery scenarios.
Set up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
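For example (the addresses mirror the values used later in this article; adjust them to your environment):

```shell
# Append the cluster nodes and the load balancer frontend virtual hostnames:
sudo tee -a /etc/hosts <<'EOF'
10.90.90.7   sap-cl1
10.90.90.8   sap-cl2
10.90.90.10  sapascs
10.90.90.9   sapers
EOF
```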
Bash
Bash
Bash
Bash
Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7 .
Bash
Bash
vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs
noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1
nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/
/usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a
Bash
sudo vi /etc/waagent.conf
Bash
Configure RHEL as described in SAP Note 2002167 for RHEL 7.x, SAP Note
2772999 for RHEL 8.x, or SAP Note 3108316 for RHEL 9.x.
Bash
2. [1] Create a virtual IP resource and health probe for the ASCS instance.
Bash
sudo pcs node standby sap-cl2
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, sapascs and 10.90.90.10, and the instance number that
you used for the probe of the load balancer, for example, 00.
# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
4. [1] Create a virtual IP resource and health probe for the ERS instance.
Bash
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, sapers and 10.90.90.9, and the instance number that you
used for the probe of the load balancer, for example, 01.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
ASCS/SCS profile:
Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .
ERS profile:
Bash
sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using
ENSA1. Change the Linux system keepalive settings on all SAP servers for both
ENSA1 and ENSA2. For more information, see SAP Note 1410736 .
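A hedged sketch of these settings (verify the exact values against SAP Note 1410736; the profile parameter applies to ENSA1 only and is shown as a comment because it belongs in the instance profile, not a shell script):

```shell
# ENSA1 only: in the ASCS/SCS instance profile, enable keepalive on the
# enqueue connection:
#   enque/encni/set_so_keepalive = true
# ENSA1 and ENSA2: Linux keepalive setting on all SAP servers
# (300 seconds is the commonly documented value; confirm against the note):
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
```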
Bash
To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash
sudo vi /usr/sap/sapservices
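As a sketch, the commenting-out can be scripted with sed. The instance names NW1/ASCS00 and NW1/ERS01 follow this article's examples; the snippet demonstrates the edit on a scratch copy so it's safe to run, but on the cluster nodes the target is /usr/sap/sapservices itself (edit as root and keep a backup):

```shell
# Demonstration on a scratch copy of /usr/sap/sapservices.
f=/tmp/sapservices.demo
cat > "$f" <<'EOF'
LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_sapascs -D -u nw1adm
LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ERS01_sapers -D -u nw1adm
EOF
# Comment out every line that starts a Pacemaker-managed instance (ASCS00, ERS01):
sed -i -E 's/^(.*(ASCS00|ERS01).*)$/# \1/' "$f"
cat "$f"
```

Verify afterward with grep that only the clustered instances are commented out.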
Important
With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of a SAP kernel with integrated systemd based SAP Startup
Framework support will always result in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel which
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one which is systemd controlled.
When you use Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications are necessary to ensure compatibility between the
SAPInstance resource agent and the new systemd-based SAP startup
framework. So after the SAP application server instances have been installed or
switched to a systemd-enabled SAP kernel as per SAP Note 3115048 , the
steps mentioned in Red Hat KBA 6884531 must be completed successfully
on all cluster nodes.
ENSA1
Bash
If you're upgrading from an older version and switching to enqueue server 2, see
SAP Note 2641322 .
Note
The timeouts in the preceding configuration are only examples and might
need to be adapted to the specific SAP setup.
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
10. [1] Run the following step to configure priority-fencing-delay (applicable only
for pacemaker-2.0.4-6.el8 or later).
Note
If you have a two-node cluster, you have the option to configure the
priority-fencing-delay cluster property. This property introduces additional
delay in fencing a node that has higher total resource priority when a split-
brain scenario occurs. For more information, see Can Pacemaker fence the
cluster node with the fewest running resources? .
The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8
version or higher. If you set up priority-fencing-delay on an existing cluster,
make sure to clear the pcmk_delay_max setting in the fencing device.
Bash
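A hedged sketch of this configuration; the fencing and SAP resource names are placeholders modeled on this article's examples:

```shell
# Clear any fixed delay on the fencing device before using priority fencing:
sudo pcs resource update rsc_st_azure pcmk_delay_max=0s
# Give all resources a base priority, and the ASCS resource a higher one:
sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10
# Delay fencing of the node with the higher total resource priority:
sudo pcs property set priority-fencing-delay=15s
```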
11. [A] Add firewall rules for ASCS and ERS on both nodes.
Bash
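A hedged sketch using firewall-cmd; the ports follow the standard SAP port numbering for the instance numbers used in this article (00 for ASCS, 01 for ERS) and should be checked against your installation:

```shell
# ASCS instance 00: message server, dispatcher, gateway, and sapstartsrv ports
sudo firewall-cmd --zone=public --add-port={3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
# ERS instance 01: enqueue replication and sapstartsrv ports
sudo firewall-cmd --zone=public --add-port={3201,50113,50114,50116}/tcp --permanent
sudo firewall-cmd --reload
```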
The following steps assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring
hostname resolution) aren't needed.
1. [A] Set up hostname resolution. You can either use a DNS server or modify the
/etc/hosts file on all nodes. This example shows how to use the /etc/hosts file.
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
Bash
10.90.90.7 sap-cl1
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP
Netweaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP
Netweaver ERS
10.90.90.9 sapers
10.90.90.12 sapa01
10.90.90.13 sapa02
Bash
Bash
Bash
vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs
noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1
nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
# Mount the file systems
mount -a
Bash
sudo vi /etc/waagent.conf
Bash
Install the SAP NetWeaver database instance as root by using a virtual hostname that
maps to the IP address of the load balancer front-end configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a nonroot user
to connect to sapinst .
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
SAP NetWeaver application server installation
Follow these steps to install an SAP application server.
Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
Bash
hdbuserstore List
Bash
KEY DEFAULT
ENV : 10.90.90.5:30313
USER: SAPABAP1
DATABASE: NW1
In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual hostname of the load balancer.
Make sure to use the same port and database name. For example, use 30313 and
NW1 in the sample output.
Bash
su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP
schema>
Next steps
To deploy a cost-optimization scenario where the PAS and AAS instances are
deployed with an SAP NetWeaver HA cluster on RHEL, see Install SAP dialog instance
with SAP ASCS/SCS high-availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High availability of SAP HANA on Azure Virtual Machines.
Azure Virtual Machines HA for SAP
NetWeaver on RHEL with Azure NetApp
Files for SAP applications
Article • 01/19/2024
This article describes how to deploy virtual machines (VMs), configure the VMs, install
the cluster framework, and install a highly available SAP NetWeaver 7.50 system by
using Azure NetApp Files. In the example configurations and installation commands, the
ASCS instance is number 00, the ERS instance is number 01, the Primary Application
instance (PAS) is 02, and the Application instance (AAS) is 03. The SAP System ID QAS is
used.
Prerequisites
Read the following SAP Notes and papers first:
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage.
Previously, to achieve HA on Red Hat Linux, you had to build a separate highly
available GlusterFS cluster.
Now it's possible to achieve SAP NetWeaver HA by using shared storage deployed on
Azure NetApp Files. Using Azure NetApp Files for shared storage eliminates the need for
more GlusterFS clusters. Pacemaker is still needed for HA of the SAP NetWeaver central
services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Azure Load Balancer
Standard. The configuration here shows a load balancer with a:
1. Create the Azure NetApp Files account in the selected Azure region by following
the instructions to create an Azure NetApp Files account.
2. Set up an Azure NetApp Files capacity pool by following the instructions on how to
set up an Azure NetApp Files capacity pool. The SAP NetWeaver architecture
presented in this article uses a single Azure NetApp Files capacity pool, Premium
SKU. We recommend the Azure NetApp Files Premium SKU for the SAP NetWeaver
application workload on Azure.
Important considerations
When you consider Azure NetApp Files for the SAP NetWeaver on RHEL HA architecture,
be aware of the following important considerations:
The minimum capacity pool is 4 TiB. You can increase the capacity pool size in 1-TiB increments.
The minimum volume is 100 GiB.
Azure NetApp Files and all VMs where Azure NetApp Files volumes will be
mounted must be in the same Azure virtual network or in peered virtual networks
in the same region. Azure NetApp Files access over virtual network peering in the
same region is now supported. Azure NetApp Files access over global peering isn't
supported yet.
The selected virtual network must have a subnet delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level. For more information, see
Service level for Azure NetApp Files. When you size the SAP Azure NetApp
volumes, make sure that the resulting throughput meets the application
requirements.
Azure NetApp Files offers export policy. You can control the allowed clients and
the access type (like Read/Write and Read Only).
The Azure NetApp Files feature isn't zone aware yet. Currently, it isn't deployed
in all availability zones in an Azure region. Be aware of the potential latency
implications in some Azure regions.
You can deploy Azure NetApp Files volumes as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).
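Because volume throughput scales with the quota, it helps to do a quick back-of-envelope check before sizing. The sketch below uses assumed per-TiB figures (roughly 16 MiB/s for Standard, 64 MiB/s for Premium, and 128 MiB/s for Ultra per 1 TiB of quota); verify the current numbers against the Azure NetApp Files service-level documentation before sizing for real.

```shell
# Rough throughput estimate for an Azure NetApp Files volume.
# Per-TiB figures are assumptions; check the current service-level docs.
quota_tib=4                       # minimum capacity pool size in TiB
premium_mibps_per_tib=64          # assumed Premium SKU throughput per TiB
premium_throughput_mibps=$((quota_tib * premium_mibps_per_tib))
echo "A ${quota_tib}-TiB Premium quota yields ~${premium_throughput_mibps} MiB/s of throughput"
```

Compare the result against your SAP application's measured I/O requirements; if it falls short, increase the volume quota or move to a higher service level.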
Deploy VMs for SAP ASCS, ERS, and application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points.
1. Frontend IP configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
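The health probe ports used throughout this guide follow the 620&lt;instance no.&gt; (ASCS) and 621&lt;instance no.&gt; (ERS) convention. A minimal sketch of that convention, using this article's example instance numbers (ASCS 00, ERS 01):

```shell
# Probe-port convention: 620<instance no.> for ASCS, 621<instance no.> for ERS.
# Instance numbers below are this article's examples; substitute your own.
ascs_no=00
ers_no=01
ascs_probe_port="620${ascs_no}"
ers_probe_port="621${ers_no}"
echo "ASCS probe port: ${ascs_probe_port}, ERS probe port: ${ers_probe_port}"
```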
Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) standard load balancer, there's no outbound internet
connectivity unless more configuration is performed to allow routing to public
endpoints. For more information on how to achieve outbound connectivity, see
Public endpoint connectivity for VMs by using Azure Standard Load Balancer in
SAP high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps could cause the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.
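One hedged way to persist the recommended setting is a drop-in sysctl file. On a real node the target would be a file under /etc/sysctl.d/ followed by `sudo sysctl --system`; the sketch below writes to a staging file instead (the file name is an assumption) so it's safe to run anywhere.

```shell
# Sketch: persist net.ipv4.tcp_timestamps=0 for VMs behind Azure Load Balancer.
# Staging file used here; on a real node, write to /etc/sysctl.d/ instead.
conf="${TMPDIR:-/tmp}/91-sap-tcp-timestamps.conf"
echo "net.ipv4.tcp_timestamps = 0" > "$conf"
cat "$conf"
# On the real node, apply with: sudo sysctl --system
```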
Important
If there's a mismatch between the NFSv4.1 ID domain configuration on the NFS
client (that is, the VM) and the NFS server (that is, the Azure NetApp
configuration), then the permissions for files on Azure NetApp volumes that
are mounted on the VMs display as nobody .
Bash
# Example
[General]
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
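Using the virtual hostnames and example addresses that appear later in this article (anftstsapvh at 192.168.14.9 and anftstsapers at 192.168.14.10), the added entries could look like:

```text
# /etc/hosts additions (example values from this article; adjust to your environment)
192.168.14.9    anftstsapvh
192.168.14.10   anftstsapers
```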
Bash
Bash
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 192.168.24.5:/sapQAS /saptmp
# If using NFSv4.1
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 192.168.24.5:/sapQAS /saptmp
Bash
Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7 .
Bash
Bash
sudo vi /etc/fstab
Bash
sudo vi /etc/fstab
Note
Make sure to match the NFS protocol version of the Azure NetApp Files
volumes when you mount the volumes. If the Azure NetApp Files volumes are
created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the
Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the
instructions to disable ID mapping and make sure to use the corresponding
NFSv4.1 configuration. In this example, the Azure NetApp Files volumes were
created as NFSv3 volumes.
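As a hypothetical illustration of the NFSv3 case, the fstab entries could look like the following. The exported volume paths (sapmntQAS, trans, usrsapQASsys) are assumptions for this sketch; use the actual Azure NetApp Files volume paths from your deployment.

```text
# Hypothetical /etc/fstab entries (NFSv3; volume paths are assumptions)
192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 0 0
192.168.24.5:/trans /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 0 0
192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 0 0
```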
Bash
sudo mount -a
Bash
sudo vi /etc/waagent.conf
Bash
Based on the RHEL version, perform the configuration mentioned in SAP Note
2002167 , 2772999 , or 3108316 .
Bash
2. [1] Create a virtual IP resource and health probe for the ASCS instance.
Bash
# If using NFSv3
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
  --group g-QAS_ASCS
# If using NFSv4.1
sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \
  --group g-QAS_ASCS
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, anftstsapvh, 192.168.14.9, and the instance number that
you used for the probe of the load balancer, for example, 00.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
# machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
sudo chown qasadm /usr/sap/QAS/ASCS00
sudo chgrp sapsys /usr/sap/QAS/ASCS00
4. [1] Create a virtual IP resource and health probe for the ERS instance.
Bash
# If using NFSv3
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \
  --group g-QAS_AERS
# If using NFSv4.1
sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,nfsvers=4.1' \
  op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=105 \
  --group g-QAS_AERS
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, anftstsapers, 192.168.14.10, and the instance number that
you used for the probe of the load balancer, for example, 01.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
# machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
ASCS/SCS profile
Bash
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .
ERS profile
Bash
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile if you use ENSA1,
and change the Linux system keepalive settings on all SAP servers for both
ENSA1 and ENSA2. For more information, see SAP Note 1410736 .
Bash
To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash
sudo vi /usr/sap/sapservices
# Depending on whether the SAP Startup framework is integrated with systemd,
# you will observe one of the two entries on the ASCS node. You should
# comment out the line(s).
# LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm
# systemctl --no-ask-password start SAPQAS_00 # sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh
Important
With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of an SAP kernel with integrated systemd based SAP Startup
Framework support always results in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel that
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one that is systemd controlled.
When you use Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications are necessary to ensure compatibility between the
SAPInstance resource agent and the new systemd-based SAP startup
framework. So after the SAP application server instances have been installed
or switched to a systemd enabled SAP kernel as per SAP Note 3115048 , the
steps mentioned in Red Hat KBA 6884531 must be completed successfully
on all cluster nodes.
ENSA1
Bash
# If using NFSv3
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
  InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_ASCS
# If using NFSv4.1
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
  InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  op monitor interval=20 on-fail=restart timeout=105 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_ASCS
# If using NFSv3
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
  InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_AERS
# If using NFSv4.1
sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
  InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  op monitor interval=20 on-fail=restart timeout=105 \
  op start interval=0 timeout=600 op stop interval=0 timeout=600 \
  --group g-QAS_AERS
If you're upgrading from an older version and switching to enqueue server 2, see
SAP Note 2641322 .
Note
The higher timeouts suggested for NFSv4.1 are necessary due to a
protocol-specific pause related to NFSv4.1 lease renewals.
For more information, see NFS in NetApp best practice . The timeouts in
the preceding configuration are only examples and might need to be adapted
to the specific SAP setup.
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
10. [1] Run the following step to configure priority-fencing-delay (applicable only
for pacemaker-2.0.4-6.el8 or higher).
Note
If you have a two-node cluster, you have the option to configure the
priority-fencing-delay cluster property. This property introduces more delay
in fencing a node that has higher total resource priority when a split-brain
scenario occurs. For more information, see Can Pacemaker fence the cluster
node with the fewest running resources? .
Bash
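A dry-run sketch of the priority-fencing-delay setup, using this article's QAS resource names. The property and priority values are examples; the `run` wrapper prints the commands rather than executing them, since these only make sense on a real cluster node (where you would use `sudo pcs` directly).

```shell
# Dry-run: print the pcs commands for priority-fencing-delay (values are examples).
run() { echo "pcs $*"; }            # replace with 'sudo pcs' on a cluster node
cmds=$(
  run resource defaults update priority=1
  run resource update rsc_sap_QAS_ASCS00 meta priority=10
  run property set priority-fencing-delay=15s
)
printf '%s\n' "$cmds"
```

The higher priority on the ASCS resource means the node running ASCS wins the fence race in a split-brain situation.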
11. [A] Add firewall rules for ASCS and ERS on both nodes.
Bash
The following steps assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring
hostname resolution) aren't needed.
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
text
Bash
Bash
Bash
sudo vi /etc/fstab
Bash
sudo vi /etc/fstab
Bash
sudo mount -a
Bash
# Mount
sudo mount -a
Bash
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
# Mount
sudo mount -a
Bash
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=3
# Mount
sudo mount -a
Bash
sudo vi /etc/fstab
# Add the following line to fstab
192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys
# Mount
sudo mount -a
Bash
sudo vi /etc/waagent.conf
Bash
Install the SAP NetWeaver database instance as the root by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the database.
Bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
Bash
hdbuserstore List
Bash
KEY DEFAULT
ENV : 192.168.14.4:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the VM
and not to the load balancer's IP address. You need to change this entry to point to
the virtual hostname of the load balancer. Make sure to use the same port (30313
in the preceding output) and database name (QAS in the preceding output).
Bash
su - qasadm
hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
Next steps
To deploy a cost-optimization scenario where the PAS and AAS instances are
deployed with the SAP NetWeaver HA cluster on RHEL, see Install SAP dialog
instance with SAP ASCS/SCS high availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
Virtual Machines, see High availability of SAP HANA on Azure Virtual Machines.
Azure Virtual Machines high availability
for SAP NetWeaver on Red Hat
Enterprise Linux
Article • 01/19/2024
This article describes how to deploy virtual machines (VMs), configure the VMs, install
the cluster framework, and install a highly available SAP NetWeaver 7.50 system.
In the example configurations and installation commands, ASCS instance number 00,
ERS instance number 02, and SAP System ID NW1 are used. The names of the resources
(for example, VMs and virtual networks) in the example assume that you used the
ASCS/SCS template with Resource Prefix NW1 to create the resources.
Prerequisites
Read the following SAP Notes and papers first:
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux
(RHEL).
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is
configured in a separate cluster and multiple SAP systems can use it.
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Standard Azure Load
Balancer. The configuration here shows a load balancer with:
Set up GlusterFS
SAP NetWeaver requires shared storage for the transport and profile directory. To see
how to set up GlusterFS for SAP NetWeaver, see GlusterFS on Azure VMs on Red Hat
Enterprise Linux for SAP NetWeaver.
Deploy VMs for SAP ASCS, ERS, and application servers. Choose a suitable RHEL image
that's supported for the SAP system. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points.
1. Frontend IP configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
Frontend IP address: Select frontend IP
Backend pool: Select backend pool
Check "High availability ports"
Protocol: TCP
Health probe: Create the health probe with the following details (applies to
both ASCS and ERS)
Protocol: TCP
Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.>
for ERS]
Interval: 5
Probe Threshold: 2
Idle timeout (minutes): 30
Check "Enable Floating IP"
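The same probe and rule can also be scripted with Azure CLI. The sketch below is a dry run that only prints the commands; the resource names are placeholders and the flag names are assumptions — verify them against `az network lb probe create --help` and `az network lb rule create --help` for your CLI version before running.

```shell
# Dry-run: print hypothetical Azure CLI commands for the ASCS probe and rule.
run() { echo "az $*"; }    # replace with plain 'az' once verified and logged in
lb_cmds=$(
  run network lb probe create --resource-group MyResourceGroup --lb-name MySapILB \
    --name ascs-hp --protocol tcp --port 62000 --interval 5 --probe-threshold 2
  run network lb rule create --resource-group MyResourceGroup --lb-name MySapILB \
    --name ascs-lbrule --protocol All --frontend-port 0 --backend-port 0 \
    --frontend-ip-name ascs-frontend --backend-pool-name sap-backend \
    --probe-name ascs-hp --floating-ip true --idle-timeout 30
)
printf '%s\n' "$lb_cmds"
```

Protocol All with ports 0/0 corresponds to the "High availability ports" checkbox in the portal.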
Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there's no outbound
internet connectivity unless more configuration is performed to allow routing to
public endpoints. For more information on how to achieve outbound connectivity,
see Public endpoint connectivity for VMs using Azure Standard Load Balancer in
SAP high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
Set up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines to the /etc/hosts file. Change the IP address and
hostname to match your environment.
text
Bash
Bash
Make sure that the version of the installed resource-agents-sap package is at least
3.9.5-124.el7.
Bash
Bash
sudo vi /etc/fstab
Bash
sudo mount -a
Bash
sudo vi /etc/waagent.conf
Bash
sudo service waagent restart
Based on the RHEL version, perform the configuration mentioned in SAP Note
2002167 , SAP Note 2772999 , or SAP Note 3108316 .
Bash
2. [1] Create a virtual IP resource and health probe for the ASCS instance.
Bash
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ASCS as the root on the first node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ASCS, for example, nw1-ascs and 10.0.0.7, and the instance number that
you used for the probe of the load balancer, for example, 00.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
# machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
4. [1] Create a virtual IP resource and health probe for the ERS instance.
Bash
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
Install SAP NetWeaver ERS as the root on the second node by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the ERS, for example, nw1-aers and 10.0.0.8, and the instance number that you
used for the probe of the load balancer, for example, 02.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
# machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
Bash
ASCS/SCS profile:
Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP Note 1410736 .
ERS profile:
Bash
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using
ENSA1. Change the Linux system keepalive settings on all SAP servers for both
ENSA1 and ENSA2. For more information, see SAP Note 1410736 .
Bash
To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash
sudo vi /usr/sap/sapservices
# On the node where you installed the ASCS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_nw1-ascs -D -u nw1adm
# On the node where you installed the ERS, comment out the following line
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm
ENSA1
Bash
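The ENSA1 resource definitions here follow the same shape as the QAS example earlier in this document, adapted to this article's SID NW1 (ASCS 00, ERS 02, hostnames nw1-ascs and nw1-aers). The sketch below is a dry run that prints the commands instead of executing them; the timeouts are examples, and on a real cluster node you would use `sudo pcs` directly.

```shell
# Dry-run: print the ENSA1 SAPInstance resource commands for NW1 (values are examples).
run() { echo "pcs $*"; }    # replace with 'sudo pcs' on a cluster node
pcs_cmds=$(
  run resource create rsc_sap_NW1_ASCS00 SAPInstance \
    InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE=/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
    op monitor interval=20 on-fail=restart timeout=60 \
    op start interval=0 timeout=600 op stop interval=0 timeout=600 \
    --group g-NW1_ASCS
  run resource create rsc_sap_NW1_ERS02 SAPInstance \
    InstanceName=NW1_ERS02_nw1-aers START_PROFILE=/sapmnt/NW1/profile/NW1_ERS02_nw1-aers \
    AUTOMATIC_RECOVER=false IS_ERS=true \
    op monitor interval=20 on-fail=restart timeout=60 \
    op start interval=0 timeout=600 op stop interval=0 timeout=600 \
    --group g-NW1_AERS
)
printf '%s\n' "$pcs_cmds"
```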
Note
The timeouts in the preceding configuration are only examples and might
need to be adapted to the specific SAP setup.
Make sure that the cluster status is okay and that all resources are started. Which
node the resources are running on isn't important.
Bash
10. [A] Add firewall rules for ASCS and ERS on both nodes.
Bash
You can either use a DNS server or modify the /etc/hosts file on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands:
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and hostname to
match your environment.
Bash
Bash
Bash
sudo yum -y install glusterfs-fuse uuidd
Bash
sudo vi /etc/fstab
Bash
sudo mount -a
Bash
sudo vi /etc/waagent.conf
Bash
Install the SAP NetWeaver database instance as the root by using a virtual
hostname that maps to the IP address of the load balancer front-end configuration
for the database, for example, nw1-db and 10.0.0.13.
Bash
Follow the steps in the previous section SAP NetWeaver application server
preparation to prepare the application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
hdbuserstore List
text
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: NW1
The output shows that the IP address of the default entry is pointing to the VM
and not to the load balancer's IP address. This entry needs to be changed to point
to the virtual hostname of the load balancer. Make sure to use the same port
(30313 in the preceding output) and database name (NW1 in the preceding
output).
Bash
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 <password of ABAP schema>
text
Bash
# Remove failed actions for the ERS that occurred as part of the
migration
[root@nw1-cl-0 ~]# pcs resource cleanup rsc_sap_NW1_ERS02
text
Run the following command as root on the node where the ASCS instance is
running.
Bash
The status after the node is started again should look like:
text
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-0 'not running' (7): call=45, status=complete, exitreason='',
    last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms
Bash
text
text
Bash
When cluster nodes can't communicate with each other, there's a risk of a split-
brain scenario. In such situations, cluster nodes try to simultaneously fence each
other, which results in a fence race. To avoid this situation, we recommend that you
set a priority-fencing-delay property in a cluster configuration (applicable only
for pacemaker-2.0.4-6.el8 or higher).
Bash
# If the iptables rule set on the server gets reset after a reboot, the rules
# will be cleared out. In case they have not been reset, please proceed to
# remove the iptables rule using the following command.
iptables -D INPUT -s 10.0.0.8 -j DROP; iptables -D OUTPUT -d 10.0.0.8 -j DROP
text
Run the following commands as root to identify the process of the message server
and kill it.
Bash
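A sketch of the kill sequence for this test. The process pattern `ms.sapNW1` follows the conventional message server process naming for SID NW1; treat it as an assumption and confirm the exact name with `pgrep -fl ms.sap` on your node first. The snippet only prints the command, since it must run as root on the ASCS node.

```shell
# Dry-run: the message-server kill used in this failover test.
# 'ms.sapNW1' is an assumed process pattern; verify with 'pgrep -fl ms.sap'.
kill_cmd='pgrep -f ms.sapNW1 | xargs -r kill -9'
echo "Run as root on the ASCS node: $kill_cmd"
```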
If you kill the message server only once, sapstart restarts it. If you kill it often
enough, Pacemaker eventually moves the ASCS instance to the other node. Run
the following commands as root to clean up the resource state of the ASCS and
ERS instance after the test.
Bash
Bash
text
Run the following commands as root on the node where the ASCS instance is
running to kill the enqueue server.
Bash
In the case of ENSA1, the ASCS instance should immediately fail over to the other
node. The ERS instance should also fail over after the ASCS instance is started.
Run the following commands as root to clean up the resource state of the ASCS
and ERS instance after the test.
Bash
text
text
Run the following command as root on the node where the ERS instance is
running to kill the enqueue replication server process.
Bash
If you run the command only once, sapstart restarts the process. If you run it
often enough, sapstart won't restart the process and the resource is in a stopped
state. Run the following commands as root to clean up the resource state of the
ERS instance after the test.
Bash
text
text
Run the following commands as root on the node where the ASCS is running.
Bash
text
rsc_st_azure          (stonith:fence_azure_arm):      Started nw1-cl-0
Resource Group: g-NW1_ASCS
    fs_NW1_ASCS        (ocf::heartbeat:Filesystem):   Started nw1-cl-0
    nc_NW1_ASCS        (ocf::heartbeat:azure-lb):     Started nw1-cl-0
    vip_NW1_ASCS       (ocf::heartbeat:IPaddr2):      Started nw1-cl-0
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance):  Started nw1-cl-0
Resource Group: g-NW1_AERS
    fs_NW1_AERS        (ocf::heartbeat:Filesystem):   Started nw1-cl-1
    nc_NW1_AERS        (ocf::heartbeat:azure-lb):     Started nw1-cl-1
    vip_NW1_AERS       (ocf::heartbeat:IPaddr2):      Started nw1-cl-1
    rsc_sap_NW1_ERS02  (ocf::heartbeat:SAPInstance):  Started nw1-cl-1
Next steps
To deploy a cost-optimization scenario where the PAS and AAS instances are
deployed with the SAP NetWeaver HA cluster on RHEL, see Install SAP dialog instance
with SAP ASCS/SCS high availability VMs on RHEL.
See HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide.
See Azure Virtual Machines planning and implementation for SAP.
See Azure Virtual Machines deployment for SAP.
See Azure Virtual Machines DBMS deployment for SAP.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
(large instances), see SAP HANA (large instances) high availability and disaster
recovery on Azure.
To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure
VMs, see High availability of SAP HANA on Azure Virtual Machines.
GlusterFS on Azure VMs on Red Hat
Enterprise Linux for SAP NetWeaver
Article • 07/04/2023
This article describes how to deploy and configure the virtual machines and install a
GlusterFS cluster that can be used to store the shared data of a highly available SAP
system. This guide describes how to set up GlusterFS used by two SAP systems, NW1
and NW2. The names of the resources (for example, virtual machines and virtual
networks) in the example assume that you used the SAP file server template with
resource prefix glust.
As documented in the Red Hat Gluster Storage Life Cycle, Red Hat Gluster Storage
reaches end of life at the end of 2024. The configuration is supported for SAP on Azure
until it reaches the end-of-life stage. Don't use GlusterFS for new deployments. We
recommend deploying the SAP shared directories on NFS on Azure Files or Azure
NetApp Files volumes, as documented in HA for SAP NW on RHEL with NFS on Azure
Files or HA for SAP NW on RHEL with Azure NetApp Files.
SAP Note 2002167 has recommended OS settings for Red Hat Enterprise Linux.
SAP Note 2009879 has SAP HANA Guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Overview
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is
configured in a separate cluster and can be used by multiple SAP systems.
Set up GlusterFS
In this example, the resources were deployed manually via the Azure portal .
Deploy the virtual machines for GlusterFS. Choose a suitable RHEL image that's
supported for Gluster storage. You can deploy a VM in any one of the availability
options: virtual machine scale set, availability zone, or availability set.
Configure GlusterFS
The following items are prefixed with [A] (applicable to all nodes), [1] (only
applicable to node 1), [2] (only applicable to node 2), or [3] (only applicable to
node 3).
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment
text
2. [A] Register
Register your virtual machines and attach them to a pool that contains repositories
for RHEL 7 and GlusterFS.
Bash
Bash
Bash
sudo yum -y install redhat-storage-server
Bash
Bash
Bash
# Number of Peers: 2
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Accepted peer request (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Accepted peer request (Connected)
8. [2] Test peer status
Bash
Bash
In this example, the GlusterFS is used for two SAP systems, NW1 and NW2. Use the
following commands to create LVM configurations for these SAP systems.
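A dry-run sketch of one plausible LVM layout for NW1, derived from the logical volume names that the fstab entries below reference (/dev/rhgs-NW1/sapmnt, trans, sys, ascs, aers). The device path, extent size, and 20%VG split are assumptions; adapt them to your attached data disk and sizing needs. The `run` wrapper prints the commands instead of executing them.

```shell
# Dry-run: print a hypothetical LVM setup for the rhgs-NW1 volume group.
run() { echo "sudo $*"; }    # remove the wrapper to actually execute as root
lvm_cmds=$(
  run pvcreate /dev/disk/azure/scsi1/lun0
  run vgcreate --physicalextentsize 256K rhgs-NW1 /dev/disk/azure/scsi1/lun0
  for lv in sapmnt trans sys ascs aers; do
    run lvcreate -l 20%VG -n "$lv" rhgs-NW1      # equal split is an assumption
    run mkfs.xfs "/dev/rhgs-NW1/$lv"
  done
)
printf '%s\n' "$lvm_cmds"
```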
Bash
echo -e "/dev/rhgs-NW1/sapmnt\t/rhs/NW1/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/trans\t/rhs/NW1/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/sys\t/rhs/NW1/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/ascs\t/rhs/NW1/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-NW1/aers\t/rhs/NW1/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
sudo mount -a
Bash
echo -e "/dev/rhgs-
NW2/sapmnt\t/rhs/NW2/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,no
uuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/trans\t/rhs/NW2/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouu
id 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/sys\t/rhs/NW2/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0
2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/ascs\t/rhs/NW2/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-
NW2/aers\t/rhs/NW2/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid
0 2" | sudo tee -a /etc/fstab
sudo mount -a
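The ten echo commands above follow one pattern, so they can also be generated with a short loop instead of being typed by hand. This is a sketch that only prints the entries; pipe it through sudo tee -a /etc/fstab to apply it, as in the manual commands above.

```shell
# Print one fstab entry per GlusterFS brick file system, for both
# SAP systems (NW1 and NW2) and all five shares.
emit_fstab() {
  local sid vol
  for sid in NW1 NW2; do
    for vol in sapmnt trans sys ascs aers; do
      printf '/dev/rhgs-%s/%s\t/rhs/%s/%s\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2\n' \
        "$sid" "$vol" "$sid" "$vol"
    done
  done
}

# To apply (equivalent to the manual commands above):
# emit_fstab | sudo tee -a /etc/fstab && sudo mount -a
```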
Use the following commands to create the GlusterFS volume for NW1 and start it.
Bash
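The volume commands didn't survive in this copy. A hedged sketch for the sapmnt share, assuming three cluster nodes named glust-0, glust-1, and glust-2 (glust-1 and glust-2 appear in the peer status output above; glust-0 is assumed) and the brick directories created under /rhs/NW1. Repeat per share (trans, sys, ascs, aers), and analogously for NW2.

```shell
# Create a replica-3 GlusterFS volume for /sapmnt of NW1 from the
# bricks on the three nodes, then start it. Requires a running
# glusterd and an established trusted storage pool.
sudo gluster volume create NW1-sapmnt replica 3 \
  glust-0:/rhs/NW1/sapmnt glust-1:/rhs/NW1/sapmnt glust-2:/rhs/NW1/sapmnt force
sudo gluster volume start NW1-sapmnt
```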
Use the following commands to create the GlusterFS volume for NW2 and start it.
Bash
Next steps
Install the SAP ASCS and database
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability
and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs).
High availability for SAP NetWeaver on
Azure VMs on Red Hat Enterprise Linux
for SAP applications multi-SID
Article • 01/18/2024
This article describes how to deploy multiple SAP NetWeaver highly available systems
(multi-SID) in a two node cluster on Azure VMs with Red Hat Enterprise Linux for SAP
applications.
In the example configurations, three SAP NetWeaver 7.50 systems are deployed in a
single, two node high availability cluster. The SAP systems SIDs are:
NW1 : ASCS instance number 00 and virtual host name msnw1ascs . ERS instance number 02 and virtual host name msnw1ers .
NW2 : ASCS instance number 10 and virtual host name msnw2ascs . ERS instance number 12 and virtual host name msnw2ers .
NW3 : ASCS instance number 20 and virtual host name msnw3ascs . ERS instance number 22 and virtual host name msnw3ers .
The article doesn't cover the database layer and the deployment of the SAP NFS shares.
The examples in this article use the Azure NetApp Files volume sapMSID for the NFS
shares, assuming that the volume is already deployed. The examples assume that the
Azure NetApp Files volume is deployed with NFSv3 protocol. They use the following file
paths for the cluster resources for the ASCS and ERS instances of SAP systems NW1 , NW2 ,
and NW3 :
Overview
The virtual machines that participate in the cluster must be sized to run all
resources in case failover occurs. Each SAP SID can fail over independently of the
others in the multi-SID high availability cluster.
To achieve high availability, SAP NetWeaver requires highly available shares. This article
shows examples with the SAP shares deployed on Azure NetApp Files NFS volumes. You
could instead host the shares on a highly available GlusterFS cluster, which can be used
by multiple SAP systems.
) Important
The support for multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest
operating system in Azure VMs is limited to five SAP SIDs on the same cluster. Each
new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and
Enqueue Replication Server 2 on the same cluster is not supported. Multi-SID
clustering describes the installation of multiple SAP ASCS/ERS instances with
different SIDs in one Pacemaker cluster. Currently multi-SID clustering is only
supported for ASCS/ERS.
Tip
SAP NetWeaver ASCS, SAP NetWeaver SCS, and SAP NetWeaver ERS use virtual
host names and virtual IP addresses. On Azure, a load balancer is required to use a virtual
IP address. We recommend using a standard load balancer.
Frontend IP addresses for ASCS: 10.3.1.50 (NW1), 10.3.1.52 (NW2), and 10.3.1.54
(NW3)
Frontend IP addresses for ERS: 10.3.1.51 (NW1), 10.3.1.53 (NW2), and 10.3.1.55
(NW3)
Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS, and 62020 for NW3 ASCS
Probe port 62102 for NW1 ERS, 62112 for NW2 ERS, and 62122 for NW3 ERS
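The probe ports above follow a simple convention in this example: 62000 plus the two-digit ASCS instance number, and 62100 plus the ERS instance number. A small sketch of that mapping; the bases are this article's convention, not a fixed Azure rule.

```shell
# Map a two-digit instance number to the probe port used in this
# example: ASCS instances use base 62000, ERS instances base 62100.
# ${1#0} strips a single leading zero so "00"/"02" aren't misread.
ascs_probe() { echo $((62000 + ${1#0})); }
ers_probe()  { echo $((62100 + ${1#0})); }
```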
) Important
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there is no outbound internet
connectivity, unless additional configuration is performed to allow routing to public
end points. For details on how to achieve outbound connectivity see Public
endpoint connectivity for Virtual Machines using Azure Standard Load Balancer
in SAP high-availability scenarios.
) Important
Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set parameter
net.ipv4.tcp_timestamps to 0. For more information, see Load Balancer health
probes.
SAP shares
SAP NetWeaver requires shared storage for the transport directory, profile directory, and
so on. For a highly available SAP system, it's important to have highly available shares. You need
to decide on the architecture for your SAP shares. One option is to deploy the shares on
Azure NetApp Files NFS volumes. With Azure NetApp Files, you get built-in high
availability for the SAP NFS shares.
Another option is to build GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP
NetWeaver, which can be shared between multiple SAP systems.
If you use Azure NetApp Files NFS volumes, follow Azure VMs high availability for
SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP
applications.
If you use GlusterFS cluster, follow GlusterFS on Azure VMs on Red Hat Enterprise
Linux for SAP NetWeaver.
These articles guide you through the steps to prepare the necessary infrastructure, build
the cluster, prepare the OS for running the SAP application.
Tip
Always test the failover functionality of the cluster after the first system is deployed,
before adding the additional SAP SIDs to the cluster. That way, you know that the
cluster functionality works, before adding the complexity of additional SAP systems
to the cluster.
Prerequisites
) Important
Before following the instructions to deploy additional SAP systems in the cluster,
deploy the first SAP system in the cluster. There are steps which are only necessary
during the first system deployment.
2. [A] Set up name resolution for the additional SAP systems. You can use either a DNS
server or modify /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Adapt the IP addresses and the host names to your environment.
sudo vi /etc/hosts
# IP address of the load balancer frontend configuration for NW2 ASCS
10.3.1.52 msnw2ascs
# IP address of the load balancer frontend configuration for NW3 ASCS
10.3.1.54 msnw3ascs
# IP address of the load balancer frontend configuration for NW2 ERS
10.3.1.53 msnw2ers
# IP address of the load balancer frontend configuration for NW3 ERS
10.3.1.55 msnw3ers
3. [A] Create the shared directories for the NW2 and NW3 SAP systems to deploy to
the cluster.
4. [A] Add the mount entries for the /sapmnt/SID and /usr/sap/SID/SYS file systems
for the other SAP systems that you're deploying to the cluster. In this example, it's
NW2 and NW3 .
Update file /etc/fstab with the file systems for the other SAP systems that you're
deploying to the cluster.
If using Azure NetApp Files, follow the instructions in Azure VMs high
availability for SAP NW on RHEL with Azure NetApp Files.
If using GlusterFS cluster, follow the instructions in Azure VMs high
availability for SAP NW on RHEL.
Make sure the cluster status is ok and that all resources are started. It's not
important on which node the resources are running.
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS. For example, for
system NW2 , the virtual hostname is msnw2ascs , the IP address is 10.3.1.52 , and the
instance number used for the probe of the load balancer is, for example, 10 . For
system NW3 , the virtual hostname is msnw3ascs , the IP address is 10.3.1.54 , and the
instance number is, for example, 20 . Note down which cluster node you installed
ASCS on for each SAP SID.
3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of
the other SAP system you're deploying to the cluster. This example is for NW2 and
NW3 ERS, using NFS on Azure NetApp Files volumes with NFSv3 protocol.
Make sure the cluster status is ok and that all resources are started.
Next, make sure that the resources of the newly created ERS group are running on
the cluster node, opposite to the cluster node where the ASCS instance for the
same SAP system was installed. For example, if NW2 ASCS was installed on
rhelmsscl1 , then make sure the NW2 ERS group is running on rhelmsscl2 . You
can migrate the NW2 ERS group to rhelmsscl2 by running the following command
for one of the cluster resources in the group:
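The move command itself didn't survive in this copy. A hedged example, assuming the NW2 ERS instance resource is named rsc_sap_NW2_ERS12, matching the naming pattern used elsewhere in this article (such as rsc_sap_NW3_ERS22). This requires a live Pacemaker cluster.

```shell
# Moving one resource of the group moves the whole NW2 ERS group to
# the other node. Remember to remove the resulting location
# constraint afterward, as noted later in this article.
sudo pcs resource move rsc_sap_NW2_ERS12 rhelmsscl2
```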
Install SAP NetWeaver ERS as root on the other node, using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS. For example, for system NW2 , the virtual hostname is msnw2ers , the IP address
is 10.3.1.53 , and the instance number used for the probe of the load balancer is, for
example, 12 . For system NW3 , the virtual hostname is msnw3ers , the IP address is
10.3.1.55 , and the instance number is, for example, 22 .
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
sudo swpm/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
7 Note
If it was necessary for you to migrate the ERS group of the newly deployed SAP
system to a different cluster node, don't forget to remove the location constraint
for the ERS group. You can remove the constraint by running the following
command. This example is given for SAP systems NW2 and NW3 . Make sure to
remove the temporary constraints for the same resource you used in the command
to move the ERS cluster group.
5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP
systems. The example shown below is for NW2 . You need to adapt the ASCS/SCS
and ERS profiles for all SAP instances added to the cluster.
ASCS/SCS profile
sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 .
ERS profile
sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
To prevent the sapinit startup script from starting the instances, all instances
managed by Pacemaker must be commented out in the /usr/sap/sapservices file.
The example shown below is for SAP systems NW2 and NW3 .
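A hedged sketch of commenting those entries out with sed rather than editing the file by hand. The helper name and the exact line format of /usr/sap/sapservices are assumptions, so review the file after running it.

```shell
# Prefix every still-active sapservices line that references an ASCS
# or ERS instance of the given SIDs with '#', leaving other SIDs
# untouched.
comment_sapservices() {
  local file="$1"; shift
  local sid
  for sid in "$@"; do
    sed -i -E "s@^([^#].*/${sid}/(ASCS|ERS)[0-9]{2}/.*)@#\1@" "$file"
  done
}

# Usage (as root): comment_sapservices /usr/sap/sapservices NW2 NW3
```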
) Important
With the systemd based SAP Startup Framework, SAP instances can now be
managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL)
version is RHEL 8 for SAP. As described in SAP Note 3115048 , a fresh
installation of a SAP kernel with integrated systemd based SAP Startup
Framework support will always result in a systemd controlled SAP instance.
After an SAP kernel upgrade of an existing SAP installation to a kernel which
has systemd based SAP Startup Framework support, however, some manual
steps have to be performed as documented in SAP Note 3115048 to convert
the existing SAP startup environment to one which is systemd controlled.
When utilizing Red Hat HA services for SAP (cluster configuration) to manage
SAP application server instances such as SAP ASCS and SAP ERS, additional
modifications will be necessary to ensure compatibility between the
SAPInstance resource agent and the new systemd-based SAP startup
framework. So once the SAP application server instances have been installed or
switched to a systemd enabled SAP Kernel as per SAP Note 3115048 , the
steps mentioned in Red Hat KBA 6884531 must be completed successfully
on all cluster nodes.
7. [1] Create the SAP cluster resources for the newly installed SAP system.
ENSA1
Bash
If you're upgrading from an older version and switching to enqueue server 2, see
SAP note 2641019 .
7 Note
The timeouts in the above configuration are just examples and might need to
be adapted to the specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It's not
important on which node the resources are running. The following example shows
the cluster resources status, after SAP systems NW2 and NW3 were added to the
cluster.
Bash
8. [A] Add firewall rules for ASCS and ERS on both nodes. The example below shows
the firewall rules for both SAP systems NW2 and NW3 .
Bash
# NW2 - ASCS
sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp
# NW2 - ERS
sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp
# NW3 - ASCS
sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp
# NW3 - ERS
sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp --permanent
sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp
If you use Azure NetApp Files NFS volumes, follow Azure VMs high availability for
SAP NetWeaver on RHEL with Azure NetApp Files for SAP applications
If you use highly available GlusterFS , follow Azure VMs high availability for SAP
NetWeaver on RHEL for SAP applications.
Always read the Red Hat best practices guides and perform all other tests that might
have been added. The tests that are presented are in a two-node, multi-SID cluster with
three SAP systems installed.
1. Manually migrate the ASCS instance. The example shows migrating the ASCS
instance for SAP system NW3.
Run the following commands as root to migrate the NW3 ASCS instance.
# Remove failed actions for the ERS that occurred as part of the
migration
pcs resource cleanup rsc_sap_NW3_ERS22
Run the following command as root on a node where at least one ASCS instance is
running. This example runs the command on rhelmsscl1 , where the ASCS
instances for NW1 , NW2 , and NW3 are running.
After the test, and after the crashed node has started again, the cluster status
should look like these results:
If there are messages for failed resources, clean the status of the failed resources.
For example:
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA
on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines (VMs).
High-availability SAP NetWeaver with
simple mount and NFS on SLES for SAP
Applications VMs
Article • 05/06/2024
This article describes how to deploy and configure Azure virtual machines (VMs), install
the cluster framework, and install a high-availability (HA) SAP NetWeaver system with a
simple mount structure. You can implement the presented architecture by using one of
the following Azure native Network File System (NFS) services:
The simple mount configuration is expected to be the default for new implementations
on SLES for SAP Applications 15.
Prerequisites
The following guides contain all the required information to set up a NetWeaver HA
system:
SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster With Simple
Mount
Use of Filesystem resource for ABAP SAP Central Services (ASCS)/ERS HA setup not
possible
SAP Note 1928533 , which has:
A list of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, operating systems (OSs), and combinations
The required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 , which lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 , which has recommended OS settings for SUSE Linux
Enterprise Server (SLES) for SAP Applications
SAP Note 2178632 , which has detailed information about all monitoring metrics
reported for SAP in Azure
SAP Note 2191498 , which has the required SAP Host Agent version for Linux in
Azure
SAP Note 2243692 , which has information about SAP licensing on Linux in Azure
SAP Note 2578899 , which has general information about SUSE Linux Enterprise
Server 15
SAP Note 1275776 , which has information about preparing SUSE Linux
Enterprise Server for SAP environments
SAP Note 1999351 , which has additional troubleshooting information for the
Azure Enhanced Monitoring Extension for SAP
SAP community wiki , which has all required SAP Notes for Linux
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SUSE SAP HA best practice guides
SUSE High Availability Extension release notes
Azure Files documentation
NetApp NFS best practices
Overview
This article describes a high-availability configuration for ASCS with a simple mount
structure. To deploy the SAP application layer, you need shared directories like
/sapmnt/SID , /usr/sap/SID , and /usr/sap/trans , which are highly available. You can
deploy these file systems on NFS on Azure Files or Azure NetApp Files.
Compared to the classic Pacemaker cluster configuration, with the simple mount
deployment, the cluster doesn't manage the file systems. This configuration is
supported only on SLES for SAP Applications 15 and later. This article doesn't cover the
database layer in detail.
The example configurations and installation commands use the following instance
numbers.
ASCS: instance number 00
ERS: instance number 01
) Important
The configuration with simple mount structure is supported only on SLES for SAP
Applications 15 and later releases.
Deploy virtual machines with an SLES for SAP Applications image. Choose a suitable
SLES image version that is supported for your SAP system. You can deploy a VM in any of
the availability options: virtual machine scale set, availability zone, or availability set.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system by using the Azure portal. During the setup of the load balancer,
consider the following points.
1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
7 Note
The health probe configuration property numberOfProbes, otherwise known as
"Unhealthy threshold" in the portal, isn't respected. To control the number of
successful or failed consecutive probes, set the property "probeThreshold" to
2. It's currently not possible to set this property by using the Azure portal, so
use either the Azure CLI or the PowerShell command.
) Important
7 Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there will be no
outbound internet connectivity unless you perform additional configuration to
allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more information, see saptune 3.1.1 – Do I Need to Update? .
Deploy NFS
There are two options for deploying Azure native NFS to host the SAP shared
directories. You can either deploy an NFS file share on Azure Files or deploy an NFS
volume on Azure NetApp Files. NFS on Azure Files supports the NFSv4.1 protocol. NFS
on Azure NetApp Files supports both NFSv4.1 and NFSv3.
The next sections describe the steps to deploy NFS. Select only one of the options.
Locally redundant storage (LRS) offers local, in-zone synchronous data replication.
Zone-redundant storage (ZRS) replicates your data synchronously across the three
availability zones in the region.
Check if your selected Azure region offers NFSv4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region for Premium Files
Storage. If your scenario benefits from ZRS, verify that premium file shares with ZRS are
supported in your Azure region.
We recommend that you access your Azure storage account through an Azure private
endpoint. Be sure to deploy the Azure Files storage account endpoint, and the VMs
where you need to mount the NFS shares, in the same Azure virtual network or in
peered Azure virtual networks.
1. Deploy an Azure Files storage account named sapnfsafs. This example uses ZRS. If
you're not familiar with the process, see Create a storage account for the Azure
portal.
2. On the Basics tab, use these settings:
a. For Storage account name, enter sapnfsafs.
b. For Performance, select Premium.
c. For Premium account type, select FileStorage.
d. For Replication, select Zone redundancy (ZRS).
3. Select Next.
4. On the Advanced tab, clear Require secure transfer for REST API. If you don't clear
this option, you can't mount the NFS share to your VM. The mount operation will
time out.
5. Select Next.
6. In the Networking section, configure these settings:
a. Under Networking connectivity, for Connectivity method, select Private
endpoint.
b. Under Private endpoint, select Add private endpoint.
7. On the Create private endpoint pane, select your subscription, resource group,
and location. Then make the following selections:
a. For Name, enter sapnfsafs_pe.
b. For Storage sub-resource, select file.
c. Under Networking, for Virtual network, select the virtual network and subnet to
use. Again, you can use either the virtual network where your SAP VMs are or a
peered virtual network.
d. Under Private DNS integration, accept the default option of Yes for Integrate
with private DNS zone. Be sure to select your private DNS zone.
e. Select OK.
8. On the Networking tab again, select Next.
9. On the Data protection tab, keep all the default settings.
10. Select Review + create to validate your configuration.
11. Wait for the validation to finish. Fix any issues before continuing.
12. On the Review + create tab, select Create.
Next, deploy the NFS shares in the storage account that you created. In this example,
there are two NFS shares, sapnw1 and saptrans .
The SAP file systems that don't need to be mounted via NFS can also be deployed on
Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.
3. Set up the Azure NetApp Files capacity pool. Follow these instructions.
The SAP NetWeaver architecture presented in this article uses a single Azure
NetApp Files capacity pool, Premium SKU. We recommend Azure NetApp Files
Premium SKU for SAP NetWeaver application workloads on Azure.
5. Deploy Azure NetApp Files volumes by following these instructions. Deploy the
volumes in the designated Azure NetApp Files subnet. The IP addresses of the
Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure virtual network or in peered Azure virtual networks. This example
uses two Azure NetApp Files volumes: sapnw1 and trans . The file paths that are
mounted to the corresponding mount points are:
The SAP file systems that don't need to be shared can also be deployed on Azure disk
storage. For example, /usr/sap/NW1/D02 and /usr/sap/NW1/D03 could be deployed as
Azure disk storage.
When you're considering Azure NetApp Files for the SAP NetWeaver high-availability
architecture, be aware of the following important considerations:
The minimum capacity pool is 4 tebibytes (TiB). You can increase the size of the
capacity pool in 1-TiB increments.
The minimum volume is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes are
mounted must be in the same Azure virtual network or in peered virtual networks
in the same region. Azure NetApp Files access over virtual network peering in the
same region is supported. Azure NetApp Files access over global peering isn't yet
supported.
The selected virtual network must have a subnet that's delegated to Azure NetApp
Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service level
for Azure NetApp Files. When you're sizing the Azure NetApp Files volumes for
SAP, make sure that the resulting throughput meets the application's
requirements.
Azure NetApp Files offers an export policy. You can control the allowed clients and
the access type (for example, read/write or read-only).
Azure NetApp Files isn't zone aware yet. Currently, Azure NetApp Files isn't
deployed in all availability zones in an Azure region. Be aware of the potential
latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).
Set up ASCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
Bash
Bash
sudo zypper install sapstartsrv-resource-agents
To use the configuration that this article describes, you need a patch for the
resource-agents package. To check if the patch is already installed, use the
following command.
Bash
Bash
If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page .
) Important
You can either use a DNS server or modify /etc/hosts on all nodes. This example
shows how to use the /etc/hosts file.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and host name to
match your environment.
Bash
# IP address of cluster node 1
10.27.0.6 sap-cl1
# IP address of cluster node 2
10.27.0.7 sap-cl2
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ASCS
10.27.0.9 sapascs
# IP address of the load balancer's front-end configuration for SAP
NetWeaver ERS
10.27.0.10 sapers
Bash
sudo vi /etc/waagent.conf
Bash
Temporarily mount the NFS share sapnw1 to one of the VMs and create the SAP
directories that will be used as nested mount points.
Bash
# Temporarily mount the volume.
sudo mkdir -p /saptmp
sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys
# Create the SAP directories.
cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
# Unmount the volume and delete the temporary directory.
cd ..
sudo umount /saptmp
sudo rmdir /saptmp
Bash
With the simple mount configuration, the Pacemaker cluster doesn't control the
file systems.
Bash
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1
/sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/
/usr/sap/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans
/usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >>
/etc/fstab
# Mount the file systems.
mount -a
a. Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, defaultv4iddomain.com . Also verify that the
mapping is set to nobody .
Bash
Bash
# Check nfs4_disable_idmapping.
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
# If you need to set nfs4_disable_idmapping to Y:
mkdir /mnt/tmp
mount 10.27.1.5:/sapnw1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
# Make the configuration permanent.
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf
2. [1] Temporarily mount the Azure NetApp Files volume on one of the VMs and
create the SAP directories (file paths).
Bash
# Temporarily mount the volume.
sudo mkdir -p /saptmp
# If you're using NFSv3:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=3,tcp 10.27.1.5:/sapnw1 /saptmp
# If you're using NFSv4.1:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,nfsvers=4.1,sec=sys,tcp 10.27.1.5:/sapnw1 /saptmp
# Create the SAP directories.
cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
# Unmount the volume and delete the temporary directory.
cd ..
sudo umount /saptmp
sudo rmdir /saptmp
Bash
With the simple mount configuration, the Pacemaker cluster doesn't control the
file systems.
Bash
) Important
Bash
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl1
# Resource Group: g-NW1_ASCS
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
Bash
Bash
3. [1] Create a virtual IP resource and health probe for the ERS instance.
Bash
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
Use a virtual host name that maps to the IP address of the load balancer's front-
end configuration for ERS (for example, sapers , 10.27.0.10 ) and the instance
number that you used for the probe of the load balancer (for example, 01 ).
Bash
<swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
7 Note
Bash
Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
For Standalone Enqueue Server 1 and 2 (ENSA1 and ENSA2), make sure that the
keepalive OS parameters are set as described in SAP Note 1410736 .
Bash
sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
To prevent this disconnection, if you're using ENSA1, you need to set a parameter
in the SAP NetWeaver ASCS profile. Change the Linux system keepalive settings
on all SAP servers for both ENSA1 and ENSA2. For more information, read SAP
Note 1410736 .
Bash
Bash
8. [1] Add the ASCS and ERS SAP services to the sapservice file.
Add the ASCS service entry to the second node, and copy the ERS service entry to
the first node.
Bash
9. [A] Enable sapping and sappong . The sapping agent runs before sapinit to hide
the /usr/sap/sapservices file. The sappong agent runs after sapinit to unhide the
sapservices file during VM boot. SAPStartSrv isn't started automatically for an
SAP instance at boot time, because the Pacemaker cluster manages it.
Bash
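The enable commands were stripped from this copy. A hedged sketch, assuming the systemd unit names match the agent names shipped with the sapstartsrv-resource-agents package (verify with systemctl list-unit-files). This requires a system with those units installed.

```shell
# Enable the boot-time helpers: sapping hides /usr/sap/sapservices
# before sapinit runs, and sappong unhides it afterward.
sudo systemctl enable sapping
sudo systemctl enable sappong
```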
10. [1] Create the SAPStartSrv resources for ASCS and ERS by creating a file and then
loading it.
Bash
vi crm_sapstartsrv.txt
Bash
Bash
7 Note
ENSA1
Bash
If you're upgrading from an older version and switching to ENSA2, see SAP Note
2641019 .
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
# Resource Group: g-NW1_ASCS
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
# rsc_sapstartsrv_NW1_ASCS00 (ocf::suse:SAPStartSrv): Started
sap-cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
# Resource Group: g-NW1_ERS
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
# rsc_sapstartsrv_NW1_ERS01 (ocf::suse:SAPStartSrv): Started
sap-cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
The following common steps assume that you install the application server on a server
that's different from the ASCS and HANA servers:
You can either use a DNS server or modify /etc/hosts on all nodes. This example
shows how to use the /etc/hosts file.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts . Change the IP address and host name to
match your environment.
Bash
10.27.0.6 sap-cl1
10.27.0.7 sap-cl2
# IP address of the load balancer's front-end configuration for SAP NetWeaver ASCS
10.27.0.9 sapascs
# IP address of the load balancer's front-end configuration for SAP NetWeaver ERS
10.27.0.10 sapers
10.27.0.8 sapa01
10.27.0.12 sapa02
Bash
sudo vi /etc/waagent.conf
Bash
Bash
Bash
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
# Mount the file systems.
mount -a
If you're using NFS on Azure NetApp Files, use the following instructions to prepare the
SAP directories on the SAP application server VMs:
Bash
Bash
Bash
Bash
3. [A] Update the SAP HANA secure store to point to the virtual name of the SAP
HANA system replication setup.
Bash
hdbuserstore List
The command should list all entries and should look similar to this example.
Bash
KEY DEFAULT
ENV : 10.27.0.4:30313
USER: SAPABAP1
DATABASE: NW1
In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual host name of the load balancer.
Be sure to use the same port and database name. For example, use 30313 and NW1
in the sample output.
Bash
su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP schema>
Next steps
HA for SAP NetWeaver on Azure VMs on SLES for SAP applications multi-SID guide
SAP workload configurations with Azure availability zones
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
High Availability of SAP HANA on Azure VMs
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server with NFS on Azure Files
Article • 02/05/2024
This article describes how to deploy and configure VMs, install the cluster framework,
and install an HA SAP NetWeaver system, using NFS on Azure Files. The example
configurations use VMs that run on SUSE Linux Enterprise Server (SLES).
For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in a simple mount configuration. The classic
Pacemaker configuration, based on cluster-controlled file systems for the SAP central
services directories, which this article describes, is still supported.
Prerequisites
Azure Files documentation.
SAP Note 1928533 , which has:
List of Azure VM sizes that are supported for the deployment of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software, and operating system (OS) and database
combinations.
Required SAP kernel version for Windows and Linux on Microsoft Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 2578899 has general information about SUSE Linux Enterprise Server
15.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux.
Azure Virtual Machines deployment for SAP on Linux.
Azure Virtual Machines DBMS deployment for SAP on Linux.
SUSE SAP HA Best Practice Guides . The guides contain all required information
to set up Netweaver HA and SAP HANA System Replication on-premises. Use these
guides as a general baseline. They provide much more detailed information.
SUSE High Availability Extension Release Notes .
Overview
To deploy the SAP NetWeaver application layer, you need shared directories like
/sapmnt/SID and /usr/sap/trans in the environment. Additionally, when deploying an
HA SAP system, you need to protect and make highly available file systems like
/sapmnt/SID and /usr/sap/SID/ASCS .
Now you can place these file systems on NFS on Azure Files. NFS on Azure Files is an HA
storage solution that offers synchronous zone-redundant storage (ZRS) and is
suitable for SAP ASCS/ERS instances deployed across availability zones. You still need a
Pacemaker cluster to protect single-point-of-failure components like the SAP NetWeaver
central services (ASCS/SCS).
The example configurations and installation commands use the following instance
numbers:
Instance name    Instance number
ASCS             00
ERS              01
Deploy virtual machines with the SLES for SAP Applications image. Choose a suitable
version of the SLES image that is supported for your SAP system. You can deploy the VMs
in any of the availability options: virtual machine scale set, availability zone, or
availability set.
Configure Azure load balancer
During VM configuration, you have the option to create or select an existing load
balancer in the networking section. Follow the steps below to configure a standard load
balancer for the high-availability setup of SAP ASCS and SAP ERS.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points:
1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .
Locally redundant storage (LRS), which offers local, in-zone synchronous data
replication.
Zone redundant storage (ZRS), which replicates your data synchronously across the
three availability zones in the region.
Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate
redundancy. Review the availability of Azure Files by Azure region under Premium
Files Storage. If your scenario benefits from ZRS, verify that Premium File shares with
ZRS are supported in your Azure region.
It's recommended to access your Azure Storage account through an Azure Private
Endpoint. Make sure to deploy the Azure Files storage account endpoint and the VMs,
where you need to mount the NFS shares, in the same Azure VNet or peered Azure
VNets.
1. Deploy a File Storage account named sapafsnfs . In this example, we use ZRS. If
you're not familiar with the process, see Create a storage account for the Azure
portal.
2. In the Basics tab, use these settings:
a. For Storage account name, enter sapafsnfs .
b. For Performance, select Premium.
c. For Premium account type, select FileStorage.
d. For Replication, select zone-redundant storage (ZRS).
3. Select Next.
4. In the Advanced tab, deselect Require secure transfer for REST API Operations. If
you don't deselect this option, you can't mount the NFS share to your VM. The
mount operation will time out.
5. Select Next.
6. In the Networking section, configure these settings:
a. Under Networking connectivity, for Connectivity method, select Private
endpoint.
b. Under Private endpoint, select Add private endpoint.
7. In the Create private endpoint pane, select your Subscription, Resource group,
and Location. For Name, enter sapafsnfs_pe . For Storage sub-resource, select file.
Under Networking, for Virtual network, select the VNet and subnet to use. Again,
you can use the VNet where your SAP VMs are, or a peered VNet. Under Private
DNS integration, accept the default option Yes for Integrate with private DNS
zone. Make sure to select your Private DNS Zone. Select OK.
8. On the Networking tab again, select Next.
9. On the Data protection tab, keep all the default settings.
10. Select Review + create to validate your configuration.
11. Wait for the validation to finish. Fix any issues before continuing.
12. On the Review + create tab, select Create.
Next, deploy the NFS shares in the storage account you created. In this example, there
are two NFS shares, sapnw1 and saptrans .
4. On the resource menu for sapafsnfs, select File shares under Data storage.
) Important
The share size above is just an example. Make sure to size your shares
appropriately, not only based on the size of the data stored on the share,
but also based on the requirements for IOPS and throughput. For details,
see Azure file share targets.
The SAP file systems that don't need to be mounted via NFS can also be deployed
on Azure disk storage. In this example, you can deploy /usr/sap/NW1/D02 and
/usr/sap/NW1/D03 on Azure disk storage.
The minimum share size is 100 GiB. You only pay for the capacity of the
provisioned shares.
Size your NFS shares not only based on capacity requirements, but also on IOPS
and throughput requirements. For details see Azure file share targets.
Test the workload to validate your sizing and ensure that it meets your
performance targets. To learn how to troubleshoot performance issues on Azure
Files, consult Troubleshoot Azure file shares performance.
For SAP J2EE systems, it's not supported to place /usr/sap/<SID>/J<nr> on NFS on
Azure Files.
If your SAP system has a heavy batch job load, you might have millions of job logs.
If the SAP batch job logs are stored in the file system, pay special attention to the
sizing of the sapmnt share. As of SAP_BASIS 7.52, the default behavior is to store
batch job logs in the database. For details, see Job log in the database .
Deploy a separate sapmnt share for each SAP system.
Don't use the sapmnt share for any other activity, such as interfaces, or saptrans .
Don't use the saptrans share for any other activity, such as interfaces, or sapmnt .
Avoid consolidating the shares for too many SAP systems in a single storage
account. There are also Storage account performance scale targets. Be careful to
not exceed the limits for the storage account, too.
In general, don't consolidate the shares for more than 5 SAP systems in a single
storage account. This guideline helps avoid exceeding the storage account limits
and simplifies performance analysis.
In general, avoid mixing shares like sapmnt for non-production and production
SAP systems in the same storage account.
We recommend deploying on SLES 15 SP2 or higher to benefit from NFS client
improvements.
Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions
automatically redirect to a healthy zone. You don't have to remount the NFS shares
on your VMs.
If you're deploying your VMs across availability zones, use a storage account with
ZRS in the Azure regions that support ZRS.
Azure Files doesn't currently support automatic cross-region replication for
disaster recovery scenarios.
Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.
7 Note
The known issue with using a dash in host names is fixed with version 3.1.1 of
the sap-suse-cluster-connector package. Make sure that you're using at least
version 3.1.1 of the sap-suse-cluster-connector package if your cluster nodes
have a dash in their host names. Otherwise, your cluster won't work.
Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector, and the new one is called sap-
suse-cluster-connector.
A patch for the resource-agents package is required to use the new configuration
that is described in this article. You can check whether the patch is already installed
with the following command:
Bash
Bash
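The check command isn't shown in this extract. A sketch, assuming the standard location of the SAPInstance resource agent on SLES:

```shell
# Look for the IS_ERS parameter in the SAPInstance resource agent
sudo grep 'IS_ERS' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
```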
If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page.
You can either use a DNS server or modify /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
host name in the following commands.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and host name to
match your environment.
Bash
Bash
Bash
sudo mkdir -p /sapmnt/NW1
sudo mkdir -p /usr/sap/trans
sudo mkdir -p /usr/sap/NW1/SYS
sudo mkdir -p /usr/sap/NW1/ASCS00
sudo mkdir -p /usr/sap/NW1/ERS01
2. [A] Mount the file systems that will not be controlled by the Pacemaker cluster.
Bash
vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
Bash
sudo vi /etc/waagent.conf
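The waagent.conf changes aren't included in this extract. In these guides, this step typically configures a swap file on the resource disk; a sketch assuming the standard Azure Linux agent parameters (the swap size is an example):

```shell
# /etc/waagent.conf - enable a swap file on the resource disk
# (file content shown as comments)
# ResourceDisk.EnableSwap=y
# ResourceDisk.SwapSizeMB=2000

# Restart the agent so the change takes effect
sudo service waagent restart
```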
) Important
Bash
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
Install SAP NetWeaver ASCS as root on the first node by using a virtual host name that
maps to the IP address of the load balancer frontend configuration for the ASCS
(for example, sapascs, 10.90.90.10) and the instance number that you used for the
probe of the load balancer (for example, 00).
Bash
Bash
3. [1] Create a virtual IP resource and health-probe for the ERS instance
Bash
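The resource definitions aren't shown in this extract. A sketch using the resource names from the cluster output in this article and the ERS frontend IP from this example; the probe port 62101 is an assumption following the 621&lt;instance number&gt; convention used in these guides:

```shell
# Virtual IP for the ERS instance (frontend IP from this example)
sudo crm configure primitive vip_NW1_ERS IPaddr2 \
  params ip=10.90.90.9 \
  op monitor interval=10 timeout=20

# Health probe resource for the Azure load balancer (assumed probe port)
sudo crm configure primitive nc_NW1_ERS azure-lb port=62101 \
  op monitor timeout=20s interval=10

# Group the ERS resources together
sudo crm configure group g-NW1_ERS nc_NW1_ERS vip_NW1_ERS
```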
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
Install SAP NetWeaver ERS as root on the second node by using a virtual host name
that maps to the IP address of the load balancer frontend configuration for the
ERS (for example, sapers, 10.90.90.9) and the instance number that you used for the
probe of the load balancer (for example, 01).
<swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
7 Note
Bash
ASCS/SCS profile
Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set
as described in SAP Note 1410736 .
ERS profile
Bash
sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
# Change the restart command to a start command
#Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this
disconnection, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile
if you're using ENSA1. Change the Linux system keepalive settings on all SAP servers
for both ENSA1 and ENSA2. For more information, read SAP Note 1410736 .
Bash
Bash
8. [1] Add the ASCS and ERS SAP services to the sapservices file.
Add the ASCS service entry to the second node, and copy the ERS service entry to
the first node.
Bash
ENSA1
Bash
If you're upgrading from an older version and switching to enqueue server 2 (ENSA2),
see SAP Note 2641019 .
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
# Full list of resources:
#
# stonith-sbd (stonith:external/sbd): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
# Resource Group: g-NW1_ERS
# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started sap-cl2
# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl2
The steps below assume that you install the application server on a server different from
the ASCS/SCS and HANA servers. Otherwise, some of the steps below (like configuring
host name resolution) aren't needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] -
only applicable to PAS or [S] - only applicable to AAS.
Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .
Bash
sudo vi /etc/sysctl.conf
# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
You can either use a DNS server or modify /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
host name in the following commands.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and host name to
match your environment.
Bash
10.90.90.7 sap-cl1
10.90.90.8 sap-cl2
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
10.90.90.10 sapascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.90.90.9 sapers
10.90.90.12 sapa01
10.90.90.13 sapa02
Bash
Bash
vi /etc/fstab
# Add the following lines to fstab, save and exit
sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sudo vi /etc/waagent.conf
Bash
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .
Install the SAP NetWeaver database instance as root using a virtual hostname that maps
to the IP address of the load balancer frontend configuration for the database.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root
user to connect to sapinst.
Bash
1. [A] Prepare the application server. Follow the steps in the chapter SAP NetWeaver
application server preparation above to prepare the application server.
2. [A] Install SAP NetWeaver application server.
Install a primary or additional SAP NetWeaver application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
Bash
hdbuserstore List
The command should list all entries and should look similar to this example.
Bash
KEY DEFAULT
ENV : 10.90.90.5:30313
USER: SAPABAP1
DATABASE: NW1
In this example, the IP address of the default entry points to the VM, not the load
balancer. Change the entry to point to the virtual hostname of the load balancer.
Make sure to use the same port and database name. For example, 30313 and NW1
in the sample output.
Bash
su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP schema>
Test cluster setup
Thoroughly test your Pacemaker cluster. Execute the typical failover tests.
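As an illustration, a typical failover test puts one node into standby and verifies that the resources migrate to the other node; a sketch using the node names from this example:

```shell
# Put the node currently running the ASCS into standby to force a failover
sudo crm node standby sap-cl1

# Watch the resources move to the other node
sudo crm_mon -r

# Bring the node back online afterward
sudo crm node online sap-cl1
```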
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server with Azure NetApp Files for SAP
applications
Article • 01/18/2024
This article explains how to configure high availability for SAP NetWeaver application
with Azure NetApp Files.
For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in a simple mount configuration. The classic
Pacemaker configuration, based on cluster-controlled file systems for the SAP central
services directories, which this article describes, is still supported.
In the example configurations, installation commands, and so on, the ASCS instance
number is 00, the ERS instance number is 01, the Primary Application Server (PAS)
instance is 02, and the Additional Application Server (AAS) instance is 03. The SAP
system ID QAS is used. The database layer isn't covered in detail in this article.
Overview
High availability (HA) for SAP NetWeaver central services requires shared storage. Until
now, achieving this on SUSE Linux required building a separate highly available NFS
cluster.
Now it's possible to achieve SAP NetWeaver HA by using shared storage deployed on
Azure NetApp Files. Using Azure NetApp Files for the shared storage eliminates the
need for an additional NFS cluster. Pacemaker is still needed for HA of the SAP NetWeaver
central services (ASCS/SCS).
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA
database use virtual hostname and virtual IP addresses. On Azure, a load balancer is
required to use a virtual IP address. We recommend using Standard load balancer. The
presented configuration shows a load balancer with:
1. Create the NetApp account in the selected Azure region, following the instructions
to create NetApp Account.
2. Set up Azure NetApp Files capacity pool, following the instructions on how to set
up Azure NetApp Files capacity pool.
The SAP NetWeaver architecture presented in this article uses a single Azure NetApp
Files capacity pool with the Premium SKU. We recommend the Azure NetApp Files
Premium SKU for SAP NetWeaver application workloads on Azure.
3. Delegate a subnet to Azure NetApp files as described in the instructions Delegate
a subnet to Azure NetApp Files.
4. Deploy Azure NetApp Files volumes, following the instructions to create a volume
for Azure NetApp Files. Deploy the volumes in the designated Azure NetApp Files
subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
the same Azure Virtual Network or in peered Azure Virtual Networks. In this
example we use two Azure NetApp Files volumes: sapQAS and trans. The file paths
that are mounted to the corresponding mount points are /usrsapqas/sapmntQAS,
/usrsapqas/usrsapQASsys, etc.
a. volume sapQAS (nfs://10.1.0.4/usrsapqas/sapmntQAS)
b. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASascs)
c. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASsys)
d. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASers)
e. volume trans (nfs://10.1.0.4/trans)
f. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASpas)
g. volume sapQAS (nfs://10.1.0.4/usrsapqas/usrsapQASaas)
In this example, we used Azure NetApp Files for all SAP Netweaver file systems to
demonstrate how Azure NetApp Files can be used. The SAP file systems that don't need
to be mounted via NFS can also be deployed as Azure disk storage . In this example a-e
must be on Azure NetApp Files and f-g (that is, /usr/sap/QAS/D02, /usr/sap/QAS/D03)
could be deployed as Azure disk storage.
Important considerations
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availability
architecture, be aware of the following important considerations:
The minimum capacity pool size is 4 TiB. The capacity pool size can be increased in
1-TiB increments.
The minimum volume size is 100 GiB.
Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will
be mounted must be in the same Azure virtual network or in peered virtual
networks in the same region. Azure NetApp Files access over VNet peering in the
same region is supported. Azure NetApp Files access over global peering isn't yet
supported.
The selected virtual network must have a subnet, delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
is a function of the volume quota and service level, as documented in Service level
for Azure NetApp Files. While sizing the SAP Azure NetApp volumes, make sure
that the resulting throughput meets the application requirements.
Azure NetApp Files offers export policy: you can control the allowed clients, the
access type (Read&Write, Read Only, etc.).
The Azure NetApp Files feature isn't zone aware yet. Currently, the Azure NetApp Files
feature isn't deployed in all availability zones in an Azure region. Be aware of the
potential latency implications in some Azure regions.
Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both
protocols are supported for the SAP application layer (ASCS/ERS, SAP application
servers).
Prepare infrastructure
The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.
Deploy Linux VMs manually via Azure portal
This document assumes that you've already deployed a resource group, Azure Virtual
Network, and subnet.
Deploy virtual machines with the SLES for SAP Applications image. Choose a suitable
version of the SLES image that is supported for your SAP system. You can deploy the VMs
in any of the availability options: virtual machine scale set, availability zone, or
availability set.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a high-
availability SAP system using the Azure portal. During the setup of the load balancer,
consider the following points:
1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .
1. Verify the NFS domain setting. Make sure that the domain is configured as the
default Azure NetApp Files domain, that is, defaultv4iddomain.com, and that the
mapping is set to nobody.
) Important
If there's a mismatch between the NFS domain configuration on the NFS client
(that is, the VM) and the NFS server (that is, the Azure NetApp Files
configuration), then the permissions for files on Azure NetApp volumes that are
mounted on the VMs are displayed as nobody.
Bash
# Example
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = defaultv4iddomain.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Bash
# Check nfs4_disable_idmapping
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
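If ID mapping needs to be disabled for NFSv4.1, the usual approach in these guides is to set the kernel module parameter; a sketch (verify the parameter name and the persistence file against your kernel and distribution):

```shell
# Disable NFSv4 ID mapping on the NFS client
sudo bash -c 'echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping'

# Make the setting persistent across reboots
sudo bash -c 'echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf'
```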
Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1 or [2] - only applicable to node 2.
Bash
7 Note
The known issue with using a dash in host names is fixed with version 3.1.1 of
the sap-suse-cluster-connector package. Make sure that you're using at least
version 3.1.1 of the sap-suse-cluster-connector package if your cluster nodes
have a dash in their host names. Otherwise, your cluster won't work.
Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector, and the new one is called sap-
suse-cluster-connector.
Bash
A patch for the resource-agents package is required to use the new configuration
that is described in this article. You can check whether the patch is already installed
with the following command:
Bash
text
If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page.
Bash
You can either use a DNS server or modify /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
host name in the following commands.
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and host name to
match your environment.
text
Temporarily mount the Azure NetApp Files volume on one of the VMs and create
the SAP directories (file paths).
Bash
Bash
Bash
sudo vi /etc/auto.master
Bash
sudo vi /etc/auto.direct
Bash
sudo vi /etc/auto.direct
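The autofs configuration content isn't included in this extract. A sketch of the direct-map entries, assuming the NFSv3 volumes and mount paths used in this example:

```shell
# /etc/auto.master - reference a direct map (file content as comments)
# +auto.master
# /- /etc/auto.direct

# /etc/auto.direct - mount the Azure NetApp Files volumes (NFSv3 example)
# /sapmnt/QAS -nfsvers=3,nobind 10.1.0.4:/usrsapqas/sapmntQAS
# /usr/sap/trans -nfsvers=3,nobind 10.1.0.4:/trans

# Restart autofs to pick up the new maps
sudo systemctl restart autofs
```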
7 Note
Make sure to match the NFS protocol version of the Azure NetApp Files
volumes, when mounting the volumes. If the Azure NetApp Files volumes are
created as NFSv3 volumes, use the corresponding NFSv3 configuration. If the
Azure NetApp Files volumes are created as NFSv4.1 volumes, follow the
instructions to disable ID mapping and make sure to use the corresponding
NFSv4.1 configuration. In this example the Azure NetApp Files volumes were
created as NFSv3 volumes.
Bash
Bash
sudo vi /etc/waagent.conf
Bash
sudo service waagent restart
) Important
Bash
# If using NFSv3
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s

# If using NFSv4.1
sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas/usrsapQASascs' \
  directory='/usr/sap/QAS/ASCS00' fstype='nfs' options='sec=sys,nfsvers=4.1' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=105s
Make sure that the cluster status is OK and that all resources are started. It isn't
important which node the resources are running on.
Bash
sudo crm_mon -r
Install SAP NetWeaver ASCS as root on the first node by using a virtual host name that
maps to the IP address of the load balancer frontend configuration for the ASCS
(for example, anftstsapvh, 10.1.1.20) and the instance number that you used for the
probe of the load balancer (for example, 00).
Bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin \
SAPINST_USE_HOSTNAME=virtual_hostname
Bash
3. [1] Create a virtual IP resource and health-probe for the ERS instance.
Bash
# If using NFSv3
sudo crm configure primitive fs_QAS_ERS Filesystem \
  device='10.1.0.4:/usrsapqas/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s
# If using NFSv4.1
sudo crm configure primitive fs_QAS_ERS Filesystem \
  device='10.1.0.4:/usrsapqas/usrsapQASers' \
  directory='/usr/sap/QAS/ERS01' fstype='nfs' \
  options='sec=sys,nfsvers=4.1' \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=105s
Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.
Bash
sudo crm_mon -r
Install SAP NetWeaver ERS as root on the second node using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, for example anftstsapers, 10.1.1.21 and the instance number that you used for
the probe of the load balancer, for example 01.
Bash
7 Note
Bash
chown qasadm /usr/sap/QAS/ERS01
chgrp sapsys /usr/sap/QAS/ERS01
ASCS/SCS profile
Bash
sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 .
ERS profile
Bash
sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this,
you need to set a parameter in the SAP NetWeaver ASCS/SCS profile if using
ENSA1, and change the Linux system keepalive settings on all SAP servers for
both ENSA1 and ENSA2. Read SAP Note 1410736 for more information.
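As a sketch, the keepalive-related settings end up in two places: the enqueue keepalive parameter in the ASCS/SCS instance profile (ENSA1 only) and the OS-level TCP keepalive parameters in /etc/sysctl.conf. The parameter names below are the usual ones; the numeric values are examples only, so take the actual values from SAP Note 1410736 rather than from this sketch:

```text
# ASCS/SCS instance profile (ENSA1 only)
enque/encni/set_so_keepalive = true

# /etc/sysctl.conf - example values; use the values from SAP Note 1410736
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 75
```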
Bash
Bash
8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to
the first node.
Bash
ENSA1
Bash
# If using NFSv3
sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
 operations \$id=rsc_sap_QAS_ASCS00-operations \
 op monitor interval=11 timeout=60 on-fail=restart \
 params InstanceName=QAS_ASCS00_anftstsapvh \
 START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
 AUTOMATIC_RECOVER=false \
 meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
# If using NFSv4.1
sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
 operations \$id=rsc_sap_QAS_ASCS00-operations \
 op monitor interval=11 timeout=105 on-fail=restart \
 params InstanceName=QAS_ASCS00_anftstsapvh \
 START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
 AUTOMATIC_RECOVER=false \
 meta resource-stickiness=5000 failure-timeout=105 migration-threshold=1 priority=10
# If using NFSv3
sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
 operations \$id=rsc_sap_QAS_ERS01-operations \
 op monitor interval=11 timeout=60 on-fail=restart \
 params InstanceName=QAS_ERS01_anftstsapers \
 START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
 AUTOMATIC_RECOVER=false IS_ERS=true \
 meta priority=1000
# If using NFSv4.1
sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
 operations \$id=rsc_sap_QAS_ERS01-operations \
 op monitor interval=11 timeout=105 on-fail=restart \
 params InstanceName=QAS_ERS01_anftstsapers \
 START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
 AUTOMATIC_RECOVER=false IS_ERS=true \
 meta priority=1000
If you're upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .
7 Note
The higher timeouts suggested for NFSv4.1 are necessary because of a
protocol-specific pause related to NFSv4.1 lease renewals. For more
information, see NFS in NetApp Best practice .
The timeouts in the preceding configuration might need to be adapted to the
specific SAP setup.
Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.
Bash
sudo crm_mon -r
The steps below assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps below (like
configuring host name resolution) aren't needed.
The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] -
only applicable to PAS or [S] - only applicable to AAS.
Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .
Bash
sudo vi /etc/sysctl.conf
You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment
text
Bash
Bash
Bash
Bash
sudo vi /etc/auto.master
Bash
sudo vi /etc/auto.direct
Bash
sudo vi /etc/auto.direct
# Add the following lines to the file, save and exit
/sapmnt/QAS -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/sapmntQAS
/usr/sap/trans -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/trans
/usr/sap/QAS/D02 -nfsvers=4.1,nobind,sec=sys 10.1.0.4:/usrsapqas/usrsapQASpas
Bash
Bash
sudo vi /etc/auto.master
Bash
sudo vi /etc/auto.direct
Bash
sudo vi /etc/auto.direct
Bash
sudo systemctl enable autofs
sudo service autofs restart
Bash
sudo vi /etc/waagent.conf
Bash
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .
Install the SAP NetWeaver database instance as root using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the
database.
Bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
1. [A] Prepare application server
Follow the steps in the chapter SAP NetWeaver application server preparation
above to prepare the application server.
2. [A] Install SAP NetWeaver application server
Install a primary or additional SAP NetWeaver application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
Bash
hdbuserstore List
text
KEY DEFAULT
ENV : 10.1.1.5:30313
USER: SAPABAP1
DATABASE: QAS
The output shows that the IP address of the default entry is pointing to the virtual
machine and not to the load balancer's IP address. This entry needs to be changed
to point to the virtual hostname of the load balancer. Make sure to use the same
port (30313 in the output above) and database name (QAS in the output above)!
Bash
su - qasadm
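The SET command itself wasn't preserved in this copy. By analogy with the NW1 example later in this document, it would take the following shape; the database virtual hostname is a placeholder, and the port and database name must match the hdbuserstore List output above:

```text
hdbuserstore SET DEFAULT <virtual-db-hostname>:30313@QAS SAPABAP1 <password of ABAP schema>
hdbuserstore List
```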
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise
Server for SAP applications
Article • 01/18/2024
This article describes how to deploy and configure the virtual machines, install the
cluster framework, and install a highly available SAP NetWeaver or SAP ABAP
platform based system. In the example configurations, ASCS instance number 00, ERS
instance number 02, and SAP system ID NW1 are used.
For new implementations on SLES for SAP Applications 15, we recommend deploying
high availability for SAP ASCS/ERS in the simple mount configuration. The classic
Pacemaker configuration described in this article, based on cluster-controlled file
systems for the SAP central services directories, is still supported.
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is
configured in a separate cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and
the SAP HANA database use virtual hostnames and virtual IP addresses. On Azure, a
load balancer is required to use a virtual IP address. We recommend using a Standard
load balancer. The presented configuration shows a load balancer with:
Frontend IP address 10.0.0.7 for ASCS
Frontend IP address 10.0.0.8 for ERS
Probe port 62000 for ASCS
Probe port 62101 for ERS
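The probe ports in these examples follow a naming convention that the guides commonly use rather than anything Azure mandates: 620 followed by the ASCS instance number, and 621 followed by the ERS instance number. A minimal sketch, assuming that convention:

```shell
# Probe-port convention (an assumption of these examples, not an Azure
# requirement): 620<instance number> for ASCS, 621<instance number> for ERS.
ascs_probe_port() { printf '620%s\n' "$1"; }
ers_probe_port()  { printf '621%s\n' "$1"; }

ascs_probe_port 00   # prints 62000
ers_probe_port 01    # prints 62101
```

Keeping the instance number inside the port makes it easy to see which load balancing rule belongs to which instance when several SAP systems share a cluster.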
7 Note
We recommend deploying one of the Azure first-party NFS services, NFS on Azure
Files or NFS on Azure NetApp Files volumes, for storing shared data in a highly
available SAP system. Be aware that we're de-emphasizing SAP reference
architectures that utilize NFS clusters.
The SAP configuration guides for SAP NW highly available SAP system with native
NFS services are:
High availability SAP NW on Azure VMs with simple mount and NFS on SLES
for SAP Applications
High availability for SAP NW on Azure VMs with NFS on Azure Files on SLES
for SAP Applications
High availability for SAP NW on Azure VMs with NFS on Azure NetApp Files
on SLES for SAP Applications
SAP NetWeaver requires shared storage for the transport and profile directory. Read
High availability for NFS on Azure VMs on SUSE Linux Enterprise Server on how to set
up an NFS server for SAP NetWeaver.
Prepare infrastructure
The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP
Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is
available in Azure Marketplace. You can use the image to deploy new VMs.
Azure portal
Follow the create load balancer guide to set up a standard load balancer for a highly
available SAP system by using the Azure portal. During the setup of the load balancer,
consider the following points.
1. Frontend IP Configuration: Create two frontend IPs, one for ASCS and another
for ERS. Select the same virtual network and subnet as your ASCS/ERS virtual
machines.
2. Backend Pool: Create a backend pool and add the ASCS and ERS VMs.
3. Inbound rules: Create two load balancing rules, one for ASCS and another for
ERS. Follow the same steps for both load balancing rules.
7 Note
) Important
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .
Setting up (A)SCS
Next, you'll prepare and install the SAP ASCS and ERS instances.
Installation
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1, or [2] - only applicable to node 2.
Bash
7 Note
The known issue with using a dash in host names is fixed with version 3.1.1 of
the sap-suse-cluster-connector package. If you're using cluster nodes with a
dash in the host name, make sure that you use at least version 3.1.1 of that
package. Otherwise, your cluster won't work.
Make sure that you installed the new version of the SAP SUSE cluster connector.
The old one was called sap_suse_cluster_connector and the new one is called sap-
suse-cluster-connector.
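A quick way to verify that the installed connector meets the 3.1.1 minimum is a `sort -V` comparison. This is a sketch; the package query itself is omitted, so pass in the version string reported by `rpm -q --qf '%{VERSION}' sap-suse-cluster-connector`:

```shell
# version_ge A B succeeds when version A >= version B, using GNU sort -V
# ordering. Use it to compare the installed connector version against 3.1.1.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "3.1.1" "3.1.1" && echo "3.1.1 is new enough"
version_ge "3.0.1" "3.1.1" || echo "3.0.1 is too old"
```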
Bash
Bash
text
If the grep command doesn't find the IS_ERS parameter, you need to install the
patch listed on the SUSE download page .
Bash
You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.
Bash
sudo vi /etc/hosts
Bash
Bash
sudo vi /etc/auto.master
Bash
sudo vi /etc/auto.direct
Bash
sudo systemctl enable autofs
sudo service autofs restart
Create a swap file as defined in Create a SWAP file for an Azure Linux VM.
Bash
#!/bin/sh
# /mnt and the 2 GiB size are example values - adjust to your environment.
LOCATION=/mnt
if [ ! -f ${LOCATION}/swapfile ]
then
    fallocate --length 2GiB ${LOCATION}/swapfile
    mkswap ${LOCATION}/swapfile
fi
# Enable swap
/sbin/swapon ${LOCATION}/swapfile
/sbin/swapon -a
Bash
chmod +x /var/lib/cloud/scripts/per-boot/swap.sh
Stop and start the VM. Stopping and starting the VM is only necessary the first
time after you create the SWAP file.
) Important
Bash
Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.
Bash
sudo crm_mon -r
Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the ASCS,
for example nw1-ascs, 10.0.0.7 and the instance number that you used for the
probe of the load balancer, for example 00.
Bash
Bash
3. [1] Create a virtual IP resource and health-probe for the ERS instance
Bash
Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.
Bash
sudo crm_mon -r
Install SAP NetWeaver ERS as root on the second node using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, for example nw1-aers, 10.0.0.8 and the instance number that you used for the
probe of the load balancer, for example 02.
Bash
7 Note
Bash
ASCS/SCS profile
Bash
sudo vi /sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are
set as described in SAP note 1410736 .
ERS profile
Bash
sudo vi /sapmnt/NW1/profile/NW1_ERS02_nw1-aers
The communication between the SAP NetWeaver application server and the
ASCS/SCS is routed through a software load balancer. The load balancer
disconnects inactive connections after a configurable timeout. To prevent this,
you need to set a parameter in the SAP NetWeaver ASCS/SCS profile if using
ENSA1, and change the Linux system keepalive settings on all SAP servers for
both ENSA1 and ENSA2. Read SAP Note 1410736 for more information.
Bash
Bash
8. [1] Add the ASCS and ERS SAP services to the sapservice file
Add the ASCS service entry to the second node and copy the ERS service entry to
the first node.
Bash
ENSA1
Bash
If you're upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .
Make sure that the cluster status is ok and that all resources are started. It isn't
important on which node the resources are running.
Bash
sudo crm_mon -r
The steps below assume that you install the application server on a server different
from the ASCS/SCS and HANA servers. Otherwise, some of the steps below (like
configuring host name resolution) aren't needed.
Reduce the size of the dirty cache. For more information, see Low write
performance on SLES 11/12 servers with large RAM .
Bash
sudo vi /etc/sysctl.conf
You can either use a DNS server or modify the /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands
Bash
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to
match your environment
text
Bash
Bash
sudo vi /etc/auto.master
Bash
sudo vi /etc/auto.direct
Bash
Bash
sudo vi /etc/waagent.conf
Bash
sudo service waagent restart
Install database
In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported
database for this installation. For more information on how to install SAP HANA in
Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of
supported databases, see SAP Note 1928533 .
Install the SAP NetWeaver database instance as root using a virtual hostname that
maps to the IP address of the load balancer frontend configuration for the
database, for example, nw1-db and 10.0.0.13.
Bash
Follow the steps in the chapter SAP NetWeaver application server preparation
above to prepare the application server.
Bash
Update the SAP HANA secure store to point to the virtual name of the SAP HANA
System Replication setup.
Bash
hdbuserstore List
text
KEY DEFAULT
ENV : 10.0.0.14:30313
USER: SAPABAP1
DATABASE: HN1
The output shows that the IP address of the default entry is pointing to the virtual
machine and not to the load balancer's IP address. This entry needs to be changed
to point to the virtual hostname of the load balancer. Make sure to use the same
port (30313 in the output above) and database name (HN1 in the output above)!
Bash
su - nw1adm
hdbuserstore SET DEFAULT nw1-db:30313@HN1 SAPABAP1 <password of ABAP schema>
Bash
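The command that produced the following output wasn't preserved in this copy. The function names in the output (HAGetFailoverConfig, HACheckConfig, HACheckFailoverConfig) are sapcontrol HA functions, typically invoked like this sketch, run as <sapsid>adm with this example's instance number 00:

```text
sapcontrol -nr 00 -function HAGetFailoverConfig
sapcontrol -nr 00 -function HACheckConfig
sapcontrol -nr 00 -function HACheckFailoverConfig
```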
# 15.08.2018 13:50:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: Toolchain Module
# HASAPInterfaceVersion: Toolchain Module (sap_suse_cluster_connector
3.0.1)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-
library/sap-best-practices/
# HAActiveNode:
# HANodes: nw1-cl-0, nw1-cl-1
# 15.08.2018 14:00:04
# HACheckConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2
ABAP instances detected
# SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0
Java instances detected
# SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server
separated from application server
# SUCCESS, SAP CONFIGURATION, MessageServer separation, All
MessageServer separated from application server
# SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP
instances on multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service
configuration, 2 ABAP instances with SPOOL service detected
# SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP
instances with active SPOOL service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on
multiple hosts, ABAP instances with active ABAP SPOOL service on
multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service
configuration, 2 ABAP instances with BATCH service detected
# SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP
instances with active BATCH service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on
multiple hosts, ABAP instances with active ABAP BATCH service on
multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service
configuration, 2 ABAP instances with DIALOG service detected
# SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP
instances with active DIALOG service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on
multiple hosts, ABAP instances with active ABAP DIALOG service on
multiple hosts detected
# SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service
configuration, 2 ABAP instances with UPDATE service detected
# SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP
instances with active UPDATE service detected
# SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on
multiple hosts, ABAP instances with active ABAP UPDATE service on
multiple hosts detected
# SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (nw1-
ascs_NW1_00), SAPInstance includes is-ers patch
# SUCCESS, SAP CONFIGURATION, Enqueue replication (nw1-ascs_NW1_00),
Enqueue replication enabled
# SUCCESS, SAP STATE, Enqueue replication state (nw1-ascs_NW1_00),
Enqueue replication active
# 15.08.2018 14:04:08
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version,
SAPInstance includes is-ers patch
Bash
Bash
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
Bash
3. Test HAFailoverToNode
Bash
Bash
# run as root
# Remove failed actions for the ERS that occurred as part of the migration
nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
# Remove migration constraints
nw1-cl-0:~ # crm resource clear rsc_sap_NW1_ASCS00
#INFO: Removed migration constraints for rsc_sap_NW1_ASCS00
Bash
Run the following command as root on the node where the ASCS instance is
running
Bash
If you use SBD, Pacemaker shouldn't automatically start on the killed node. The
status after the node is started again should look like this.
Bash
Online: [ nw1-cl-1 ]
OFFLINE: [ nw1-cl-0 ]
Failed Actions:
* rsc_sap_NW1_ERS02_monitor_11000 on nw1-cl-1 'not running' (7):
call=219, status=complete, exitreason='none',
last-rc-change='Wed Aug 15 14:38:38 2018', queued=0ms, exec=0ms
Use the following commands to start Pacemaker on the killed node, clean the SBD
messages, and clean the failed resources.
Bash
# run as root
# list the SBD device(s)
nw1-cl-0:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
Bash
Bash
Bash
When cluster nodes can't communicate with each other, there's a risk of a split-brain
scenario. In such situations, cluster nodes try to fence each other simultaneously,
resulting in a fence race.
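One way to provoke this situation for testing is to block the traffic between the nodes. As a sketch, with the peer address 10.0.0.6 matching the removal command shown below, and `RUN="echo"` keeping it a dry run:

```shell
# Simulate a network split by dropping traffic to/from the other cluster
# node. PEER is an assumption; run as root on one node with RUN="" to
# actually apply the rules. The matching -D commands remove them again.
PEER=10.0.0.6
RUN="echo"
$RUN iptables -A INPUT  -s "$PEER" -j DROP
$RUN iptables -A OUTPUT -d "$PEER" -j DROP
```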
Bash
# If the iptables rule set on the server gets reset after a reboot, the rules
# will be cleared out. In case they have not been reset, remove the iptables
# rule using the following command.
iptables -D INPUT -s 10.0.0.6 -j DROP; iptables -D OUTPUT -d 10.0.0.6 -j DROP
Bash
Create an enqueue lock by, for example, editing a user in transaction su01. Run the
following commands as <sapsid>adm on the node where the ASCS instance is
running. The commands stop the ASCS instance and start it again. If using the
enqueue server 1 architecture, the enqueue lock is expected to be lost in this test.
If using the enqueue server 2 architecture, the enqueue is retained.
Bash
nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2
Bash
Bash
The enqueue lock of transaction su01 should be lost and the back-end should have
been reset. Resource state after the test:
Bash
Bash
Run the following commands as root to identify the process of the message server
and kill it.
Bash
If you only kill the message server once, sapstart restarts it. If you kill it
often enough, Pacemaker eventually moves the ASCS instance to the other
node in the case of ENSA1. Run the following commands as root to clean up the
resource state of the ASCS and ERS instances after the test.
Bash
Bash
Bash
Run the following commands as root on the node where the ASCS instance is
running to kill the enqueue server.
Bash
nw1-cl-0:~ #
#If using ENSA1
pgrep -f en.sapNW1 | xargs kill -9
#If using ENSA2
pgrep -f enq.sapNW1 | xargs kill -9
The ASCS instance should immediately fail over to the other node, in the case of
ENSA1. The ERS instance should also fail over after the ASCS instance is started.
Run the following commands as root to clean up the resource state of the ASCS
and ERS instance after the test.
Bash
Bash
Bash
Run the following command as root on the node where the ERS instance is
running to kill the enqueue replication server process.
Bash
nw1-cl-0:~ # pgrep -f er.sapNW1 | xargs kill -9
If you only run the command once, sapstart will restart the process. If you run it
often enough, sapstart will not restart the process, and the resource will be in a
stopped state. Run the following commands as root to clean up the resource state
of the ERS instance after the test.
Bash
Bash
Bash
Run the following commands as root on the node where the ASCS is running.
Bash
Bash
Next steps
HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
High availability for NFS on Azure VMs
on SUSE Linux Enterprise Server
Article • 01/18/2024
7 Note
We recommend deploying one of the Azure first-party NFS services, NFS on Azure
Files or NFS on Azure NetApp Files volumes, for storing shared data in a highly
available SAP system. Be aware that we're de-emphasizing SAP reference
architectures that utilize NFS clusters.
This article describes how to deploy the virtual machines, configure the virtual machines,
install the cluster framework, and install a highly available NFS server that can be used
to store the shared data of a highly available SAP system. This guide describes how to
set up a highly available NFS server that is used by two SAP systems, NW1 and NW2.
The names of the resources (for example, virtual machines and virtual networks) in the
example assume that you've used the SAP file server template with resource prefix
prod.
7 Note
This article contains references to terms that Microsoft no longer uses. When the
terms are removed from the software, we'll remove them from this article.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise
Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server
for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server
12.
SAP Note 1999351 has additional troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
SUSE Linux Enterprise High Availability Extension 12 SP3 best practices guides
Highly Available NFS Storage with DRBD and Pacemaker
SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is
configured in a separate cluster and can be used by multiple SAP systems.
The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP
system that uses this NFS server. On Azure, a load balancer is required to use a virtual IP
address. The presented configuration shows a load balancer with:
Deploy two virtual machines for the NFS servers. Choose a suitable SLES image that's
supported with your SAP system. You can deploy the VMs in any of the availability
options: scale set, availability zone, or availability set.
Configure Azure load balancer
Follow the create load balancer guide to configure a standard load balancer for NFS
server high availability. During the configuration of the load balancer, consider the
following points.
1. Frontend IP Configuration: Create two frontend IPs. Select the same virtual network
and subnet as your NFS server.
2. Backend Pool: Create a backend pool and add the NFS server VMs.
3. Inbound rules: Create two load balancing rules, one for NW1 and another for NW2.
Follow the same steps for both load balancing rules.
7 Note
) Important
7 Note
When VMs without public IP addresses are placed in the backend pool of internal
(no public IP address) Standard Azure load balancer, there will be no outbound
internet connectivity, unless additional configuration is performed to allow routing
to public end points. For details on how to achieve outbound connectivity see
Public endpoint connectivity for Virtual Machines using Azure Standard Load
Balancer in SAP high-availability scenarios.
) Important
Don't enable TCP time stamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , you should update saptune version to 3.1.1 or higher.
For more details, see saptune 3.1.1 – Do I Need to Update? .
You can either use a DNS server or modify /etc/hosts on all nodes. This
example shows how to use the /etc/hosts file. Replace the IP address and the
hostname in the following commands.
Bash
sudo vi /etc/hosts
Insert the following lines into /etc/hosts. Change the IP address and hostname to
match your environment.
Bash
10.0.0.4 nw1-nfs
10.0.0.5 nw2-nfs
Bash
Bash
Bash
sudo ls /dev/disk/azure/scsi1/
# Example output
# lun0 lun1
Bash
ls /dev/disk/azure/scsi1/lun*-part*
# Example output
# /dev/disk/azure/scsi1/lun0-part1 /dev/disk/azure/scsi1/lun1-part1
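The LVM commands that follow this listing weren't preserved in this copy. Judging from the DRBD resource definitions below, which reference /dev/vg-NW1-NFS/NW1 and /dev/vg-NW2-NFS/NW2, the setup would look roughly like this sketch; the LUN-to-SID mapping is an assumption, so verify the device names before running:

```text
# Sketch only - verify device names before running
sudo pvcreate /dev/disk/azure/scsi1/lun0-part1
sudo pvcreate /dev/disk/azure/scsi1/lun1-part1
sudo vgcreate vg-NW1-NFS /dev/disk/azure/scsi1/lun0-part1
sudo vgcreate vg-NW2-NFS /dev/disk/azure/scsi1/lun1-part1
sudo lvcreate -l 100%FREE -n NW1 vg-NW1-NFS
sudo lvcreate -l 100%FREE -n NW2 vg-NW2-NFS
```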
Bash
Bash
sudo vi /etc/drbd.conf
Make sure that the drbd.conf file contains the following two lines
text
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
Bash
sudo vi /etc/drbd.d/global_common.conf
text
global {
usage-count no;
}
common {
handlers {
fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
}
startup {
wfc-timeout 0;
}
options {
}
disk {
md-flushes yes;
disk-flushes yes;
c-plan-ahead 1;
c-min-rate 100M;
c-fill-target 20M;
c-max-rate 4G;
}
net {
after-sb-0pri discard-younger-primary;
after-sb-1pri discard-secondary;
after-sb-2pri call-pri-lost-after-sb;
protocol C;
tcp-cork yes;
max-buffers 20000;
max-epoch-size 20000;
sndbuf-size 0;
rcvbuf-size 0;
}
}
Bash
sudo vi /etc/drbd.d/NW1-nfs.res
Insert the configuration for the new drbd device and exit
text
resource NW1-nfs {
protocol C;
disk {
on-io-error detach;
}
net {
fencing resource-and-stonith;
}
on prod-nfs-0 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7790;
device /dev/drbd0;
disk /dev/vg-NW1-NFS/NW1;
meta-disk internal;
}
}
Bash
sudo vi /etc/drbd.d/NW2-nfs.res
Insert the configuration for the new drbd device and exit
text
resource NW2-nfs {
protocol C;
disk {
on-io-error detach;
}
net {
fencing resource-and-stonith;
}
on prod-nfs-0 {
address 10.0.0.6:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.7:7791;
device /dev/drbd1;
disk /dev/vg-NW2-NFS/NW2;
meta-disk internal;
}
}
Bash
Bash
Bash
10. [1] Wait until the new drbd devices are synchronized
Bash
Bash
When using DRBD to synchronize data from one host to another, a so-called split
brain can occur. A split brain is a scenario where both cluster nodes promoted the
DRBD device to be the primary and went out of sync. It might be a rare situation,
but you still want to handle and resolve a split brain as fast as possible, so it's
important to be notified when a split brain has happened.
Read the official DRBD documentation on how to set up a split brain notification.
It's also possible to automatically recover from a split brain scenario. For more
information, read Automatic split brain recovery policies.
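For orientation, manual split brain recovery in DRBD follows a fixed sequence described in the DRBD documentation. With the NW1-nfs resource from this example, it looks like this sketch; run the first block on the node whose changes you're willing to discard:

```text
# On the split brain victim (its changes are discarded)
drbdadm disconnect NW1-nfs
drbdadm secondary NW1-nfs
drbdadm connect --discard-my-data NW1-nfs

# On the surviving node, if it doesn't reconnect automatically
drbdadm connect NW1-nfs
```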
) Important
2. [1] Add the NFS drbd devices for SAP system NW2 to the cluster configuration
Bash
# Enable maintenance mode
sudo crm configure property maintenance-mode=true
Bash
This article describes how to deploy multiple SAP NetWeaver or S/4HANA highly
available systems (that is, multi-SID) in a two-node cluster on Azure VMs with SUSE
Linux Enterprise Server for SAP applications.
In the example configurations and installation commands, three SAP NetWeaver 7.50
systems are deployed in a single, two-node high availability cluster. The SAP system
SIDs are:
NW1: ASCS instance number 00 and virtual host name msnw1ascs; ERS instance
number 02 and virtual host name msnw1ers.
NW2: ASCS instance number 10 and virtual hostname msnw2ascs; ERS instance
number 12 and virtual host name msnw2ers.
NW3: ASCS instance number 20 and virtual hostname msnw3ascs; ERS instance
number 22 and virtual host name msnw3ers.
This article doesn't cover the database layer or the deployment of the SAP NFS shares.
In the examples in this article, we use the virtual names nw2-nfs for the NW2 NFS
shares and nw3-nfs for the NW3 NFS shares, assuming that an NFS cluster was deployed.
Before you begin, refer to the following SAP Notes and papers first:
Overview
The virtual machines that participate in the cluster must be sized to run all
resources in case a failover occurs. Each SAP SID can fail over independently of the
others in the multi-SID high-availability cluster. If you use SBD fencing, the SBD devices
can be shared between multiple clusters.
To achieve high availability, SAP NetWeaver requires highly available NFS shares. In this
example, we assume the SAP NFS shares are either hosted on a highly available NFS file
server, which can be used by multiple SAP systems, or deployed on Azure NetApp Files
NFS volumes.
Important
Support for multi-SID clustering of SAP ASCS/ERS with SUSE Linux as the guest
operating system in Azure VMs is limited to five SAP SIDs in the same cluster. Each
new SID increases the complexity. A mix of SAP Enqueue Replication Server 1 and
Enqueue Replication Server 2 on the same cluster isn't supported. Multi-SID
clustering describes the installation of multiple SAP ASCS/ERS instances with
different SIDs in one Pacemaker cluster. Currently, multi-SID clustering is only
supported for ASCS/ERS.
Tip
The presented configuration for this multi-SID cluster example with three SAP systems
shows a load balancer with:
Frontend IP addresses for ASCS: 10.3.1.14 (NW1), 10.3.1.16 (NW2) and 10.3.1.13
(NW3)
Frontend IP addresses for ERS: 10.3.1.15 (NW1), 10.3.1.17 (NW2) and 10.3.1.19
(NW3)
Probe port 62000 for NW1 ASCS, 62010 for NW2 ASCS and 62020 for NW3 ASCS
Probe port 62102 for NW1 ERS, 62112 for NW2 ERS and 62122 for NW3 ERS
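The probe ports above follow the convention 620<instance number> for ASCS and 621<instance number> for ERS. A small sketch of that mapping (the helper function names are hypothetical):

```shell
# Derive load-balancer health-probe ports from SAP instance numbers:
# 620<nr> for an ASCS instance, 621<nr> for an ERS instance
ascs_probe_port() { printf '620%02d\n' "$1"; }
ers_probe_port()  { printf '621%02d\n' "$1"; }

ascs_probe_port 10   # prints: 62010 (NW2 ASCS, instance number 10)
ers_probe_port 12    # prints: 62112 (NW2 ERS, instance number 12)
```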
Note
When VMs without public IP addresses are placed in the backend pool of an internal
(no public IP address) Standard Azure load balancer, there's no outbound internet
connectivity unless additional configuration is performed to allow routing to public
endpoints. For details on how to achieve outbound connectivity, see Public endpoint
connectivity for Virtual Machines using Azure Standard Load Balancer in SAP
high-availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load
Balancer. Enabling TCP timestamps causes the health probes to fail. Set the
net.ipv4.tcp_timestamps parameter to 0 . For details, see Load Balancer
health probes.
To prevent saptune from changing the manually set net.ipv4.tcp_timestamps
value from 0 back to 1 , update saptune to version 3.1.1 or higher.
For more information, see saptune 3.1.1 – Do I Need to Update? .
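Disabling TCP timestamps could look like the following sketch. The drop-in file name 99-azure-lb.conf is a placeholder of our choosing:

```shell
# Disable TCP timestamps immediately and persist the setting across reboots
sudo sysctl -w net.ipv4.tcp_timestamps=0
echo 'net.ipv4.tcp_timestamps = 0' | sudo tee /etc/sysctl.d/99-azure-lb.conf
```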
SAP NFS shares
SAP NetWeaver requires shared storage for the transport and profile directories, and so on.
For a highly available SAP system, it's important to have highly available NFS shares. You
need to decide on the architecture for your SAP NFS shares. One option is to
build a highly available NFS cluster on Azure VMs on SUSE Linux Enterprise Server, which
can be shared between multiple SAP systems.
Another option is to deploy the shares on Azure NetApp Files NFS volumes. With Azure
NetApp Files, you get built-in high availability for the SAP NFS shares.
If using highly available NFS server, follow High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files
for SAP applications
The documents listed above guide you through the steps to prepare the
necessary infrastructure, build the cluster, and prepare the OS for running the SAP
application.
Tip
Always test the failover functionality of the cluster after the first system is
deployed, before adding additional SAP SIDs to the cluster. That way you
know that the cluster functionality works before you add the complexity of
additional SAP systems to the cluster.
The following items are prefixed with either [A] - applicable to all nodes, [1] - only
applicable to node 1, or [2] - only applicable to node 2.
Prerequisites
Important
Before you follow the instructions to deploy additional SAP systems in the cluster,
follow the instructions to deploy the first SAP system in the cluster, as some
steps are only necessary during the first system deployment.
2. [A] Set up name resolution for the additional SAP systems. You can either use a DNS
server or modify /etc/hosts on all nodes. This example shows how to use the
/etc/hosts file. Adapt the IP addresses and the host names to your environment.
Bash
sudo vi /etc/hosts
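For example, entries for the NW2 and NW3 virtual host names from this article could look like the following. The sketch writes them to a scratch file so the format is easy to verify; in a real deployment you would add the lines to /etc/hosts itself, with the IP addresses of your environment:

```shell
# Example name-resolution entries for the NW2/NW3 virtual host names,
# written to a temporary file for illustration
hostsfile=$(mktemp)
cat >> "$hostsfile" <<'EOF'
10.3.1.16   msnw2ascs
10.3.1.17   msnw2ers
10.3.1.13   msnw3ascs
10.3.1.19   msnw3ers
EOF
grep -c 'msnw' "$hostsfile"   # prints: 4
```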
4. [A] Configure autofs to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems
for the additional SAP systems that you're deploying to the cluster, in this example
NW2 and NW3.
Update the file /etc/auto.direct with the file systems for the additional SAP systems
that you're deploying to the cluster.
If using NFS file server, follow the instructions on the Azure VMs high
availability for SAP NetWeaver on SLES page
If using Azure NetApp Files, follow the instructions on the Azure VMs high
availability for SAP NW on SLES with Azure NetApp Files page
You need to restart the autofs service to mount the newly added shares.
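The additional direct-map entries and the restart might look like this. The NFS export paths (/NW2/sapmntsid, /NW2/sidsys, and so on) are assumptions for illustration; use the paths your NFS cluster actually exports:

```shell
# Hypothetical /etc/auto.direct entries for NW2 and NW3:
#   /sapmnt/NW2 -nfsvers=4,nosymlink,sync nw2-nfs:/NW2/sapmntsid
#   /usr/sap/NW2/SYS -nfsvers=4,nosymlink,sync nw2-nfs:/NW2/sidsys
#   /sapmnt/NW3 -nfsvers=4,nosymlink,sync nw3-nfs:/NW3/sapmntsid
#   /usr/sap/NW3/SYS -nfsvers=4,nosymlink,sync nw3-nfs:/NW3/sidsys

# Restart autofs so the newly added shares are mounted
sudo systemctl restart autofs
```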
Important
Recent testing revealed situations where netcat stops responding to requests
due to its backlog and its limitation of handling only one connection. The netcat
resource stops listening to the Azure Load Balancer requests, and the floating
IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past replacing netcat
with socat. Currently, we recommend using the azure-lb resource agent, which is
part of the package resource-agents, with the following package version
requirements:
As you create the resources, they may be assigned to different cluster nodes.
When you group them, they migrate to one of the cluster nodes. Make sure the
cluster status is ok and that all resources are started. It isn't important which
node the resources are running on.
Install SAP NetWeaver ASCS as root, using a virtual hostname that maps to the IP
address of the load balancer frontend configuration for the ASCS, and the instance
number that you used for the probe of the load balancer. For example, for
system NW2, the virtual hostname is msnw2ascs with IP address 10.3.1.16 and
instance number 10. For system NW3, the virtual hostname is msnw3ascs with IP
address 10.3.1.13 and instance number 20.
3. [1] Create a virtual IP and health-probe cluster resources for the ERS instance of
the additional SAP system you're deploying to the cluster. The example shown
here is for NW2 and NW3 ERS, using highly available NFS server.
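The virtual IP and health-probe resources for NW2 ERS can be sketched as follows. The resource names match the g-NW2_ERS group shown in the cluster status later in this article, and the IP and probe port follow the values listed in the overview; adapt the commands for NW3 accordingly:

```shell
# Virtual IP for the NW2 ERS instance (load balancer frontend 10.3.1.17)
sudo crm configure primitive vip_NW2_ERS IPaddr2 \
  params ip=10.3.1.17 \
  op monitor interval=10 timeout=20

# Health-probe listener for the Azure load balancer (probe port 62112)
sudo crm configure primitive nc_NW2_ERS azure-lb port=62112

# Group the resources so they always fail over together
sudo crm configure group g-NW2_ERS nc_NW2_ERS vip_NW2_ERS
```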
As you create the resources, they may be assigned to different cluster nodes.
When you group them, they migrate to one of the cluster nodes. Make sure the
cluster status is ok and that all resources are started.
Next, make sure that the resources of the newly created ERS group are running on
the cluster node opposite to the cluster node where the ASCS instance for the
same SAP system was installed. For example, if NW2 ASCS was installed on
slesmsscl1 , then make sure the NW2 ERS group is running on slesmsscl2 . You
can migrate the NW2 ERS group to slesmsscl2 by running the following
command:
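A sketch of the migration command, using the g-NW2_ERS group name that appears in the cluster status later in this article:

```shell
# Move the NW2 ERS resource group to the node slesmsscl2
sudo crm resource migrate g-NW2_ERS slesmsscl2
```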
Install SAP NetWeaver ERS as root on the other node, using a virtual hostname
that maps to the IP address of the load balancer frontend configuration for the
ERS, and the instance number that you used for the probe of the load balancer.
For example, for system NW2, the virtual hostname is msnw2ers with IP address
10.3.1.17 and instance number 12. For system NW3, the virtual hostname is
msnw3ers with IP address 10.3.1.19 and instance number 22.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a
non-root user to connect to sapinst. You can use parameter
SAPINST_USE_HOSTNAME to install SAP, using virtual host name.
Note
If it was necessary to migrate the ERS group of the newly deployed SAP
system to a different cluster node, don't forget to remove the location constraint
for the ERS group. You can remove the constraint by running the following
command (the example is given for SAP systems NW2 and NW3).
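A sketch of removing the location constraints left behind by the earlier migrations; the group names match the cluster status shown later in this article:

```shell
# Remove the location constraints created by the migrations
sudo crm resource clear g-NW2_ERS
sudo crm resource clear g-NW3_ERS
```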
5. [1] Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP
system(s). The example shown below is for NW2. You'll need to adapt the
ASCS/SCS and ERS profiles for all SAP instances added to the cluster.
ASCS/SCS profile
Bash
sudo vi /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
For both ENSA1 and ENSA2, make sure that the keepalive OS parameters are set
as described in SAP note 1410736 .
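On Azure, the keepalive idle time is typically reduced so that connections through the load balancer are refreshed before the platform drops them. A sketch; confirm the exact values against SAP note 1410736:

```shell
# Reduce the TCP keepalive idle time (value commonly recommended for
# Azure; verify against SAP note 1410736 before applying)
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
```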
ERS profile
Bash
sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
6. [A] Configure the SAP users for the newly deployed SAP system, in this example
NW2 and NW3.
7. Add the ASCS and ERS SAP services for the newly installed SAP system to the
sapservices file. The example shown below is for SAP systems NW2 and NW3.
Add the ASCS service entry to the second node and copy the ERS service entry to
the first node. Execute the commands for each SAP system on the node where the
ASCS instance for the SAP system was installed.
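Copying the entries can be sketched like this, assuming NW2's ASCS was installed on slesmsscl1 and its ERS on slesmsscl2. The host names and instance numbers follow this article's example; adapt them to your landscape:

```shell
# On slesmsscl1: copy the NW2 ASCS entry to the second node
grep ASCS10 /usr/sap/sapservices | ssh root@slesmsscl2 "cat >> /usr/sap/sapservices"

# On slesmsscl1: copy the NW2 ERS entry from the second node to this node
ssh root@slesmsscl2 "grep ERS12 /usr/sap/sapservices" | sudo tee -a /usr/sap/sapservices
```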
8. [1] Create the SAP cluster resources for the newly installed SAP system.
ENSA1
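For ENSA1, the SAPInstance resources and their constraints can be sketched as follows for NW2. The resource names match the cluster status shown later in this article; the timeouts and meta attributes are illustrative, so check the SUSE best-practice guide for the exact values:

```shell
# ASCS instance resource for NW2
sudo crm configure primitive rsc_sap_NW2_ASCS10 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ASCS10_msnw2ascs \
    START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
    AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 \
    migration-threshold=1 priority=10

# ERS instance resource for NW2 (IS_ERS=true marks the ENSA1 replica)
sudo crm configure primitive rsc_sap_NW2_ERS12 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=NW2_ERS12_msnw2ers \
    START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
    AUTOMATIC_RECOVER=false IS_ERS=true \
  meta priority=1000

# Add the instances to their groups, and keep ASCS and ERS apart
sudo crm configure modgroup g-NW2_ASCS add rsc_sap_NW2_ASCS10
sudo crm configure modgroup g-NW2_ERS add rsc_sap_NW2_ERS12
sudo crm configure colocation col_sap_NW2_no_both -5000: g-NW2_ERS g-NW2_ASCS
sudo crm configure order ord_sap_NW2_first_start_ascs Optional: \
  rsc_sap_NW2_ASCS10:start rsc_sap_NW2_ERS12:start
```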
If you're upgrading from an older version and switching to enqueue server 2, see SAP
note 2641019 .
Make sure that the cluster status is ok and that all resources are started. It isn't
important which node the resources are running on.
The following example shows the cluster resources status, after SAP systems NW2 and
NW3 were added to the cluster.
Bash
sudo crm_mon -r
The following picture shows what the resources look like in the HA Web
Konsole (Hawk), with the resources for SAP system NW2 expanded.
If using highly available NFS server, follow High availability for SAP NetWeaver on
Azure VMs on SUSE Linux Enterprise Server for SAP applications.
If using Azure NetApp Files NFS volumes, follow High availability for SAP
NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files
for SAP applications
Always read the SUSE best practices guides and perform all additional tests that might
have been added.
The tests presented here were performed in a two-node, multi-SID cluster with three SAP
systems installed.
Run the following commands as <sapsid>adm on the node where the ASCS
instance is currently running. If the commands fail with FAIL: Insufficient memory, the
failure might be caused by dashes in your hostname. This is a known issue and will be
fixed by SUSE in the sap-suse-cluster-connector package.
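For example, for NW1 with ASCS instance number 00 (a sketch; run the equivalent for each SID with its own instance number):

```shell
# Query the HA failover configuration reported by sap_suse_cluster_connector
sapcontrol -nr 00 -function HAGetFailoverConfig

# Validate the HA failover configuration
sapcontrol -nr 00 -function HACheckFailoverConfig
```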
Bash
# 10.12.2019 21:33:08
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2
# 10.12.2019 21:37:09
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl2
# HANodes: slesmsscl2, slesmsscl1
# 19.12.2019 21:17:39
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
# 10.12.2019 23:35:36
# HAGetFailoverConfig
# OK
# HAActive: TRUE
# HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4
# HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP4 (sap_suse_cluster_connector 3.1.0)
# HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
# HAActiveNode: slesmsscl1
# HANodes: slesmsscl1, slesmsscl2
# 19.12.2019 21:10:42
# HACheckFailoverConfig
# OK
# state, category, description, comment
# SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
2. Manually migrate the ASCS instance. The example shows migrating the ASCS
instance for SAP system NW2.
Run the following commands as root to migrate the NW2 ASCS instance.
Bash
# Remove failed actions for the ERS that occurred as part of the
migration
crm resource cleanup rsc_sap_NW2_ERS12
3. Test HAFailoverToNode. The test presented here shows migrating the ASCS
instance for SAP system NW2.
Run the following commands as nw2adm to migrate the NW2 ASCS instance.
Bash
# run as root
# Remove failed actions for the ERS that occurred as part of the
migration
crm resource cleanup rsc_sap_NW2_ERS12
# Remove migration constraints
crm resource clear rsc_sap_NW2_ASCS10
#INFO: Removed migration constraints for rsc_sap_NW2_ASCS10
text
Full list of resources:

stonith-sbd (stonith:external/sbd): Started slesmsscl1
Resource Group: g-NW1_ASCS
    fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
    nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
    vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
    rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW1_ERS
    fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
    nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
    vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
    rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW2_ASCS
    fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
    nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
    vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
    rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW2_ERS
    fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
    nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
    vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
    rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Resource Group: g-NW3_ASCS
    fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
    nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
    vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
    rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl2
Resource Group: g-NW3_ERS
    fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
    nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
    vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
    rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl1
Run the following command as root on the node where at least one ASCS instance
is running. In this example, we executed the command on slesmsscl2 , where the
ASCS instances for NW1 and NW3 are running.
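One way to simulate a node failure is the magic SysRq trigger, as sketched below. This immediately reboots the node without a clean shutdown, so run it only in a test window:

```shell
# Crash-test the node: trigger an immediate reboot via SysRq (run as root)
echo b > /proc/sysrq-trigger
```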
If you use SBD, Pacemaker shouldn't automatically start on the killed node. The
status after the node is started again should look like this.
text
Online: [ slesmsscl1 ]
OFFLINE: [ slesmsscl2 ]
Full list of resources:
Use the following commands to start Pacemaker on the killed node, clean the SBD
messages, and clean the failed resources.
Bash
# run as root
# list the SBD device(s)
cat /etc/sysconfig/sbd | grep SBD_DEVICE=
# output is like:
# SBD_DEVICE="/dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116;/dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1;/dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3"
sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e116 -d /dev/disk/by-id/scsi-36001405034a84428af24ddd8c3a3e9e1 -d /dev/disk/by-id/scsi-36001405cdd5ac8d40e548449318510c3 message slesmsscl2 clear
A failover cluster is a group of 1+n independent servers (nodes) that work together to
increase the availability of applications and services. If a node fails, Windows Server
Failover Clustering (WSFC) calculates the number of failures that can occur while still
maintaining a healthy cluster that provides applications and services. You can choose
from various quorum modes to achieve failover clustering.
Prerequisites
Before you begin the tasks in this article, review the article High-availability architecture
and scenarios for SAP NetWeaver.
The Azure Load Balancer service provides an internal load balancer for Azure. With the
internal load balancer, clients reach the cluster over the cluster's virtual IP address.
Deploy the internal load balancer in the resource group that contains the cluster nodes.
Then, configure all necessary port-forwarding rules by using the probe ports of the
internal load balancer. Clients can connect via the virtual host name. The DNS server
resolves the cluster IP address, and the internal load balancer handles port forwarding
to the active node of the cluster.
) Important
The sapmnt file share, which enables access to these global S:\usr\sap\
<SID>\SYS... files by using the following UNC path:
In a high-availability setting, you cluster SAP ASCS/SCS instances. You use cluster shared
disks (drive S in this article's example) to place the SAP ASCS/SCS and SAP global host
files.
With an Enqueue Replication Server 1 (ERS1) architecture:
The same ASCS/SCS virtual host name is used to access the SAP message server
and enqueue server processes, in addition to the SAP global host files via the
sapmnt file share.
The same cluster shared disk (drive S) is shared between them.
With an Enqueue Replication Server 2 (ERS2) architecture:
The same ASCS/SCS virtual host name is used to access the SAP message server
process, in addition to the SAP global host files via the sapmnt file share.
The same cluster shared disk (drive S) is shared between them.
There's a separate ERS virtual host name to access the enqueue server process.
In an ERS1 architecture, the ERS1 instance:
Is not clustered.
Uses a localhost name.
Is deployed on local disks on each of the cluster nodes.
Shared disks are also supported with an ERS2 architecture, where the ERS2 instance:
Is clustered.
Uses a dedicated virtual or network host name.
Needs the IP address of ERS virtual host name to be configured on an Azure
internal load balancer, in addition to the (A)SCS IP address.
Is deployed on local disks on each of the clustered nodes, so there's no need for a
shared disk.
For more information about ERS1 and ERS2, see Enqueue Replication Server in a
Microsoft Failover Cluster and New Enqueue Replicator in Failover Cluster
environments on the SAP website.
Use Azure shared disks to attach Azure managed disks to multiple VMs
simultaneously.
Use SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates
cluster shared storage.
When you're selecting the technology for shared disks, keep in mind the following
considerations:
Use of Azure shared disks with Azure Premium SSD disks is supported for SAP
deployment in availability sets and availability zones.
Azure Ultra Disk Storage disks and Azure Standard SSD disks are not supported as
Azure shared disks for SAP workloads.
Be sure to provision Azure Premium SSD disks with a minimum disk size, as
specified in Premium SSD ranges, to be able to attach to the required number of
VMs simultaneously. You typically need two VMs for SAP ASCS Windows failover
clusters.
The SIOS solution provides real-time synchronous data replication between two
disks.
With the SIOS solution, you operate with two managed disks. If you're using either
availability sets or availability zones, the managed disks are on different storage
clusters.
Deployment in availability zones is supported.
The SIOS solution requires installing and operating third-party software, which you
need to purchase separately.
Consider these important points about Azure Premium SSD shared disks:
Azure Ultra Disk Storage disks and Standard SSD disks are not supported as Azure
shared disks for SAP workloads.
Azure shared disks with Premium SSD disks are supported for SAP deployment in
availability sets and availability zones.
Azure shared disks with Premium SSD disks come with two storage options:
Locally redundant storage (LRS) for Premium SSD shared disks ( skuName value of
Premium_LRS ) is supported with deployment in availability sets.
Zone-redundant storage (ZRS) for Premium SSD shared disks ( skuName value of
Premium_ZRS ) is supported with deployment in availability zones.
The Azure shared disk value maxShares determines how many cluster nodes can
use the shared disk. For an SAP ASCS/SCS instance, you typically configure two
nodes in WSFC. You then set the value for maxShares to 2 .
An Azure proximity placement group (PPG) is not required for Azure shared disks.
But for SAP deployment with PPGs, follow these guidelines:
If you're using PPGs for an SAP system deployed in a region, all virtual machines
that share a disk must be part of the same PPG.
If you're using PPGs for an SAP system deployed across zones, as described in
Proximity placement groups with zonal deployments, you can attach
Premium_ZRS storage to virtual machines that share a disk.
For more information, review the Limitations section of the documentation for Azure
shared disks.
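As a sketch, a Premium SSD shared disk for a two-node WSFC could be created with the Azure CLI like this. The resource names follow the PowerShell example later in this article; for zonal deployments, use the Premium_ZRS SKU instead:

```shell
# Create a 512-GiB Premium SSD managed disk that up to two VMs can attach
az disk create \
  --resource-group MyResourceGroup \
  --name PR1ASCSSharedDisk \
  --size-gb 512 \
  --sku Premium_LRS \
  --max-shares 2
```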
For other important considerations about planning your SAP deployment, review Plan
and implement an SAP deployment on Azure and Azure Storage types for SAP
workloads.
Supported OS versions
Windows Server 2016, 2019, and later are supported. Use the latest datacenter images.
We strongly recommend using at least Windows Server 2019 Datacenter, for these
reasons:
Note
You don't need shared disks for high availability with some DBMS products, like
SQL Server. SQL Server Always On replicates DBMS data and log files from the local
disk of one cluster node to the local disk of another cluster node. In this case, the
Windows cluster configuration doesn't need a shared disk.
Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Clustering to reduce the total number of VMs.
This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.
Important
Installing a local SAP application server on a SQL Server Always On node is not
supported.
Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). WSFC helps protect these SPOFs in a Windows environment.
Although the resource consumption of the SAP ASCS/SCS instance is fairly small, we
recommend reducing the memory configuration for either SQL Server or the SAP
application server by 2 GB.
This diagram illustrates SAP application servers on WSFC nodes with the use of SIOS
DataKeeper:
Because the SAP application servers are installed locally, there's no need to set up any
synchronization.
This diagram illustrates SAP ASCS/SCS on SQL Server Always On nodes with the use of
SIOS DataKeeper:
Optional configuration for SAP application servers on WSFC nodes using Server
Message Block in Azure NetApp Files
Optional configuration for SAP ASCS/SCS on SQL Server Always On nodes using
Windows Scale-Out File Server
Optional configuration for SAP ASCS/SCS on SQL Server Always On nodes using
Server Message Block in Azure NetApp Files
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
Prepare the Azure infrastructure for SAP HA by
using a Windows failover cluster and shared disk
for SAP ASCS/SCS
Article • 01/21/2024
This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a
high-availability SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk as an
option for clustering an SAP ASCS instance. Two alternatives for cluster shared disk are presented in the
documentation:
Prerequisites
Before you begin the installation, review this article:
Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a
cluster shared disk
Create Azure Internal Load Balancer for SAP ASCS /SCS instance.
Add Windows VMs to the AD domain.
Based on your deployment type, the host names and the IP addresses of the scenario would be as follows:
The steps mentioned in this document remain the same for both deployment types. But if your cluster is
running in an availability set, you need to deploy LRS for an Azure premium shared disk (Premium_LRS), and if
the cluster is running in an availability zone, deploy ZRS for an Azure premium shared disk (Premium_ZRS).
Note
An Azure proximity placement group (PPG) is not required for an Azure shared disk. But for SAP deployment
with PPGs, follow these guidelines:
If you're using PPGs for an SAP system deployed in a region, all virtual machines sharing a
disk must be part of the same PPG.
If you're using PPGs for an SAP system deployed across zones, as described in the document
Proximity placement groups with zonal deployments, you can attach Premium_ZRS storage to
virtual machines sharing a disk.
1. Frontend IP configuration: Create a frontend IP (example: 10.0.0.43). Select the same virtual network
and subnet as your ASCS/ERS virtual machines.
2. Backend pool: Create a backend pool and add the ASCS and ERS VMs. In this example, the VMs are pr1-ascs-
10 and pr1-ascs-11.
3. Inbound rules: Create a load-balancing rule.
4. Applicable only to the ENSA2 architecture: Create an additional frontend IP (10.0.0.44) and a load-balancing
rule (use 621<Instance-no.> for the ERS2 health probe port) as described in points 1 and 3.
Important
A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in
load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need another IP
address for the VM, deploy a second NIC.
Note
When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP
address) Standard Azure load balancer, there will be no outbound internet connectivity unless you
perform additional configuration to allow routing to public endpoints. For details on how to achieve
outbound connectivity, see Public endpoint connectivity for virtual machines using Azure Standard
Load Balancer in SAP high-availability scenarios.
Tip
With the Azure Resource Manager template for WSFC for an SAP ASCS/SCS instance with an Azure
shared disk, you can automate the infrastructure preparation, using an Azure shared disk for one SAP
SID with ERS1.
The ARM template creates two Windows Server 2019 or 2016 VMs, creates the Azure shared disk, and
attaches it to the VMs. An Azure internal load balancer is created and configured as well. For details,
see the ARM template.
KeepAliveTime
KeepAliveInterval
Once the feature installation has completed, reboot both cluster nodes.
For more information, see Windows Server 2019 Failover Clustering new features. Run this command on
one of the cluster nodes:
PowerShell
# IP address for cluster network name is needed ONLY on Windows Server 2016 cluster
$ClusterStaticIPAddress = "10.0.0.42"
# Test cluster
Test-Cluster -Node $ClusterNodes -Verbose
$ComputerInfo = Get-ComputerInfo
$WindowsVersion = $ComputerInfo.WindowsProductName
PowerShell
$AzureStorageAccountName = "cloudquorumwitness"
Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -AccessKey <YourAzureStorageAccessKey> -Verbose
SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30
These settings were tested with customers and offer a good compromise. They're resilient enough, but
they also provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
PowerShell
#############################
# Create Azure Shared Disk
#############################
$ResourceGroupName = "MyResourceGroup"
$location = "MyAzureRegion"
$SAPSID = "PR1"
$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"
# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk
$NumberOfWindowsClusterNodes = 2
# Sketch: create the disk configuration and the shared disk itself
# (Premium_LRS for availability sets; use Premium_ZRS for availability zones)
$diskConfig = New-AzDiskConfig -Location $location -SkuName "Premium_LRS" -CreateOption "Empty" -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig
PowerShell
# Format SAP ASCS Disk number '2', with drive letter 'S'
$SAPSID = "PR1"
$DiskNumber = 2
$DriveLetter = "S"
$DiskLabel = "$SAPSID" + "SAP"
Now, you have a working Windows Server failover clustering configuration in Azure. To install an SAP
ASCS/SCS instance, you need a shared disk resource. One of the options is SIOS DataKeeper Cluster
Edition, a third-party solution that you can use to create shared disk resources.
Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster shared disk involves these tasks:
Add Microsoft .NET Framework, if needed. See the SIOS documentation for the most up-to-date
.NET Framework requirements.
Install SIOS DataKeeper.
Configure SIOS DataKeeper.
Before you install the SIOS software, create the DataKeeperSvc domain user.
Note
Add the DataKeeperSvc domain user to the Local Administrator group on both cluster nodes.
3. In the dialog box, we recommend that you select Domain or Server account.
User selection for SIOS DataKeeper
4. Enter the domain account user name and password that you created for SIOS DataKeeper.
Enter the domain user name and password for the SIOS DataKeeper installation
5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35.
Enter your SIOS DataKeeper license key
1. Start the DataKeeper Management and Configuration tool, and then select Connect Server.
Insert the name or TCP/IP address of the first node that the Management and Configuration tool should
connect to, and, in a second step, the second node.
5. Define the name, TCP/IP address, and disk volume of the target node.
Define the name, TCP/IP address, and disk volume of the current target node
6. Define the compression algorithms. In our example, we recommend that you compress the
replication stream. Especially in resynchronization situations, the compression of the replication
stream dramatically reduces resynchronization time. Compression uses the CPU and RAM resources
of a virtual machine. As the compression rate increases, so does the volume of CPU resources that
are used. You can adjust this setting later.
7. Another setting you need to check is whether the replication occurs asynchronously or
synchronously. When you protect SAP ASCS/SCS configurations, you must use synchronous
replication.
Define replication details
8. Define whether the volume that is replicated by the replication job should be represented to a
Windows Server failover cluster configuration as a shared disk. For the SAP ASCS/SCS configuration,
select Yes so that the Windows cluster sees the replicated volume as a shared disk that it can use as
a cluster volume.
After the volume is created, the DataKeeper Management and Configuration tool shows that the
replication job is active.
DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45:
Failover Cluster Manager shows the disk that DataKeeper replicated
Next steps
Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS
instance
Install SAP NetWeaver HA on a
Windows failover cluster and shared
disk for an SAP ASCS/SCS instance in
Azure
Article • 02/10/2023
This article describes how to install and configure a high-availability SAP system in Azure
by using a Windows Server failover cluster and a cluster shared disk for clustering an SAP
ASCS/SCS instance. As described in Architecture guide: Cluster an SAP ASCS/SCS
instance on a Windows failover cluster by using a cluster shared disk, there are two
alternatives for the cluster shared disk: an Azure shared disk, or SIOS DataKeeper to simulate a shared disk.
Prerequisites
Before you begin the installation, review these documents:
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance
We don't describe the DBMS setup in this article because setups vary depending on the
DBMS you use. We assume that high-availability concerns for the DBMS are addressed by
the capabilities that different DBMS vendors support on Azure, such as Always On or
database mirroring for SQL Server and Oracle Data Guard for Oracle databases.
High-availability scenarios for the DBMS aren't covered in this article.
There are no special considerations when different DBMS services interact with a
clustered SAP ASCS or SCS configuration in Azure.
7 Note
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and
ABAP+Java systems are almost identical. The most significant difference is that an
SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance
running in the same Microsoft failover cluster group. Any installation differences for
each SAP NetWeaver installation stack are explicitly mentioned. You can assume
that the rest of the steps are the same.
) Important
If you use SIOS to present shared disk, don't place your page file on the SIOS
DataKeeper mirrored volumes. You can leave your page file on the temporary drive
D of an Azure virtual machine, which is the default. If it's not already there, move
the Windows page file to drive D of your Azure virtual machine.
Create a virtual host name for the clustered SAP ASCS/SCS instance.
Install SAP on the first cluster node.
Modify the SAP profile of the ASCS/SCS instance.
Add a probe port.
Open the Windows firewall probe port.
) Important
The IP address that you assign to the virtual host name of the ASCS/SCS
instance must be the same as the IP address that you assigned to Azure Load
Balancer.
Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address
2. If you're using the new SAP Enqueue Replication Server 2 (ERS2), which is also a clustered
instance, you need to reserve a virtual host name for ERS2 in DNS as well.
) Important
The IP address that you assign to the virtual host name of the ERS2 instance
must be the second IP address that you assigned to Azure Load Balancer.
Define the DNS entry for the SAP ERS2 cluster virtual name and TCP/IP address
3. To define the IP address that's assigned to the virtual host name, select DNS
Manager > Domain.
New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
) Important
Keep in mind that the configuration of the Azure internal load balancer's
load-balancing rules (if you're using the Basic SKU) and the selected SAP instance
numbers must match.
2. Follow the installation procedure described by SAP. Make sure that in the start installation
option you select First Cluster Node, and choose Cluster Shared Disk as the configuration
option.
Tip
The SAP installation documentation describes how to install the first ASCS/SCS
cluster node.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if you're using ERS1:
enque/encni/set_so_keepalive = true
For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as
described in SAP note 1410736 .
2. To apply the SAP profile parameter changes, restart the SAP ASCS/SCS instance.
However, this won't work in some cluster configurations, because only one instance is
active. The other instance is passive and can't accept any workload. A probe helps the
Azure internal load balancer detect which instance is active and target only the
active instance.
) Important
In this example configuration, the ProbePort is set to 620&lt;nr&gt;, where &lt;nr&gt; is the SAP
instance number. For an SAP ASCS instance with instance number 00, it's 62000. You need
to adjust the configuration to match your SAP instance numbers and your SAP SID.
To add a probe port, run this PowerShell function on one of the cluster VMs:
PowerShell
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
SID -ProbePort 62000
If you're using ERS2, which is clustered, also configure its probe port. There's no need
to configure a probe port for ERS1, because it isn't clustered.
PowerShell
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
SID -ProbePort 62001 -IsSAPERSClusteredInstance $True
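After running the function, you can verify that the probe port was applied by querying the clustered IP resource. This is a sketch; "SAP SID IP" is the default resource name that SWPM creates, so substitute your own SID:

```powershell
# Show the ProbePort parameter of the clustered SAP IP resource
Get-ClusterResource "SAP SID IP" | Get-ClusterParameter |
    Where-Object { $_.Name -eq "ProbePort" } |
    Select-Object ClusterObject, Name, Value
```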
PowerShell
function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer Health Probe Port on 'SAP $SAPSID IP' cluster resource.
.DESCRIPTION
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer Health Probe Port on 'SAP $SAPSID IP' cluster resource.
It will also restart SAP Cluster group (default behavior), to activate the
changes.
You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.
.PARAMETER SAPSID
SAP SID - 3 characters, starting with a letter.
.PARAMETER ProbePort
Azure Load Balancer Health Check Probe Port.
.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is '$True', so SAP cluster group will be
restarted to activate the changes.
.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is '$False'.
If set to $True, then handle the clustered new SAP ERS2 instance.
.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP', and
restart the SAP cluster group 'SAP AB1', to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000
.EXAMPLE
# Set probe port to 62000, on SAP cluster resource 'SAP AB1 IP'. SAP
cluster group 'SAP AB1' IS NOT restarted, therefore changes are NOT active.
# To activate the changes you need to manually restart the 'SAP AB1' cluster
group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000 -RestartSAPClusterGroup $False
.EXAMPLE
# Set probe port to 62001, on SAP cluster resource 'SAP AB1 ERS IP'. SAP
cluster group 'SAP AB1 ERS' IS restarted, to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62001 -IsSAPERSClusteredInstance $True
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,
[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,
[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)
BEGIN{}
PROCESS{
try{
if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS Instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS Instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}
$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value
        # Completion sketch -- the original listing is truncated at this point.
        # Apply the new probe port on the cluster IP resource; it becomes active
        # after the cluster group is restarted.
        Get-ClusterResource $SAPIPresourceName | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}
        if($RestartSAPClusterGroup){
            Write-Output ""
            Write-Output "Activating changes..."
            Stop-ClusterGroup -Name $SAPClusterRoleName
            Start-ClusterGroup -Name $SAPClusterRoleName
        }
    }catch{
        Write-Error $_.Exception.Message
    }
    }
    END{}
}
1. Verify that the SAP system can successfully fail over from node A to node B. Choose
one of these options to initiate a failover of the SAP <SID> cluster group from
cluster node A to cluster node B:
PowerShell
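As one of those options, the failover can be triggered with the failover cluster PowerShell cmdlets from one of the cluster nodes. This is a sketch; the SID PR1 and the node name are placeholders:

```powershell
# Move the SAP <SID> cluster group to the other cluster node
Move-ClusterGroup -Name "SAP PR1" -Node "pr1-ascs-1"
```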
2. Restart cluster node A within the Windows guest operating system. This initiates an
automatic failover of the SAP <SID> cluster group from node A to node B.
3. Restart cluster node A from the Azure portal. This initiates an automatic failover of
the SAP <SID> cluster group from node A to node B.
4. Restart cluster node A by using Azure PowerShell. This initiates an automatic
failover of the SAP <SID> cluster group from node A to node B.
5. Verification
After failover, verify that the SAP <SID> cluster group is running on
cluster node B.
In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster
node B
After failover, if using SIOS, verify that SIOS DataKeeper is replicating data
from source volume drive S on cluster node B to target volume drive S on
cluster node A.
SIOS DataKeeper replicates the local volume from cluster node B to cluster
node A
SAP ASCS/SCS instance multi-SID high
availability with Windows Server
Failover Clustering and Azure shared
disk
Article • 01/21/2024
Windows
This article focuses on how to move from a single SAP ASCS/SCS installation to
configuration of multiple SAP system IDs (SIDs) by installing additional SAP ASCS/SCS
clustered instances into an existing Windows Server Failover Clustering (WSFC) cluster
with an Azure shared disk. When you complete this process, you've configured an SAP
multi-SID cluster.
Azure Ultra Disk Storage disks and Azure Standard SSD disks aren't supported as
Azure shared disks for SAP workloads.
Azure shared disks with Premium SSD disks are supported for SAP deployment in
availability sets and availability zones.
Azure shared disks with Premium SSD disks come with two storage options:
Locally redundant storage (LRS) for Premium SSD shared disks ( skuName value of
Premium_LRS ) is supported with deployment in availability sets.
Zone-redundant storage (ZRS) for Premium SSD shared disks ( skuName value of
Premium_ZRS ) is supported with deployment in availability zones.
The Azure shared disk value maxShares determines how many cluster nodes can
use the shared disk. For an SAP ASCS/SCS instance, you typically configure two
nodes in WSFC. You then set the value for maxShares to 2 .
An Azure proximity placement group (PPG) isn't required for Azure shared disks.
But for SAP deployment with PPGs, follow these guidelines:
If you're using PPGs for an SAP system deployed in a region, all virtual machines
that share a disk must be part of the same PPG.
If you're using PPGs for an SAP system deployed across zones, as described in
Proximity placement groups with zonal deployments, you can attach
Premium_ZRS storage to virtual machines that share a disk.
For more information, review the Limitations section of the documentation for Azure
shared disks.
) Important
The SID for each database management system (DBMS) must have its own
dedicated WSFC cluster.
SAP application servers that belong to one SAP SID must have their own
dedicated virtual machines (VMs).
A mix of Enqueue Replication Server 1 (ERS1) and Enqueue Replication Server
2 (ERS2) in the same cluster is not supported.
Supported OS versions
Windows Server 2016, 2019, and later are supported. Use the latest datacenter images.
We strongly recommend using at least Windows Server 2019 Datacenter, for these
reasons:
Architecture
Both ERS1 and ERS2 are supported in a multi-SID configuration. A mix of ERS1 and ERS2
isn't supported in the same cluster.
The following example shows two SAP SIDs. Both have an ERS1 architecture where:
SAP SID1 is deployed on a shared disk with ERS1. The ERS instance is installed on a
local host and on a local drive.
SAP SID1 has its own virtual IP address (SID1 (A)SCS IP1), which is configured on
the Azure internal load balancer.
SAP SID2 is deployed on a shared disk with ERS1. The ERS instance is installed on a
local host and on a local drive.
SAP SID2 has its own virtual IP address (SID2 (A)SCS IP2), which is configured on the
Azure internal load balancer.
The next example also shows two SAP SIDs. Both have an ERS2 architecture where:
SAP SID1 is deployed on a shared disk with ERS2, which is clustered and is deployed
on a local drive.
SAP SID1 has its own virtual IP address (SID1 (A)SCS IP1), which is configured on
the Azure internal load balancer.
SAP ERS2 has its own virtual IP address (SID1 ERS2 IP2), which is configured on the
Azure internal load balancer.
SAP SID2 is deployed on a shared disk with ERS2, which is clustered and is deployed
on a local drive.
SAP SID2 has its own virtual IP address (SID2 (A)SCS IP3), which is configured on the
Azure internal load balancer.
SAP ERS2 has its own virtual IP address (SID2 ERS2 IP4), which is configured on the
Azure internal load balancer.
Infrastructure preparation
You install a new SAP SID PR2 instance, in addition to the existing clustered SAP PR1
ASCS/SCS instance.
Here are the details for an SAP deployment in an Azure availability set:
Here are the details for an SAP deployment in Azure availability zones:
The steps in this article remain the same for both deployment types. But if your cluster is
running in an availability set, you need to deploy LRS for Azure Premium SSD shared
disks ( Premium_LRS ). If your cluster is running in an availability zone, you need to deploy
ZRS for Azure Premium SSD shared disks ( Premium_ZRS ).
Configure an additional frontend IP and load-balancing rule for the SAP SID PR2 system
on the existing load balancer by using the following guidelines. This section assumes
that the configuration of a standard internal load balancer for SAP SID PR1 is already
in place, as described in create load balancer.
1. Open the same standard internal load balancer that you have created for SAP SID,
PR1 system.
2. Frontend IP Configuration: Create frontend IP (example: 10.0.0.45).
3. Backend Pool: The backend pool is the same as that of the SAP SID PR1 system.
4. Inbound rules: Create load balancing rule.
7 Note
When VMs without public IP addresses are placed in the back-end pool of an
internal (no public IP address) Standard Azure load balancer, there will be no
outbound internet connectivity unless you perform additional configuration to
allow routing to public endpoints. For details on how to achieve outbound
connectivity, see Public endpoint connectivity for virtual machines using Azure
Standard Load Balancer in SAP high-availability scenarios.
PowerShell
$ResourceGroupName = "MyResourceGroup"
$location = "MyRegion"
$SAPSID = "PR2"
$DiskSizeInGB = 512
$DiskName = "$($SAPSID)ASCSSharedDisk"
$NumberOfWindowsClusterNodes = 2
# For SAP deployment in an availability set, use this storage SkuName value
$SkuName = "Premium_LRS"
# For SAP deployment in an availability zone, use this storage SkuName value
$SkuName = "Premium_ZRS"
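The disk-creation commands themselves are not shown above. As a sketch using the variables from the preceding block, an Azure shared disk can be created with the Az.Compute cmdlets like this:

```powershell
# Create an empty managed disk; -MaxSharesCount makes it an Azure shared disk
# that both WSFC cluster nodes can attach (maxShares = 2).
$diskConfig = New-AzDiskConfig -Location $location -SkuName $SkuName `
    -CreateOption Empty -DiskSizeGB $DiskSizeInGB `
    -MaxSharesCount $NumberOfWindowsClusterNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName `
    -DiskName $DiskName -Disk $diskConfig
```

The disk is then attached to each cluster VM with Add-AzVMDataDisk -CreateOption Attach, followed by Update-AzVM.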
The IP address that you assigned to the virtual host name in DNS must be the
same as the IP address that you assigned in Azure Load Balancer.
2. If you're using a clustered instance of SAP ERS2, you need to reserve in DNS a
virtual host name for ERS2.
The IP address that you assigned to the virtual host name for ERS2 in DNS must be
the same as the IP address that you assigned in Azure Load Balancer.
3. To define the IP address assigned to the virtual host name, select DNS Manager >
Domain.
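As an alternative to the DNS Manager UI, the A records can be created with the DnsServer PowerShell module on the DNS server. The zone name, host names, and IP addresses below are placeholders:

```powershell
# Register the virtual host names for the clustered ASCS/SCS and ERS2 instances
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ascs-vir" -IPv4Address "10.0.0.45"
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ers-vir" -IPv4Address "10.0.0.46"
```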
SAP installation
Install the SAP first cluster node
Follow the SAP-described installation procedure. Be sure to select First Cluster Node as
the option for starting installation. Select Cluster Shared Disk as the configuration
option. Choose the newly created shared disk.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if you're using
ERS1:
PowerShell
enque/encni/set_so_keepalive = true
For both ERS1 and ERS2, make sure that the keepalive OS parameters are set as
described in SAP note 1410736 .
2. To apply the changes to the SAP profile parameter, restart the SAP ASCS/SCS
instance.
However, this approach won't work in some cluster configurations because only one
instance is active. The other instance is passive and can't accept any of the workload. A
probe functionality helps when the Azure internal load balancer detects which instance
is active and targets only the active instance.
) Important
In this example configuration, the probe port is set to 620&lt;nr&gt;, where &lt;nr&gt; is the SAP
instance number. For SAP ASCS with instance number 02, it's 62002.
You need to adjust the configuration to match your SAP instance numbers and your
SAP SID.
To add a probe port, run this PowerShell module on one of the cluster VMs:
PowerShell
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
PR2 -ProbePort 62002
If you're using ERS2 with instance number 12, configure a probe port. There's no
need to configure a probe port for ERS1. ERS2 with instance number 12 is
clustered, whereas ERS1 isn't clustered.
PowerShell
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID
PR2 -ProbePort 62012 -IsSAPERSClusteredInstance $True
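To confirm that both probe ports were applied, you can query the clustered IP resources. This is a sketch; the resource names follow the default SWPM naming convention for SID PR2:

```powershell
# Show the ProbePort parameter of the ASCS and ERS cluster IP resources
"SAP PR2 IP", "SAP PR2 ERS IP" | ForEach-Object {
    Get-ClusterResource $_ | Get-ClusterParameter |
        Where-Object { $_.Name -eq "ProbePort" } |
        Select-Object ClusterObject, Name, Value
}
```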
PowerShell
function Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource {
<#
.SYNOPSIS
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer health probe port on the SAP $SAPSID IP cluster
resource.
.DESCRIPTION
Set-AzureLoadBalancerHealthProbePortOnSAPClusterIPResource will set a new
Azure Load Balancer health probe port on the SAP $SAPSID IP cluster
resource.
It will also restart the SAP cluster group (default behavior), to activate
the changes.
You need to run it on one of the SAP ASCS/SCS Windows cluster nodes.
The expectation is that the SAP group is installed with the official SWPM
installation tool, which will set the default expected naming convention
for:
- SAP cluster group: SAP $SAPSID
- SAP cluster IP address resource: SAP $SAPSID IP
.PARAMETER SAPSID
SAP SID - three characters, starting with a letter.
.PARAMETER ProbePort
Azure Load Balancer health check probe port.
.PARAMETER RestartSAPClusterGroup
Optional parameter. Default value is $True, so the SAP cluster group will
be restarted to activate the changes.
.PARAMETER IsSAPERSClusteredInstance
Optional parameter. Default value is $False.
If it's set to $True, then handle the clustered new SAP ERS2 instance.
.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource SAP AB1 IP, and
restart the SAP cluster group SAP AB1 to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000
.EXAMPLE
# Set the probe port to 62000 on SAP cluster resource SAP AB1 IP. SAP
cluster group SAP AB1 is not restarted, so the changes are not active.
# To activate the changes, you need to manually restart the SAP AB1 cluster
group.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62000 -RestartSAPClusterGroup $False
.EXAMPLE
# Set the probe port to 62001 on SAP cluster resource SAP AB1 ERS IP. SAP
cluster group SAP AB1 ERS is restarted to activate the changes.
Set-AzureLoadBalancerHealthCheckProbePortOnSAPClusterIPResource -SAPSID AB1
-ProbePort 62001 -IsSAPERSClusteredInstance $True
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[ValidateLength(3,3)]
[string]$SAPSID,
[Parameter(Mandatory=$True)]
[ValidateNotNullOrEmpty()]
[int] $ProbePort,
[Parameter(Mandatory=$False)]
[bool] $RestartSAPClusterGroup = $True,
[Parameter(Mandatory=$False)]
[bool] $IsSAPERSClusteredInstance = $False
)
BEGIN{}
PROCESS{
try{
if($IsSAPERSClusteredInstance){
#Handle clustered SAP ERS instance
$SAPClusterRoleName = "SAP $SAPSID ERS"
$SAPIPresourceName = "SAP $SAPSID ERS IP"
}else{
#Handle clustered SAP ASCS/SCS instance
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
}
$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value
        # Completion sketch -- the original listing is truncated at this point.
        # Apply the new probe port on the cluster IP resource; it becomes active
        # after the cluster group is restarted.
        Get-ClusterResource $SAPIPresourceName | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"Subnetmask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}
        if($RestartSAPClusterGroup){
            Write-Output ""
            Write-Output "Activating changes..."
            Stop-ClusterGroup -Name $SAPClusterRoleName
            Start-ClusterGroup -Name $SAPClusterRoleName
        }
    }catch{
        Write-Error $_.Exception.Message
    }
    }
    END {}
}
2. Install SAP on the second cluster node by following the steps that are described in
the SAP installation guide.
3. Install the SAP Primary Application Server (PAS) instance on the virtual machine
that is designated to host the PAS.
Follow the process described in the SAP installation guide. There are no
dependencies on Azure.
4. Install additional SAP application servers on the virtual machines that are
designated to host SAP application server instances.
Follow the process described in the SAP installation guide. There are no
dependencies on Azure.
1. Verify that the SAP system can successfully fail over from node A to node B. In this
example, the test is for SAP SID PR2.
Make sure that each SAP SID can successfully move to the other cluster node.
Choose one of these options to initiate a failover of the SAP <SID> cluster group
from cluster node A to cluster node B:
PowerShell
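As one of those options, the failover can be triggered with PowerShell from a cluster node. The node name below is a placeholder:

```powershell
# Move the SAP PR2 cluster group to the other node and show the new owner
Move-ClusterGroup -Name "SAP PR2" -Node "pr2-ascs-1"
Get-ClusterGroup -Name "SAP PR2" | Select-Object Name, OwnerNode, State
```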
2. Restart cluster node A within the Windows guest operating system. This step
initiates an automatic failover of the SAP <SID> cluster group from node A to
node B.
3. Restart cluster node A from the Azure portal. This step initiates an automatic
failover of the SAP <SID> cluster group from node A to node B.
4. Restart cluster node A by using Azure PowerShell. This step initiates an automatic
failover of the SAP <SID> cluster group from node A to node B.
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and shared disk for an SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and shared disk for an
SAP ASCS/SCS instance
SAP ASCS/SCS instance multi-SID high
availability with Windows Server
Failover Clustering and shared disk on
Azure
Article • 02/10/2023
Windows
If you have an SAP deployment, you must use an internal load balancer to create a
Windows cluster configuration for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP
multi-SID configuration by installing additional SAP ASCS/SCS clustered instances into
an existing Windows Server Failover Clustering (WSFC) cluster with shared disk, using
SIOS to simulate shared disk. When this process is completed, you have configured an
SAP multi-SID cluster.
7 Note
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load
balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to
the maximum number of private front-end IPs for each Azure internal load
balancer.
For more information about load-balancer limits, see the "Private front-end IP per load
balancer" section in Networking limits: Azure Resource Manager.
) Important
We recommend that you use the Azure Az PowerShell module to interact with
Azure. See Install Azure PowerShell to get started. To learn how to migrate to the
Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by
using file share, as shown in this diagram.
) Important
The SAP ASCS/SCS instances must share the same WSFC cluster.
Each database management system (DBMS) SID must have its own dedicated
WSFC cluster.
SAP application servers that belong to one SAP system SID must have their
own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in
the same cluster is not supported.
The complete landscape with two high-availability SAP systems would look like this:
Prepare the infrastructure for an SAP multi-SID
scenario
To prepare your infrastructure, you can install an additional SAP ASCS/SCS instance with
the following parameters:
SAP ASCS/SCS virtual host IP address (additional Azure internal load balancer IP address): 10.0.0.50
7 Note
For SAP ASCS/SCS cluster instances, each IP address requires a unique probe port.
For example, if one IP address on an Azure internal load balancer uses probe port
62300, no other IP address on that load balancer can use probe port 62300.
For our purposes, because probe port 62300 is already reserved, we are using
probe port 62350.
You can install additional SAP ASCS/SCS instances in the existing two-node WSFC cluster.
The virtual host name of the additional instance is pr5-sap-cl, and its IP address is 10.0.0.50.
The new host name and IP address are displayed in DNS Manager, as shown in the
following screenshot:
7 Note
The new IP address that you assign to the virtual host name of the additional
ASCS/SCS instance must be the same as the new IP address that you assigned to
the SAP Azure load balancer.
The following script adds a new IP address to an existing load balancer. Update the
PowerShell variables for your environment. The script creates all the required load-
balancing rules for all SAP ASCS/SCS ports.
PowerShell
$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName ="lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"
Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -
ForegroundColor Green
$ILB | Set-AzLoadBalancer
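The script above is shown only in part. A hedged sketch of the elided middle, using the Az.Network cmdlets, looks like the following; the load balancer name, frontend IP address, probe port 62350, and SAP port 3650 are example values:

```powershell
# Retrieve the existing internal load balancer (name is an example)
$ILB = Get-AzLoadBalancer -ResourceGroupName "MyResourceGroup" -Name "pr1-lb-ascs"

$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName = "lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"

# Add the new frontend IP for the additional SAP SID
$ILB | Add-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName `
    -PrivateIpAddress "10.0.0.50" -SubnetId $ILB.FrontendIpConfigurations[0].Subnet.Id | Out-Null

# Add the health probe for the new instance (probe port 62350 in this scenario)
$ILB | Add-AzLoadBalancerProbeConfig -Name $LBProbeName -Protocol Tcp -Port 62350 `
    -IntervalInSeconds 5 -ProbeCount 2 | Out-Null

# Add one load-balancing rule per SAP port; 3650 is one example port
$FrontEnd = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $ILB -Name $FrontEndConfigurationName
$Probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $ILB -Name $LBProbeName
$ILB | Add-AzLoadBalancerRuleConfig -Name "lbRuleASCS3650" -Protocol Tcp `
    -FrontendPort 3650 -BackendPort 3650 -FrontendIpConfiguration $FrontEnd `
    -BackendAddressPool $ILB.BackendAddressPools[0] -Probe $Probe -EnableFloatingIP | Out-Null

# Push all changes to Azure
$ILB | Set-AzLoadBalancer
```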
Do the following:
1. Add an additional disk or disks of the same size (which you need to stripe) to each
of the cluster nodes, and format them.
2. Configure storage replication with SIOS DataKeeper.
This procedure assumes that you have already installed SIOS DataKeeper on the WSFC
cluster machines. If you have installed it, you must now configure replication between
the machines. The process is described in detail in Install SIOS DataKeeper Cluster
Edition for the SAP ASCS/SCS cluster share disk.
Deploy VMs for SAP application servers and the DBMS
cluster
To complete the infrastructure preparation for the second SAP system, do the following:
1. Deploy dedicated VMs for the SAP application servers, and put each in its own
dedicated availability group.
2. Deploy dedicated VMs for the DBMS cluster, and put each in its own dedicated
availability group.
6. Open Windows Firewall ports for the SAP ASCS/SCS instance and probe port.
On both cluster nodes that are used for SAP ASCS/SCS instances, open all Windows
Firewall ports that SAP ASCS/SCS uses. These SAP ASCS/SCS instance ports are listed
in the chapter SAP ASCS/SCS ports.
For a list of all other SAP ports, see TCP/IP ports of all SAP products .
Also open the Azure internal load balancer probe port, which is 62350 in our
scenario. It is described in this article.
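For example, the probe port can be opened on each node with the NetSecurity module; the display name is arbitrary:

```powershell
# Allow inbound TCP traffic on the load balancer probe port 62350
New-NetFirewallRule -DisplayName "SAP ASCS/SCS probe port 62350" `
    -Direction Inbound -Protocol TCP -LocalPort 62350 -Action Allow
```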
7. Install the SAP primary application server on the new dedicated VM, as described
in the SAP installation guide.
8. Install the SAP additional application server on the new dedicated VM, as
described in the SAP installation guide.
Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
Install HA SAP NetWeaver with Azure
Files SMB
Article • 04/18/2023
Microsoft and SAP now fully support Azure Files premium Server Message Block (SMB)
file shares. SAP Software Provisioning Manager (SWPM) 1.0 SP32 and SWPM 2.0 SP09
(and later) support Azure Files premium SMB storage.
There are special requirements for sizing Azure Files premium SMB shares. This article
contains specific recommendations on how to distribute workloads, choose an adequate
storage size, and meet minimum installation requirements for Azure Files premium SMB.
High-availability (HA) SAP solutions need a highly available file share for hosting
sapmnt, transport, and interface directories. Azure Files premium SMB is a simple Azure
platform as a service (PaaS) solution for shared file systems for SAP on Windows
environments. You can use Azure Files premium SMB with availability sets and
availability zones. You can also use Azure Files premium SMB for disaster recovery (DR)
scenarios to another region.
7 Note
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948 .
The file share name sapmnt can be created once per storage account. It's possible
to create additional SAP system IDs (SIDs) as directories on the same /sapmnt share,
such as /sapmnt/<SID1> and /sapmnt/<SID2>.
Choose an appropriate size, IOPS, and throughput. A suggested size for the share
is 256 GB per SID. The maximum size for a share is 5,120 GB.
Azure Files premium SMB might not perform well for very large sapmnt shares with
more than 1 million files per storage account. Customers who have millions of
batch jobs that create millions of job log files should regularly reorganize them, as
described in SAP Note 16083 . If needed, you can move or archive old job logs to
another Azure Files premium SMB file share. If you expect sapmnt to be very large,
consider other options (such as Azure NetApp Files).
We recommend that you use a private network endpoint.
Avoid putting too many SIDs in a single storage account and its file share.
As general guidance, don't put together more than four nonproduction SIDs.
Don't put the entire development, production, and quality assurance system (QAS)
landscape in one storage account or file share. Failure of the share leads to
downtime of the entire SAP landscape.
We recommend that you put the sapmnt and transport directories on different
storage accounts, except in smaller systems. During the installation of the SAP
primary application server, SAPinst will request the transport host name. Enter the
FQDN of a different storage account as <storage_account>.file.core.windows.net.
Don't put the file system used for interfaces onto the same storage account as
/sapmnt/<SID>.
You must add the SAP users and groups to the sapmnt share. Set the Storage File
Data SMB Share Elevated Contributor permission for them in the Azure portal.
Distributing transport, interface, and sapmnt among separate storage accounts improves
throughput and resiliency. It also simplifies performance analysis. If you put many SIDs
and other file systems in a single Azure Files storage account, and the storage account's
performance is poor because you're hitting the throughput limits, it's difficult to identify
which SID or application is causing the problem.
Planning
) Important
The installation of SAP HA systems on Azure Files premium SMB with Active
Directory integration requires cross-team collaboration. We recommend that the
following teams work together to achieve tasks:
Azure team: Set up and configure storage accounts, script execution, and
Active Directory synchronization.
Active Directory team: Create user accounts and groups.
Basis team: Run SWPM and set access control lists (ACLs), if necessary.
Here are prerequisites for the installation of SAP NetWeaver HA systems on Azure Files
premium SMB with Active Directory integration:
Installation sequence
7 Note
1. On the Basics tab, create a storage account with either premium zone-redundant
storage (ZRS) or locally redundant storage (LRS). Customers with zonal deployment
should choose ZRS. Here, the administrator chooses between a Standard and a Premium
account; for Azure Files premium SMB, select Premium.
4. Create the sapmnt file share with an appropriate size. The suggested size is 256 GB,
which delivers 650 IOPS, 75-MB/sec egress, and 50-MB/sec ingress.
5. Download the Azure Files GitHub content and run the script.
The user who's running the script must have permission to create objects in
the Active Directory domain that contains the SAP servers. Typically, an
organization uses a Domain Administrator account such as
SAPCONT_ADMIN@SAPCONTOSO.local.
Before the user runs the script, confirm that this Active Directory domain user
account is synchronized with Azure AD. For example, open the Azure portal, go
to the Azure AD users list, and check that the user
SAPCONT_ADMIN@SAPCONTOSO.local exists as a synchronized Azure AD user
account.
Grant the Contributor role-based access control (RBAC) role to this Azure AD
user account for the resource group that contains the storage account that
holds the file share. In this example, the user
SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com is granted the
Contributor role to the respective resource group.
The user should run the script while logged on to a Windows Server instance
by using an Active Directory domain user account with the permission as
specified earlier.
In this example scenario, the Active Directory administrator would log on to the
Windows Server instance as SAPCONT_ADMIN@SAPCONTOSO.local. When the
administrator is using the PowerShell command Connect-AzAccount , the
administrator connects as user SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com.
Ideally, the Active Directory administrator and the Azure administrator should work
together on this task.
) Important
7. Check the ACL on the sapmnt file share after the installation. Then add the
DOMAIN\CLUSTER_NAME$ account, DOMAIN\<sid>adm account,
DOMAIN\SAPService<SID> account, and SAP_<SID>_GlobalAdmin group. These
accounts and group should have full control of the sapmnt directory.
) Important
) Important
To initialize the Windows ACL for the SMB share, mount the share once to a
drive letter.
The storage account key is the password, and the user is Azure\<storage account name>.
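To initialize the ACL as described, the share can be mounted once from an elevated command prompt; a sketch, assuming placeholder account and key values:

```powershell
# Placeholders: <storage_account> and <storage_account_key> are illustrative,
# not values from this article. Replace them before running.
net use S: "\\<storage_account>.file.core.windows.net\sapmnt" `
    /user:Azure\<storage_account> <storage_account_key>
```

After the Windows ACL is set, the drive mapping can be removed again with net use S: /delete.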
Complete SAP Basis tasks
An SAP Basis administrator should complete these tasks:
1. Install the Windows cluster on ASCS/ERS nodes and add the cloud witness.
2. The first cluster node installation asks for the Azure Files SMB storage account
name. Enter the FQDN <storage_account_name>.file.core.windows.net. If SAPinst
doesn't accept more than 13 characters, the SWPM version is too old.
3. Modify the SAP profile of the ASCS/SCS instance.
4. Update the probe port for the SAP <SID> role in Windows Server Failover Cluster
(WSFC).
5. Continue with SWPM installation for the second ASCS/ERS node. SWPM requires
only the path of the profile directory. Enter the full UNC path to the profile
directory.
6. Enter the UNC profile path for the database and for the installation of the primary
application server (PAS) and additional application server (AAS).
7. The PAS installation asks for the transport host name. Provide the FQDN of a
separate storage account name for the transport directory.
8. Verify the ACLs on the SID and transport directory.
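The ACL check in step 8 can be done from one of the cluster nodes; a sketch, assuming a hypothetical storage account name and SID PR1:

```powershell
# List the ACL entries on the sapmnt SID directory (placeholder names).
Get-Acl "\\<storage_account>.file.core.windows.net\sapmnt\PR1" |
    Select-Object -ExpandProperty Access |
    Format-Table IdentityReference, FileSystemRights, AccessControlType
```

The output should include the DOMAIN\CLUSTER_NAME$, DOMAIN\<sid>adm, and DOMAIN\SAPService<SID> accounts and the SAP_<SID>_GlobalAdmin group with full control.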
After a DR event and failover of the ASCS instance to the DR region, change the
SAPGLOBALHOST profile parameter to point to Azure Files SMB in the DR region. Perform
the same preparation steps on the DR storage account to join the storage account to
Active Directory and assign RBAC roles for SAP users and groups.
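The profile change described above might look as follows; the storage account names are assumptions for illustration:

```
# DEFAULT.PFL before failover
SAPGLOBALHOST = <primary_account>.file.core.windows.net

# DEFAULT.PFL after failover to the DR region
SAPGLOBALHOST = <dr_account>.file.core.windows.net
```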
Troubleshooting
The PowerShell scripts that you downloaded earlier contain a debug script to conduct
basic checks for validating the configuration.
PowerShell
Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Cluster to reduce the total number of VMs.
This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.
) Important
Installing a local SAP application server on a SQL Server Always On node is not
supported.
Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). Using Azure Files SMB helps protect these SPOFs in a Windows environment.
Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
a reduction of the memory configuration by 2 GB for either SQL Server or the SAP
application server.
The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
) Important
Using Azure Files SMB for any SQL Server volume is not supported.
7 Note
The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
High availability for SAP NetWeaver on
Azure VMs on Windows with Azure
NetApp Files(SMB) for SAP applications
Article • 02/10/2023
This article describes how to deploy, configure the virtual machines, install the cluster
framework, and install a highly available SAP NetWeaver 7.50 system on Windows VMs,
using SMB on Azure NetApp Files.
The database layer isn't covered in detail in this article. We assume that the Azure virtual
network has already been created.
Overview
SAP developed a new approach, and an alternative to cluster shared disks, for clustering
an SAP ASCS/SCS instance on a Windows failover cluster. Instead of using cluster shared
disks, one can use an SMB file share to deploy SAP global host files. Azure NetApp Files
supports SMBv3 (along with NFS) with NTFS ACL using Active Directory. Azure NetApp
Files is automatically highly available, as it is a PaaS service. These features make Azure
NetApp Files a great option for hosting the SMB file share for the SAP global directory.
Both Azure Active Directory (AD) Domain Services and Active Directory Domain Services
(AD DS) are supported. You can use existing Active Directory domain controllers with
Azure NetApp Files. Domain controllers can be in Azure as virtual machines, or on
premises via ExpressRoute or S2S VPN. In this article, we will use Domain controller in an
Azure VM.
High availability (HA) for SAP NetWeaver central services requires shared storage. To
achieve that on Windows, it was previously necessary to build either a SOFS cluster or to
use cluster shared disk software like SIOS. Now it is possible to achieve SAP NetWeaver
HA by using shared storage deployed on Azure NetApp Files. Using Azure NetApp Files
for the shared storage eliminates the need for either SOFS or SIOS.
7 Note
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948
The prerequisites for an SMB file share are:
The share for the SAP Central services in this reference architecture is offered by Azure
NetApp Files:
Create and mount SMB volume for Azure
NetApp Files
Perform the following steps, as preparation for using Azure NetApp Files.
1. Create an Azure NetApp account by following the steps described in Create a NetApp
account.
3. Azure NetApp Files resources must reside in a delegated subnet. Follow the
instructions in Delegate a subnet to Azure NetApp Files to create the delegated
subnet.
) Important
When creating the Active Directory connection, make sure to enter SMB
Server (Computer Account) Prefix no longer than 8 characters to avoid the 13
characters hostname limitation for SAP Applications (a suffix is automatically
added to the SMB Computer Account name).
The hostname limitations for SAP applications are described in 2718300 -
Physical and Virtual hostname length limitations and 611361 - Hostnames
of SAP ABAP Platform servers .
5. Create the Azure NetApp Files SMB volume by following the instructions in Add an
SMB volume.
Tip
You can find the instructions on how to mount the Azure NetApp Files volume by
navigating in the Azure portal to the Azure NetApp Files object, selecting the
Volumes blade, and then selecting Mount Instructions.
Important considerations
When considering Azure NetApp Files for the SAP NetWeaver architecture, be aware of
the following important considerations:
The minimum capacity pool size is 4 TiB. The capacity pool size can be increased in
1-TiB increments.
The minimum volume size is 100 GiB.
The selected virtual network must have a subnet, delegated to Azure NetApp Files.
The throughput and performance characteristics of an Azure NetApp Files volume
are a function of the volume quota and service level, as documented in Service levels
for Azure NetApp Files. While sizing the SAP Azure NetApp Files volumes, make sure
that the resulting throughput meets the application requirements.
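As a rough illustration of that sizing rule (the per-TiB throughput figure below is an assumption; check Service levels for Azure NetApp Files for current values):

```powershell
# Throughput of an Azure NetApp Files volume ≈ volume quota × service-level factor.
$volumeQuotaTiB    = 4    # volume quota in TiB
$premiumMiBsPerTiB = 64   # assumed Premium service-level throughput per TiB
$throughputMiBs = $volumeQuotaTiB * $premiumMiBsPerTiB
# A 4-TiB Premium volume would therefore deliver about 256 MiB/s.
```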
Prepare the infrastructure for SAP HA by using
a Windows failover cluster
1. Set the ASCS/SCS load balancing rules for the Azure internal load balancer.
2. Add Windows virtual machines to the domain.
3. Add registry entries on both cluster nodes of the SAP ASCS/SCS instance.
4. Set up a Windows Server failover cluster for an SAP ASCS/SCS instance.
5. If you are using Windows Server 2016, we recommend that you configure Azure
Cloud Witness.
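Step 3 above refers to the TCP/IP keep-alive registry entries commonly set for SAP on Azure. A sketch with assumed values; verify against the referenced cluster-preparation article before applying:

```powershell
$tcpip = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
# Assumed keep-alive values in milliseconds; run on both cluster nodes
# and restart the nodes for the settings to take effect.
Set-ItemProperty -Path $tcpip -Name KeepAliveTime     -Value 120000 -Type DWord
Set-ItemProperty -Path $tcpip -Name KeepAliveInterval -Value 120000 -Type DWord
```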
3. When prompted at step SAP System Cluster Parameters, enter the host name for
the Azure NetApp Files SMB share you already created as File Share Host Name. In
this example, the SMB share host name is anfsmb-9562.
) Important
If the Prerequisite Checker results in SWPM show that the Swap Size condition is
not met, you can adjust the swap size by going to My Computer > System
Properties > Performance Settings > Advanced > Virtual memory > Change.
gw/netstat_once 0
enque/encni/set_so_keepalive true
service/ha_check_node 1
A DBMS instance
A primary SAP application server
An additional SAP application server
2. Restart cluster node A. The SAP cluster resources will move to cluster node B.
Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Cluster to reduce the total number of VMs.
This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.
) Important
Installing a local SAP application server on a SQL Server Always On node is not
supported.
Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). Using Azure NetApp Files SMB helps protect these SPOFs in a Windows
environment.
Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
a reduction of the memory configuration by 2 GB for either SQL Server or the SAP
application server.
SAP Application Servers on WSFC nodes using NetApp
Files SMB
7 Note
The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
) Important
Using Azure NetApp Files SMB for any SQL Server volume is not supported.
7 Note
The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure (large instances), see SAP HANA (large instances) high availability
and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP
HANA on Azure VMs, see High Availability of SAP HANA on Azure Virtual Machines
(VMs)
Cluster an SAP ASCS/SCS instance on a
Windows failover cluster by using a file
share in Azure
Article • 02/10/2023
Windows
A failover cluster is a group of 1+n independent servers (nodes) that work together to
increase the availability of applications and services. If a node failure occurs, Windows
Server failover clustering calculates the number of failures that can occur and still
maintain a healthy cluster to provide applications and services. You can choose from
different quorum modes to achieve failover clustering.
Prerequisites
Before you begin the tasks that are described in this article, review the following articles
and SAP notes:
7 Note
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
systems with SAP Kernel 7.22 (and later). For details, see SAP Note 2698948
The Azure Load Balancer service provides an internal load balancer for Azure. With the
internal load balancer, clients reach the cluster over the cluster virtual IP address.
Deploy the internal load balancer in the resource group that contains the cluster nodes.
Then, configure all necessary port forwarding rules by using the probe ports of the
internal load balancer. The clients can connect via the virtual host name. The DNS server
resolves the cluster IP address. The internal load balancer handles port forwarding to the
active node of the cluster.
Figure 1: Windows Server failover clustering configuration in Azure without a shared disk
7 Note
An SMB file share is an alternative to using cluster shared disks for clustering SAP
ASCS/SCS instances.
SAP central services (with its own file structure and message and enqueue
processes) are separate from the SAP global host files.
SAP central services run under an SAP ASCS/SCS instance.
SAP ASCS/SCS instance is clustered and is accessible by using the <ASCS/SCS
virtual host name> virtual host name.
SAP global files are placed on the SMB file share and are accessed by using the
<SAP global host> host name: \\<SAP global host>\sapmnt\<SID>\SYS...
The SAP ASCS/SCS instance is installed on a local disk on both cluster nodes.
The <ASCS/SCS virtual host name> network name is different from <SAP global
host>.
The SAP <SID> cluster role does not contain cluster shared disks or a generic file share
cluster resource.
Figure 3: SAP <SID> cluster role resources for using a file share
) Important
Scale-out file shares are fully supported in the Microsoft Azure cloud and in on-premises
environments.
A scale-out file share offers a highly available and horizontally scalable SAPMNT file
share.
Storage Spaces Direct is used as a shared disk for a scale-out file share. You can use
Storage Spaces Direct to build highly available and scalable storage using servers with
local storage. Shared storage that is used for a scale-out file share, like for SAP global
host files, is not a single point of failure.
The virtual machines used to build the Storage Spaces Direct cluster need to be
deployed in an Azure availability set.
For disaster recovery of a Storage Spaces Direct Cluster, you can use Azure Site
Recovery Services.
It is not supported to stretch the Storage Spaces Direct cluster across different
Azure availability zones.
) Important
You cannot rename the SAPMNT file share, which points to <SAP global host>. SAP
supports only the share name "sapmnt."
For more information, see SAP Note 2492395 - Can the share name sapmnt be
changed?
Configure SAP ASCS/SCS instances and a scale-out file
share in two clusters
You must deploy the SAP ASCS/SCS instances in a separate cluster, with their own SAP
<SID> cluster role. In this case, you configure the scale-out file share on another cluster,
with another cluster role.
) Important
The setup must meet the following requirement: the SAP ASCS/SCS instances and
the SOFS share must be deployed in separate clusters.
) Important
In this scenario, the SAP ASCS/SCS instance is configured to access the SAP global
host by using UNC path \\<SAP global host>\sapmnt\<SID>\SYS.
Figure 5: An SAP ASCS/SCS instance and a scale-out file share deployed in two clusters
Optional configurations
The following diagrams show multiple SAP instances on Azure VMs running Windows
Server Failover Cluster to reduce the total number of VMs.
This configuration can be either local SAP application servers on an SAP ASCS/SCS
cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.
) Important
Installing a local SAP application server on a SQL Server Always On node is not
supported.
Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure
(SPOFs). Using WSFC helps protect these SPOFs in a Windows environment.
Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend
a reduction of the memory configuration by 2 GB for either SQL Server or the SAP
application server.
7 Note
The diagram shows the use of additional local disks. This setup is optional for
customers who won't install application software on the OS drive (drive C).
) Important
In the Azure cloud, each cluster that is used for SAP and scale-out file shares must
be deployed in its own Azure availability set or across Azure Availability Zones. This
ensures distributed placement of the cluster VMs across the underlying Azure
infrastructure. Availability Zone deployments are supported with this technology.
Next steps
Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster
and file share for an SAP ASCS/SCS instance
Install SAP NetWeaver HA on a Windows failover cluster and file share for an SAP
ASCS/SCS instance
Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in
Azure
Storage Spaces Direct in Windows Server 2016
Deep dive: Volumes in Storage Spaces Direct
Prepare Azure infrastructure for SAP
high availability by using a Windows
failover cluster and file share for SAP
ASCS/SCS instances
Article • 02/10/2023
This article describes the Azure infrastructure preparation steps that are needed to
install and configure high-availability SAP systems on a Windows Server Failover
Clustering cluster (WSFC), using scale-out file share as an option for clustering SAP
ASCS/SCS instances.
Prerequisite
Before you start the installation, review the following article:
PR1 00
Virtual host name role Virtual host name Static IP address Availability set
SAP global host name sapglobal Use IPs of all cluster nodes n/a
If you're using Enqueue Replication Server 2 (ERS2), perform the Azure Load Balancer
configuration for ERS2.
Add registry entries on both cluster nodes of the SAP ASCS/SCS instance.
If you use Windows Server 2016, we recommend that you configure Azure Cloud
Witness.
PowerShell
# Define the cluster nodes (hypothetical names; adjust to your environment)
$nodes = ("sofs-0", "sofs-1", "sofs-2")
# Test cluster
Test-Cluster -Node $nodes -Verbose
# Install cluster
$ClusterNetworkName = "sofs-cl"
$ClusterIP = "10.0.6.13"
New-Cluster -Name $ClusterNetworkName -Node $nodes -NoStorage -StaticAddress $ClusterIP -Verbose
) Important
We recommend that you have three or more cluster nodes for Scale-Out File Server
with three-way mirroring.
In the Scale-Out File Server Resource Manager template UI, you must specify the
VM count.
Figure 1: UI screen for Scale-Out File Server Resource Manager template with managed
disks
Figure 2: UI screen for the Scale-Out File Server Azure Resource Manager template without
managed disks
In the Storage Account Type box, select Premium Storage. All other settings are the
same as the settings for managed disks.
SameSubNetDelay = 2000
SameSubNetThreshold = 15
RouteHistoryLength = 30
These settings were tested with customers and offer a good compromise: they are
resilient enough against transient network disturbances, but they also provide fast
enough failover in real error conditions or VM failure.
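The three settings above map to cluster common properties and can be applied with PowerShell; a sketch:

```powershell
# Apply the heartbeat settings shown above to the local cluster.
$cluster = Get-Cluster
$cluster.SameSubnetDelay     = 2000   # ms between heartbeats
$cluster.SameSubnetThreshold = 15     # missed heartbeats tolerated before failover
$cluster.RouteHistoryLength  = 30     # route-history entries kept for diagnostics
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, RouteHistoryLength
```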
Next steps
Install SAP NetWeaver high availability on a Windows failover cluster and file share
for SAP ASCS/SCS instances
Install SAP NetWeaver high availability
on a Windows failover cluster and file
share for SAP ASCS/SCS instances on
Azure
Article • 02/10/2023
This article describes how to install and configure a high-availability SAP system on
Azure, with Windows Server Failover Cluster (WSFC) and Scale-Out File Server as an
option for clustering SAP ASCS/SCS instances.
Prerequisites
Before you start the installation, review the following articles:
) Important
Clustering SAP ASCS/SCS instances by using a file share is supported for SAP
NetWeaver 7.40 (and later), with SAP Kernel 7.49 (and later).
) Important
The setup must meet the following requirement: the SAP ASCS/SCS instances and
the SOFS share must be deployed in separate clusters.
We do not describe the Database Management System (DBMS) setup because setups
vary depending on the DBMS you use. However, we assume that high-availability
concerns with the DBMS are addressed with the functionalities that various DBMS
vendors support for Azure. Such functionalities include Always On or database mirroring
for SQL Server, and Oracle Data Guard for Oracle databases. In the scenario we use in
this article, we didn't add more protection to the DBMS.
There are no special considerations when various DBMS services interact with this kind
of clustered SAP ASCS/SCS configuration in Azure.
7 Note
The installation procedures of SAP NetWeaver ABAP systems, Java systems, and
ABAP+Java systems are almost identical. The most significant difference is that an
SAP ABAP system has one ASCS instance. The SAP Java system has one SCS
instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance
running in the same Microsoft failover cluster group. Any installation differences for
each SAP NetWeaver installation stack are explicitly mentioned. You can assume
that all other parts are the same.
Set security on the SAPMNT file share and folder with full control for:
The <DOMAIN>\SAP_<SID>_GlobalAdmin user group
The SAP ASCS/SCS cluster node computer objects <DOMAIN>\ClusterNode1$
and <DOMAIN>\ClusterNode2$
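The share-level part of this step can be scripted on the SOFS cluster; a sketch, assuming the hypothetical domain SAPCLUSTER and SID PR1:

```powershell
# Grant Full Control on the SAPMNT share; domain, group, and node names
# are placeholders for illustration.
$accounts = "SAPCLUSTER\SAP_PR1_GlobalAdmin",
            "SAPCLUSTER\ClusterNode1$",
            "SAPCLUSTER\ClusterNode2$"
foreach ($account in $accounts) {
    Grant-SmbShareAccess -Name "sapmnt" -AccountName $account -AccessRight Full -Force
}
```

NTFS folder permissions still need to be set separately with Set-Acl, as shown in the script that follows.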
To create a CSV volume with mirror resiliency, execute the following PowerShell cmdlet
on one of the SOFS cluster nodes:
PowerShell
PowerShell
$UsrSAPFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\"
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
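The snippet above is abbreviated: the $Acl object must be built before Set-Acl runs. A minimal sketch, assuming the hypothetical domain and group names used elsewhere in this article:

```powershell
$SAPSID       = "PR1"
$DomainName   = "SAPCLUSTER"
$GroupName    = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"
$UsrSAPFolder = "C:\ClusterStorage\SAP$SAPSID\usr\sap\"
# Read the current ACL and add Full Control for the SAP global admin group.
$Acl  = Get-Acl $UsrSAPFolder
$Rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    $GroupName, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($Rule)
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
```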
Create a virtual host name for the clustered
SAP ASCS/SCS instance
Create an SAP ASCS/SCS cluster network name (for example, pr1-ascs [10.0.6.7]), as
described in Create a virtual host name for the clustered SAP ASCS/SCS instance.
<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-
Availability System > ASCS/SCS instance > First cluster node.
<Product> > <DBMS> > Installation > Application Server ABAP (or Java) > High-
Availability System > ASCS/SCS instance > Additional cluster node.
gw/netstat_once 0
enque/encni/set_so_keepalive true
service/ha_check_node 1
A DBMS instance.
A primary SAP application server.
An additional SAP application server.
Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks - Official
SAP guidelines for high-availability file share
Windows
You can manage multiple virtual IP addresses by using an Azure internal load balancer.
If you have an SAP deployment, you can use an internal load balancer to create a
Windows cluster configuration for SAP Central Services (ASCS/SCS) instances.
This article focuses on how to move from a single ASCS/SCS installation to an SAP
multi-SID configuration by installing additional SAP ASCS/SCS clustered instances into
an existing Windows Server Failover Clustering (WSFC) cluster with file share. When this
process is completed, you have configured an SAP multi-SID cluster.
7 Note
This feature is available only in the Azure Resource Manager deployment model.
There is a limit on the number of private front-end IPs for each Azure internal load
balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to
the maximum number of private front-end IPs for each Azure internal load
balancer.
For more information about load-balancer limits, see the "Private front-end IP per load
balancer" section in Networking limits: Azure Resource Manager. Also consider using the
Azure Standard Load Balancer SKU instead of the basic SKU of the Azure load balancer.
Prerequisites
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance by
using file share, as shown in this diagram.
) Important
The SAP ASCS/SCS instances must share the same WSFC cluster.
Different SAP global host file shares that belong to different SAP SIDs must
share the same SOFS cluster.
The SAP ASCS/SCS instances and the SOFS shares must not be combined in
the same cluster.
Each database management system (DBMS) SID must have its own dedicated
WSFC cluster.
SAP application servers that belong to one SAP system SID must have their
own dedicated VMs.
A mix of Enqueue Replication Server 1 and Enqueue Replication Server 2 in
the same cluster is not supported.
SAP ASCS/SCS multi-SID architecture with file
share
The goal is to install multiple SAP Advanced Business Application Programming (ASCS)
or SAP Java (SCS) clustered instances in the same WSFC cluster, as illustrated here:
Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS
server.
Add an IP address to an existing Azure internal load balancer by using PowerShell.
These steps are described in Infrastructure preparation for an SAP multi-SID scenario.
Figure 3: Multi-SID SOFS is the same as the SAP Global Host name
) Important
For the second SAP <SID2> system, the same Volume1 and the same
<SAPGlobalHost> network name are used. Because you have already set SAPMNT
as the share name for various SAP systems, to reuse the <SAPGlobalHost> network
name, you must use the same Volume1.
For the <SID2> system, you must prepare the SAP Global Host ..\SYS.. folder on the
SOFS cluster.
To prepare the SAP Global Host for the <SID2> instance, execute the following
PowerShell script:
PowerShell
##################
# SAP multi-SID
##################
$SAPSID2 = "PR2"
$DomainName2 = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName2 = "$DomainName2\SAP_" + $SAPSID2 +
"_GlobalAdmin"
$UsrSAPFolder = "C:\ClusterStorage\Volume1\usr\sap\"
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
To create the second SOFS role with <SAPGlobalHost2>, execute this PowerShell script:
PowerShell
PowerShell
Create an SAP Global folder for the second <SID2>, and set file security.
PowerShell
# Create a folder for <SID2> on a second Volume2 and set file security
$SAPSID = "PR2"
$DomainName = "SAPCLUSTER"
$SAPSIDGlobalAdminGroupName = "$DomainName\SAP_" + $SAPSID + "_GlobalAdmin"
$UsrSAPFolder = "C:\ClusterStorage\Volume2\usr\sap\"
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
To create a SAPMNT file share on Volume2 with the <SAPGlobalHost2> host name for
the second SAP <SID2>, start the Add File Share wizard in Failover Cluster Manager.
Right-click the sapglobal2 SOFS cluster group, and then select Add File Share.
Next steps
Install an ASCS/SCS instance on a failover cluster with no shared disks : Official
SAP guidelines for an HA file share
Introduction
SAP instances like ASCS/SCS that are based on WSFC require the SAP files to be installed
on a shared drive. SAP supports either a cluster shared disk or a file share cluster to host
these files.
For installations based on Azure NetApp Files SMB, the option File Share Cluster needs
to be selected. In the follow-up screen, the File Share Host Name needs to be supplied.
SWPM selection screen for Cluster Share Host Name configuration
The Cluster Share Host Name is based on the chosen installation option. For Azure
NetApp Files SMB, it is the name used to join the NetApp account to the Active Directory
of the installation. In SAP terms, this name is the so-called SAPGLOBALHOST. SWPM
internally appends sapmnt to the host name, resulting in the \\SAPGLOBALHOST\sapmnt
share. Unfortunately, sapmnt can only be created once per NetApp account, which is
restrictive. DFS-N can be used to create virtual share names that can be assigned to
differently named shares. Rather than having to use sapmnt as the share name, as
mandated by SWPM, a unique name like sapmnt-sid can be used. The same is valid for
the global transport directory. Because trans is the expected name of the global
transport directory, the SAP DIR_TRANS profile parameter in the DEFAULT.PFL profile
needs to be adjusted.
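With DFS-N in place, the adjusted profile entry might look as follows; the namespace path is an assumption for illustration:

```
# DEFAULT.PFL: point the transport directory at the DFS-N namespace folder
DIR_TRANS = \\<domain>\sapmnt\trans
```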
Microsoft DFS-N
DFS Namespaces overview provides an introduction and the installation instructions for
DFS-N
Start the DFS Management console from the Windows Administrative Tools in the
Windows Server Start Menu.
Under the Namespace root, numerous Namespace folders can be created. Each of them
points to a Folder Target. While the name of the Folder Target can be chosen freely, the
name of the Namespace folder has to match a valid SAP SID. In combination, this creates
a valid, SWPM-compliant UNC share. This mechanism is also used to create the trans
directory in order to provide an SAP transport directory.
The screenshot shows an example for such a configuration.
By right-clicking the Namespace root, the Add Namespace Server dialog is opened.
In this screen, the name of the Namespace server can be supplied directly. Alternatively,
select the Browse button to list the existing servers.
This step opens the New Folder dialog. Supply either a valid SID (like P01 in this case), or
use trans if the intention is to create a transport directory.
In the portal, get the mount instructions for the volume that you want to use as a folder
target, copy the UNC name, and paste it as shown above.
This screen shows an example of the folder setup for an SAP landscape.
Deploy SAP dialog instances with SAP
ASCS/SCS high-availability VMs on
RHEL
Article • 02/29/2024
This article describes how to install and configure Primary Application Server (PAS) and
Additional Application Server (AAS) dialog instances on the same ABAP SAP Central
Services (ASCS)/SAP Central Services (SCS) high-availability cluster running on Red Hat
Enterprise Linux (RHEL).
References
Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in
Pacemaker
Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL
7.5+ and RHEL 8
SAP Note 1928533 , which has:
A list of Azure virtual machine (VM) sizes that are supported for the deployment
of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 lists the recommended OS settings for Red Hat Enterprise
Linux 7.x.
SAP Note 2772999 lists the recommended OS settings for Red Hat Enterprise
Linux 8.x.
SAP Note 2009879 has SAP HANA guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP Netweaver in Pacemaker cluster
General RHEL documentation:
High-Availability Add-On Overview
High-Availability Add-On Administration
High-Availability Add-On Reference
Azure-specific RHEL documentation:
Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
Overview
This article describes the cost optimization scenario where you deploy PAS and AAS
dialog instances with SAP ASCS/SCS and Enqueue Replication Server (ERS) instances in a
high-availability setup. To minimize the number of VMs for a single SAP system, you
want to install PAS and AAS on the same host where SAP ASCS/SCS and SAP ERS are
running. With SAP ASCS/SCS being configured in a high-availability cluster setup, you
want PAS and AAS also to be managed by the cluster. The configuration is basically an
addition to an already configured SAP ASCS/SCS cluster setup. In this setup, PAS and
AAS are installed on a virtual host name, and their instance directories are managed by
the cluster.
For this setup, PAS and AAS require a highly available instance directory
( /usr/sap/<SID>/D<nr> ). You can place the instance directory file system on the same
highly available storage that you used for the ASCS and ERS instance configuration. The
presented architecture showcases NFS on Azure Files or Azure NetApp Files for a highly
available instance directory for the setup.
The example shown in this article to describe deployment uses the following system
information:
Note
Install more SAP application instances on separate VMs if you want to scale out.
Note
To install more application servers on separate VMs, you can either use NFS shares
or a local managed disk for an instance directory file system. If you're installing
more application servers for an SAP J2EE system, /usr/sap/<SID>/J<nr> on NFS on
Azure Files isn't supported.
In a traditional SAP ASCS/SCS high-availability configuration, application server
instances running on separate VMs aren't affected when something happens to the SAP
ASCS and ERS cluster nodes. With the cost-optimized configuration, however, either
the PAS or the AAS instance restarts when one of the cluster nodes is affected.
See NFS on Azure Files considerations and Azure NetApp Files considerations
because the same considerations apply to this setup.
Prerequisites
The configuration described in this article is an addition to your already configured SAP
ASCS/SCS cluster setup. In this configuration, PAS and AAS are installed on virtual host
names, and their instance directories are managed by the cluster. Based on your storage, use
the steps described in the following articles to configure the SAPInstance resource for
the SAP ASCS and SAP ERS instances in the cluster.
NFS on Azure Files: Azure VMs high availability for SAP NW on RHEL with NFS on
Azure Files
Azure NetApp Files: Azure VMs high availability for SAP NW on RHEL with Azure
NetApp Files
After you install the ASCS, ERS, and database instances by using Software Provisioning
Manager (SWPM), follow the next steps to install the PAS and AAS instances.
1. Open the internal load balancer that was created for the SAP ASCS/SCS cluster
setup.
2. Frontend IP Configuration: Create two front-end IPs, one for PAS and another for
AAS (for example, 10.90.90.30 and 10.90.90.31).
3. Backend Pool: This pool remains the same because we're deploying PAS and AAS
on the same back-end pool.
4. Inbound rules: Create two load-balancing rules, one for PAS and another for AAS.
Follow the same steps for both load-balancing rules.
5. Frontend IP address: Select the front-end IP.
a. Backend pool: Select the back-end pool.
b. High availability ports: Select this option.
c. Protocol: Select TCP.
d. Health Probe: Create a health probe with the following details (applies for both
PAS and AAS):
i. Protocol: Select TCP.
ii. Port: For example, 620<Instance-no.> for PAS and 620<Instance-no.> for
AAS.
iii. Interval: Enter 5.
iv. Probe Threshold: Enter 2.
e. Idle timeout (minutes): Enter 30.
f. Enable Floating IP: Select this option.
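The load-balancer changes described above can also be scripted. The following Azure CLI sketch uses the front-end IP from this article's example for the PAS; the resource group, load balancer, virtual network, subnet, and back-end pool names are assumptions for illustration:

```shell
# Front-end IP for the PAS (repeat with 10.90.90.31 for the AAS).
az network lb frontend-ip create --resource-group rg-sap --lb-name lb-nw1 \
  --name frontend-pas --private-ip-address 10.90.90.30 \
  --vnet-name vnet-sap --subnet subnet-sap

# Health probe on port 620<Instance-no.> with a 5-second interval.
az network lb probe create --resource-group rg-sap --lb-name lb-nw1 \
  --name probe-pas --protocol Tcp --port 62002 --interval 5

# HA-ports rule (protocol All, ports 0) with floating IP and a 30-minute idle timeout.
az network lb rule create --resource-group rg-sap --lb-name lb-nw1 \
  --name rule-pas --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend-pas --backend-pool-name bp-sap \
  --probe-name probe-pas --floating-ip true --idle-timeout 30
```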
Important
When VMs without public IP addresses are placed in the back-end pool of an internal
(no public IP address) Standard Azure Load Balancer instance, there's no outbound
internet connectivity unless more configuration is performed to allow routing to public
endpoints. For steps on how to achieve outbound connectivity, see Public endpoint
connectivity for virtual machines using Azure Standard Load Balancer in SAP high-
availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
Bash
sudo vi /etc/hosts
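As a sketch, the entries to add map the virtual host names to the load-balancer front-end IPs from this article's example; adjust the addresses to your environment:

```shell
# Append the PAS and AAS virtual host names to /etc/hosts on all cluster nodes.
sudo tee -a /etc/hosts <<'EOF'
10.90.90.30  sappas
10.90.90.31  sapaas
EOF
```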
2. [1] Create the SAP directories on the NFS share. Mount the NFS share sapnw1
temporarily on one of the VMs, and create the SAP directories to be used as
nested mount points.
Bash
# If using NFSv3
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp
10.90.91.5:/sapnw1 /saptmp
# If using NFSv4.1
sudo mount -t nfs -o
rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp
10.90.91.5:/sapnw1 /saptmp
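A minimal sketch of the directory creation follows; the directory names usrsapNW1pas and usrsapNW1aas are assumptions based on the NW1 example system:

```shell
# Create the nested mount points for the PAS and AAS instance directories,
# then unmount the temporarily mounted share.
sudo mkdir -p /saptmp/usrsapNW1pas /saptmp/usrsapNW1aas
sudo umount /saptmp
```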
4. [A] Configure swap space. When you install a dialog instance with central services,
you must configure more swap space.
Bash
sudo vi /etc/waagent.conf
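The swap configuration in /etc/waagent.conf typically looks like the following sketch; the swap size is an example value, so size it per SAP's swap recommendations:

```shell
# Settings to enable in /etc/waagent.conf:
#   ResourceDisk.Format=y
#   ResourceDisk.EnableSwap=y
#   ResourceDisk.SwapSizeMB=2000

# Restart the Azure VM agent to activate the change.
sudo systemctl restart waagent.service
```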
2. [1] Create file system, virtual IP, and health probe resources for the PAS instance.
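A hedged sketch of the resource creation, using the resource and group names that appear in this article's cluster output; the NFS device path is an assumption based on the share mounted earlier:

```shell
# Virtual IP, Azure load-balancer health probe, and file system for the PAS,
# grouped so they always run together on the same node.
sudo pcs resource create vip_NW1_PAS ocf:heartbeat:IPaddr2 \
  ip=10.90.90.30 --group g-NW1_PAS
sudo pcs resource create nc_NW1_PAS ocf:heartbeat:azure-lb \
  port=62002 --group g-NW1_PAS
sudo pcs resource create fs_NW1_PAS ocf:heartbeat:Filesystem \
  device='10.90.91.5:/sapnw1/usrsapNW1pas' directory='/usr/sap/NW1/D02' \
  fstype='nfs' --group g-NW1_PAS
```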
Make sure that the cluster status is okay and that all resources are started. It isn't
important on which node the resources are running.
3. [1] Change the ownership of the /usr/sap/NW1/D02 folder after the file system is
mounted.
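For example, with the NW1 example SID used throughout this article:

```shell
# The <sid>adm user owns the instance directory.
sudo chown nw1adm:sapsys /usr/sap/NW1/D02
```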
Install the SAP NetWeaver PAS as root on the first node by using a virtual host
name that maps to the IP address of the load balancer front-end configuration for
the PAS. For example, use sappas, 10.90.90.30, and the instance number that you
used for the probe of the load balancer, for example, 02.
# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
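When you start SWPM, pass the virtual host name so that the instance is installed against sappas rather than the physical host name. The sapinst path is an assumption; run it from your extracted SWPM media:

```shell
# Install the PAS against the virtual host name.
sudo ./sapinst SAPINST_USE_HOSTNAME=sappas
```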
To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash
sudo vi /usr/sap/sapservices
# On the node where PAS is installed, comment out the following lines.
# LD_LIBRARY_PATH=/usr/sap/NW1/D02/exe:$LD_LIBRARY_PATH;export
LD_LIBRARY_PATH;/usr/sap/NW1/D02/exe/sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_D02_sappas -D -u nw1adm
Bash
# Node List:
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-
cl1
7. Configure a constraint to start the PAS resource group only after the ASCS instance
is started.
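A sketch of the constraint, using the group names from this article's cluster output:

```shell
# Start the PAS resource group only after the ASCS group is up.
sudo pcs constraint order start g-NW1_ASCS then start g-NW1_PAS
```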
Bash
sudo pcs status
# Node List:
# Node sap-cl2: standby
# Online: [ sap-cl1 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl1
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-
cl1
2. [2] Create file system, virtual IP, and health probe resources for the AAS instance.
Make sure that the cluster status is okay and that all resources are started. It isn't
important on which node the resources are running. Because the g-NW1_PAS
resource group is stopped, all the PAS resources are stopped in the (disabled)
state.
Bash
# Node List:
# Node sap-cl1: standby
# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Stopped
(disabled)
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Stopped
(disabled)
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Stopped
(disabled)
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Stopped
(disabled)
# Resource Group: g-NW1_AAS:
# vip_NW1_AAS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# nc_NW1_AAS (ocf::heartbeat:azure-lb): Started sap-
cl2
# fs_NW1_AAS (ocf::heartbeat:Filesystem): Started sap-
cl2
3. [2] Change the ownership of the /usr/sap/NW1/D03 folder after the file system is
mounted.
Install the SAP NetWeaver AAS as root on the second node by using a virtual
host name that maps to the IP address of the load balancer front-end
configuration for the AAS. For example, use sapaas, 10.90.90.31, and the instance
number that you used for the probe of the load balancer, for example, 03.
Bash
# Allow access to SWPM. This rule is not permanent. If you reboot the
machine, you have to run the command again.
sudo firewall-cmd --zone=public --add-port=4237/tcp
To prevent the start of the instances by the sapinit startup script, all instances
managed by Pacemaker must be commented out from the /usr/sap/sapservices
file.
Bash
sudo vi /usr/sap/sapservices
# On the node where AAS is installed, comment out the following lines.
#LD_LIBRARY_PATH=/usr/sap/NW1/D03/exe:$LD_LIBRARY_PATH;export
LD_LIBRARY_PATH;/usr/sap/NW1/D03/exe/sapstartsrv
pf=/usr/sap/NW1/SYS/profile/NW1_D03_sapaas -D -u nw1adm
Bash
# Node List:
# Node sap-cl1: standby
# Online: [ sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Stopped
(disabled)
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Stopped
(disabled)
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Stopped
(disabled)
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Stopped
(disabled)
# Resource Group: g-NW1_AAS:
# vip_NW1_AAS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# nc_NW1_AAS (ocf::heartbeat:azure-lb): Started sap-
cl2
# fs_NW1_AAS (ocf::heartbeat:Filesystem): Started sap-
cl2
# rsc_sap_NW1_AAS03 (ocf::heartbeat:SAPInstance): Started sap-
cl2
7. Configure a constraint to start the AAS resource group only after the ASCS
instance is started.
Bash
# Because PAS and AAS are installed by using virtual host names, you need to
# copy the virtual host name directories in /home/nw1adm/.hdb
# Copy sappas directory from sap-cl1 to sap-cl2
sap-cl1:nw1adm > scp -r sappas nw1adm@sap-cl2:/home/nw1adm/.hdb
# Copy sapaas directory from sap-cl2 to sap-cl1. Execute the command
from the same sap-cl1 host.
sap-cl1:nw1adm > scp -r nw1adm@sap-cl2:/home/nw1adm/.hdb/sapaas .
2. [1] To ensure the PAS and AAS instances don't run on the same nodes whenever
both nodes are running, add a negative colocation constraint with the following
command:
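A sketch of the negative colocation constraint, using the group names from this article's cluster output:

```shell
# Prefer to keep the AAS group off the node where the PAS group runs.
sudo pcs constraint colocation add g-NW1_AAS with g-NW1_PAS -1000
```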
The score of -1000 ensures that if only one node is available, both instances
continue to run on that node. If you want to keep the AAS instance down in such a
situation, use score=-INFINITY to enforce that condition instead.
Bash
# Node List:
# Online: [ sap-cl1 sap-cl2 ]
#
# Full list of resources:
#
# rsc_st_azure (stonith:fence_azure_arm): Started sap-cl2
# Resource Group: g-NW1_ASCS
# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-
cl2
# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-
cl2
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-
cl2
# Resource Group: g-NW1_AERS
# fs_NW1_AERS (ocf::heartbeat:Filesystem): Started sap-
cl1
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-
cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_PAS:
# vip_NW1_PAS (ocf::heartbeat:IPaddr2): Started sap-
cl1
# nc_NW1_PAS (ocf::heartbeat:azure-lb): Started sap-
cl1
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-
cl1
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-
cl1
# Resource Group: g-NW1_AAS:
# vip_NW1_AAS (ocf::heartbeat:IPaddr2): Started sap-
cl2
# nc_NW1_AAS (ocf::heartbeat:azure-lb): Started sap-
cl2
# fs_NW1_AAS (ocf::heartbeat:Filesystem): Started sap-
cl2
# rsc_sap_NW1_AAS03 (ocf::heartbeat:SAPInstance): Started sap-
cl2
This article describes how to install and configure SAP HANA along with ABAP SAP
Central Services (ASCS)/SAP Central Services (SCS) and Enqueue Replication Server (ERS)
instances on the same high-availability cluster running on Red Hat Enterprise Linux
(RHEL).
References
Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in
Pacemaker
Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL
7.5+ and RHEL 8
SAP Note 1928533 , which has:
A list of Azure virtual machine (VM) sizes that are supported for the deployment
of SAP software.
Important capacity information for Azure VM sizes.
Supported SAP software and operating system (OS) and database combinations.
Required SAP kernel version for Windows and Linux on Azure.
SAP Note 2015553 lists prerequisites for SAP-supported SAP software
deployments in Azure.
SAP Note 2002167 lists the recommended OS settings for Red Hat Enterprise
Linux 7.x.
SAP Note 2772999 lists the recommended OS settings for Red Hat Enterprise
Linux 8.x.
SAP Note 2009879 has SAP HANA guidelines for Red Hat Enterprise Linux.
SAP Note 2178632 has detailed information about all monitoring metrics
reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1999351 has more troubleshooting information for the Azure
Enhanced Monitoring Extension for SAP.
SAP Community Wiki has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP NetWeaver in Pacemaker cluster
General RHEL documentation:
High-Availability Add-On Overview
High-Availability Add-On Administration
High-Availability Add-On Reference
Azure-specific RHEL documentation:
Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual
Machines as Cluster Members
Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-
Availability Cluster on Microsoft Azure
Overview
This article describes the cost-optimization scenario where you deploy SAP HANA, SAP
ASCS/SCS, and SAP ERS instances in the same high-availability setup. To minimize the
number of VMs for a single SAP system, you install SAP ASCS/SCS and SAP ERS on the
same hosts where SAP HANA is running. With SAP HANA configured in a high-availability
cluster setup, you want SAP ASCS/SCS and SAP ERS also to be managed by the cluster.
The configuration is an addition to an already configured SAP HANA cluster setup. In
this setup, SAP ASCS/SCS and SAP ERS are installed on a virtual host name, and the
instance directory is managed by the cluster.
The presented architecture showcases NFS on Azure Files or Azure NetApp Files for a
highly available instance directory for the setup.
The example shown in this article to describe deployment uses the following system
information:
Note
To install more application servers on separate VMs, you can use either NFS shares
or a local managed disk for an instance directory file system. If you're installing
more application servers for an SAP J2EE system, /usr/sap/<SID>/J<nr> on NFS on
Azure Files isn't supported.
See NFS on Azure Files considerations and Azure NetApp Files considerations
because the same considerations apply to this setup.
Prerequisites
The configuration described in this article is an addition to your already-configured SAP
HANA cluster setup. In this configuration, an SAP ASCS/SCS and ERS instance are
installed on a virtual host name. The instance directory is managed by the cluster.
Install a HANA database and set up a HANA system replication (HSR) and Pacemaker
cluster by following the steps in High availability of SAP HANA on Azure VMs on Red
Hat Enterprise Linux or High availability of SAP HANA Scale-up with Azure NetApp Files
on Red Hat Enterprise Linux depending on what storage option you're using.
After you install, configure, and set up the HANA cluster, follow the next steps to
install the ASCS and ERS instances.
1. Open the internal load balancer that was created for the SAP HANA cluster setup.
2. Frontend IP Configuration: Create two front-end IPs, one for ASCS and another for
ERS (for example, 10.66.0.20 and 10.66.0.30).
3. Backend Pool: This pool remains the same because we're deploying ASCS and ERS
on the same back-end pool.
4. Inbound rules: Create two load-balancing rules, one for ASCS and another for ERS.
Follow the same steps for both load-balancing rules.
5. Frontend IP address: Select the front-end IP.
a. Backend pool: Select the back-end pool.
b. High availability ports: Select this option.
c. Protocol: Select TCP.
d. Health Probe: Create a health probe with the following details (applies for both
ASCS and ERS):
i. Protocol: Select TCP.
ii. Port: For example, 620<Instance-no.> for ASCS and 621<Instance-no.> for
ERS.
iii. Interval: Enter 5.
iv. Probe Threshold: Enter 2.
e. Idle timeout (minutes): Enter 30.
f. Enable Floating IP: Select this option.
Important
When VMs without public IP addresses are placed in the back-end pool of an internal
(no public IP address) Standard Azure Load Balancer instance, there's no outbound
internet connectivity unless more configuration is performed to allow routing to public
endpoints. For steps on how to achieve outbound connectivity, see Public endpoint
connectivity for virtual machines using Azure Standard Load Balancer in SAP high-
availability scenarios.
Important
Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer.
Enabling TCP timestamps causes the health probes to fail. Set the parameter
net.ipv4.tcp_timestamps to 0 . For more information, see Load Balancer health
probes.
NFS on Azure Files: Azure VMs high availability for SAP NW on RHEL with NFS on
Azure Files
Azure NetApp Files: Azure VMs high availability for SAP NW on RHEL with Azure
NetApp Files
This article describes the requirements and setup of a third HANA replication site to
complement an existing Pacemaker cluster. Both SUSE Linux Enterprise Server (SLES) and
Red Hat Enterprise Linux (RHEL) specifics are covered.
Overview
SAP HANA supports system replication (HSR) with more than two sites connected. You
can add a third site to an existing HSR pair, managed by Pacemaker in a highly available
setup. You can deploy the third site in a second Azure region for disaster recovery (DR)
purposes.
Pacemaker and the HANA cluster resource agent manage the first two sites. The
Pacemaker cluster doesn't control the third site.
Multitarget replication copies data changes from the primary to more than one
target system. The third site is connected to the primary in a star topology.
Multitier replication is a cascading, or chained, setup of three different
HANA tiers. The third site connects to the secondary.
For more conceptual details about HANA HSR within one region and across different
regions, see SAP HANA availability across Azure regions.
Note
HANA scale-up only: See the Red Hat support policies for RHEL HA clusters for
details on the minimum OS, SAP HANA, and cluster resource agent versions.
HANA scale-out only: HANA multitarget replication isn't supported on Azure with
a Pacemaker cluster.
Failure of the third node doesn't trigger any cluster action. The cluster detects the
replication status of connected sites, and the monitored attribute for the third site can
change between the SOK and SFAIL states. Before you run any takeover test to the
third/DR site or execute your DR exercise process, place the cluster resources into
maintenance mode to prevent any undesired cluster action.
The following example shows a multitarget system replication system. For more
information, see SAP documentation .
1. Deploy Azure resources for the third node. Depending on your requirements, you
can use a different Azure region for DR purposes.
The steps required for the third site are similar to those for the virtual machines
(VMs) of the HANA scale-up cluster. The third site uses Azure infrastructure, and the
OS and HANA versions match the existing Pacemaker cluster, with the following exceptions:
No load balancer is deployed for the third site. There's no integration with
the existing cluster load balancer for the VM of the third site.
Don't install OS packages SAPHanaSR, SAPHanaSR-doc, and the OS package
pattern ha_sles on the third site VM.
No integration into the cluster for VM or HANA resources of the third site.
No HANA HA hook setup for the third site in global.ini.
The same HANA SID and HANA installation number must be used for the third site.
3. With SAP HANA on the third site installed and running, register the third site with
the primary site.
The following example uses SITE-DR as the name for the third site.
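A sketch of the registration, run as the <sid>adm user on the third site; the remote host name and instance number are example values for illustration:

```shell
# Register SITE-DR against the current primary (multitarget, star topology).
hdbnsutil -sr_register --remoteHost=hana-primary --remoteInstance=03 \
  --replicationMode=async --operationMode=logreplay --name=SITE-DR
```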
4. Verify that the HANA system replication shows the secondary site and the third
site.
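One way to check is the replication status script shipped with HANA, run as the <sid>adm user on the primary site:

```shell
# Lists all connected sites and their replication status.
HDBSettings.sh systemReplicationStatus.py
```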
5. Check the SAPHanaSR attribute for the third site. SITE-DR should show up with the
status SOK in the Sites section.
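On SLES, the SAPHanaSR package provides a helper to display the cluster-monitored attributes, including the Sites section:

```shell
# Show cluster-monitored HANA attributes; SITE-DR should report SOK.
sudo SAPHanaSR-showAttr
```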
The cluster detects the replication status of connected sites. The monitored
attributes can change between SOK and SFAIL . There's no cluster action if the
replication to the DR site fails.
The following example shows a multitarget system replication system. For more
information, see SAP documentation .
1. Deploy Azure resources for the third site. Depending on your requirements, you
can use a different Azure region for DR purposes.
Steps required for the HANA scale-out on the third site mirror the steps to deploy
the HANA scale-out cluster. The third site uses Azure infrastructure, OS, and HANA
installation steps for SITE1 of the scale-out cluster, with the following exceptions:
No load balancer is deployed for the third site. There's no integration with
the existing cluster load balancer for the VMs of the third site.
Don't install the OS packages SAPHanaSR-ScaleOut, SAPHanaSR-ScaleOut-
doc, and the OS package pattern ha_sles on the third site VMs.
No majority maker VM for the third site because there's no cluster
integration.
Create the NFS volume /hana/shared for the third site's exclusive use.
No integration into the cluster for the VMs or HANA resources of the third
site.
No HANA HA hook setup for the third site in global.ini.
You must use the same HANA SID and HANA installation number for the third site.
2. With SAP HANA scale-out on the third site installed and running, register the third
site with the primary site.
The following example uses SITE-DR as the name for the third site.
3. Verify that the HANA system replication shows the secondary site and the third
site.
4. Check the SAPHanaSR attribute for the third site. SITE-DR should show up with the
status SOK in the Sites section.
The cluster detects the replication status of connected sites. The monitored
attribute can change between SOK and SFAIL . There's no cluster action if the
replication to the DR site fails.
Autoregister the third site
During a planned or unplanned takeover event between the two Pacemaker cluster sites,
HSR to the third site is also interrupted. Pacemaker doesn't modify HANA replication to
the third site.
SAP HANA supports the optional global.ini parameter register_secondaries_on_takeover =
true to automatically re-register the third site with the new primary after a takeover
between the two HANA sites in the Linux cluster. Both SITE1 and SITE2 need the parameter
in the respective HANA global.ini configuration file. The parameter can also be used
outside a Pacemaker cluster.
For HSR multitier, there's no automatic SAP HANA registration of the third site. You
need to manually register the third site with the current secondary to keep the
multitier HSR replication chain.
Next steps
Disaster recovery overview and infrastructure
Disaster recovery for SAP workloads
High-availability architecture and scenarios for SAP NetWeaver
Exchange Online Integration for Email-Outbound from SAP NetWeaver
Article • 02/10/2023
Sending emails from your SAP back end is a standard feature, widely used for scenarios
such as alerting for batch jobs, SAP workflow state changes, or invoice distribution.
Many customers established this setup by using Exchange Server on-premises. With the
shift to Microsoft 365 and Exchange Online comes a set of cloud-native approaches
that affect that setup.
This article describes the setup for outbound email-communication from NetWeaver-
based SAP systems to Exchange Online. That applies to SAP ECC, S/4HANA, SAP RISE
managed, and any other NetWeaver based system.
Overview
Existing implementations relied on SMTP Auth and an elevated trust relationship, because
the legacy on-premises Exchange Server could live close to the SAP system itself and
was governed by the customers themselves. With Exchange Online, there's a shift in
responsibilities and in the connectivity paradigm: Microsoft supplies Exchange Online as
a software-as-a-service offering built to be consumed securely and as effectively as
possible from anywhere in the world over the public internet.
Follow our standard guide to understand the general configuration of a "device" that
wants to send email via Microsoft 365.
Important
Microsoft disabled Basic Authentication for Exchange Online as of 2020 for newly
created Microsoft 365 tenants. In addition, the feature is disabled for existing
tenants with no prior usage of Basic Authentication starting October 2020. See our
developer blog for reference.
Important
SMTP Auth was exempted from the Basic Auth sunset process. However, it remains a
security risk for your estate, so we advise against using it. See the latest post by
our Exchange team on the matter.
Setup considerations
Given the sunset exception for SMTP Auth, there are four different options supported by
SAP NetWeaver that we describe here. The first three correlate with the scenarios
described in the Exchange Online documentation.
For brevity, we refer to the SAP Connect administration tool used for the mail server
setup only by its transaction code, SCOT.
Connect SAP applications directly to Microsoft 365 using SMTP Auth endpoint
smtp.office365.com in SCOT.
A valid email address is required to authenticate with Microsoft 365. The email
address of the account that's used to authenticate with Microsoft 365 appears as the
sender of messages from the SAP application.
You can enable SMTP AUTH either for a single account (per mailbox), which overrides
the tenant-wide setting, or at the organization level.
Note
If your authentication policy disables basic authentication for SMTP, clients can't
use the SMTP AUTH protocol even if you enable the settings outlined in this article.
The per-mailbox setting to enable SMTP AUTH is available in the Microsoft 365 Admin
Center or via Exchange Online PowerShell.
1. Open the Microsoft 365 admin center and go to Users -> Active users.
This enables SMTP AUTH in Exchange Online for the individual user that you require
for SCOT.
2. Make sure the SAP Internet Communication Manager (ICM) parameter is set in your
instance profile. For example:
Parameter: icm/server-port-1
Value: PROT=SMTP,PORT=25000,TIMEOUT=180,TLS=1
3. Restart the ICM service from the SMICM transaction and make sure the SMTP service
is active.
4. Activate the SAPConnect service in the SICF transaction.
5. Go to SCOT and double-click the SMTP node to proceed with the configuration:
Add the mail host smtp.office365.com with port 587. Check the Exchange Online docs
for reference.
Click the Settings button (next to the Security field) to add the TLS settings and
basic authentication details as mentioned in point 2, if required. Make sure your
ICM parameter is set accordingly.
Make sure to use a valid Microsoft 365 email ID and password. It also needs to be
the same user that you enabled for SMTP Auth at the beginning. This email ID shows
up as the sender.
Back on the previous screen, click the Set button and check Internet under
Supported Address Types. Using the wildcard * option allows sending emails to all
domains without restriction.
Next step: set the default domain in SCOT.
6. Schedule a job to send email to the submission queue. From SCOT, select Send
Job:
SMTP relay lets Microsoft 365 relay emails on your behalf by using a connector that's
configured with your public IP address or a TLS certificate. Compared to the other
options, the connector setup increases complexity.
Transport Layer Security (TLS): The SAP application must be able to use TLS version
1.2 or above.
Port: Port 25 is required and must be unblocked on your network. Some network
firewalls or ISPs block ports, especially port 25, due to the risk of misuse for
spamming.
MX record: Your Mail Exchanger (MX) endpoint, for example,
yourdomain.mail.protection.outlook.com. Find more information in the next section.
Relay access: A public IP address or TLS certificate is required to authenticate
against the relay connector. To avoid configuring direct access, it's recommended
to use Source Network Address Translation (SNAT) as described in Use Source
Network Address Translation (SNAT) for outbound connections.
Note
You can find the above information in the Azure portal on the Virtual Machine
overview page of the SAP application server.
3. Go to Settings -> Domains, select your domain (for example, contoso.com), and
find the Mail Exchanger (MX) record.
The Mail Exchanger (MX) record has a Points to address or value entry that looks
similar to yourdomain.mail.protection.outlook.com .
4. Make a note of the Points to address or value data for the Mail Exchanger (MX)
record, which we refer to as your MX endpoint.
5. In Microsoft 365, select Admin and then Exchange to go to the new Exchange
Admin Center.
Choose By verifying that the IP address of the sending server matches one of these IP
addresses that belong exclusively to your organization, and add the IP address
from step 1 of the Step-by-step configuration instructions for SMTP relay in
Microsoft 365 section.
Review and select Create connector.
11. Now that you're done configuring your Microsoft 365 settings, go to your
domain registrar's website to update your DNS records. Edit your Sender Policy
Framework (SPF) record and include the IP address that you noted in step 1. The
finished string should look similar to v=spf1 ip4:10.5.3.2
include:spf.protection.outlook.com ~all , where 10.5.3.2 is your public IP
address. Skipping this step can cause emails to be flagged as spam and end up in
the recipient's Junk Email folder.
Port: 25
4. Click "Settings" next to the Security field and make sure TLS is enabled if possible.
Also make sure no prior logon data regarding SMTP AUTH is present. Otherwise
delete existing records with the corresponding button underneath.
5. Test the configuration using a test email from your SAP application with
transaction SBWP and check the status in SOST transaction.
The advantage of this solution is that it can be deployed in the hub of a hub-spoke
virtual network within your Azure environment or within a DMZ to protect your SAP
application hosts from direct access. It also allows for centralized outbound routing to
immediately offload all mail traffic to a central relay when sending from multiple
application servers.
The configuration steps are the same as for the Microsoft 365 SMTP Relay Connector
(Option 3), the only difference being that the SCOT configuration should reference
the mail host that performs the relay rather than Microsoft 365 directly. Depending
on the mail system used for the relay, it is also configured to connect directly to
Microsoft 365 using one of the supported methods and a valid user with password.
We recommend sending a test mail from the relay directly to ensure it can
communicate successfully with Microsoft 365 before completing the SAP SCOT
configuration and testing as normal.
The example architecture shown illustrates multiple SAP application servers with a single
mail relay host in the hub. Depending on the volume of mail to be sent, we
recommend following a detailed sizing guide from the mail vendor used as the
relay. This may require multiple mail relay hosts operating behind an Azure Load
Balancer.
Next Steps
Understand mass-mailing with Azure Twilio - SendGrid
Understand Exchange Online Service limitations (e.g., attachment size, message limits,
throttling etc.)
Scenario - Using Microsoft Entra ID to
secure access to SAP platforms and
applications
Article • 10/23/2023
This document provides advice on the technical design and configuration of SAP
platforms and applications when using Microsoft Entra ID as the primary user
authentication service for SAP Cloud Identity Services . SAP Cloud Identity Services
includes Identity Authentication, Identity Provisioning, Identity Directory, and
Authorization Management. Learn more about the initial setup for authentication in the
Microsoft Entra single sign-on (SSO) integration with SAP Cloud Identity Services
tutorial. For more information on provisioning and other scenarios, see plan deploying
Microsoft Entra for user provisioning with SAP source and target applications and
manage access to your SAP applications.
Abbreviation Description
BTP SAP Business Technology Platform is an innovation platform optimized for SAP
applications in the cloud. Most of the SAP technologies discussed here are part of
BTP. The products formerly known as SAP Cloud Platform are part of SAP BTP.
IAS SAP Cloud Identity Services - Identity Authentication, a component of SAP Cloud
Identity Services, is a cloud service for authentication, single sign-on and user
management in SAP cloud and on-premises applications. IAS helps users
authenticate to their own SAP BTP service instances, acting as a proxy that integrates
with Microsoft Entra single sign-on.
IPS SAP Cloud Identity Services - Identity Provisioning, a component of SAP Cloud
Identity Services, is a cloud service that helps you provision identities and their
authorization to SAP cloud and on-premises applications.
XSUAA Extended Services for Cloud Foundry User Account and Authentication. Cloud
Foundry , a platform as a service (PaaS) that can be deployed on different
infrastructures, is the environment on which SAP built SAP Business Technology
Platform. XSUAA is a multitenant OAuth authorization server that is the central
infrastructure component of the Cloud Foundry environment. XSUAA provides for
business user authentication and authorization within the SAP BTP.
Fiori The web-based user experience of SAP (as opposed to the desktop-based
experience).
Overview
There are many services and components in the SAP and Microsoft technology stack
that play a role in user authentication and authorization scenarios. The main services are
listed in the diagram below.
You want to govern all your identities centrally and only from Microsoft Entra ID.
You want to reduce maintenance efforts as much as possible and automate
authentication and app access across Microsoft and SAP.
The general guidance for Microsoft Entra ID with IAS applies for apps deployed on
BTP and SAP SaaS apps configured in IAS. Specific recommendations will also be
provided where applicable to BTP (for example, using role mappings with
Microsoft Entra groups) and SAP SaaS apps (for example, using identity
provisioning service for role-based authorization).
We also assume that users are already provisioned in Microsoft Entra ID and
towards any SAP systems that require users to be provisioned to function.
Regardless of how that was achieved: provisioning could have been done manually,
from on-premises Active Directory through Microsoft Entra Connect, or
through HR systems like SAP SuccessFactors. In this document therefore,
SuccessFactors is considered to be an application like any other that (existing)
users will sign on to. We don't cover actual provisioning of users from
SuccessFactors into Microsoft Entra ID.
Based on these assumptions, we focus mostly on the products and services presented in
the diagram below. These are the various components that are most relevant to
authentication and authorization in a cloud-based environment.
7 Note
Most of the guidance here applies to Azure Active Directory B2C as well, but there
are some important differences. For more information, see Using Azure AD B2C as
the Identity Provider.
2 Warning
Be aware of the SAP SAML assertion limits and the impact of the length of SAP Cloud
Foundry role collection names and the number of collections proxied by groups in SAP
Cloud Identity Services. For more information, see SAP note 2732890 in SAP for
Me. Exceeding these limits results in authorization issues.
Recommendations
Summary
1 - Use Federated Authentication in SAP Business Technology Platform and SAP
SaaS applications through SAP Identity Authentication Service
2 - Use Microsoft Entra ID for Authentication and IAS/BTP for Authorization
3 - Use Microsoft Entra groups for Authorization through Role Collections in
IAS/BTP
4 - Use a single BTP Subaccount only for applications that have similar Identity
requirements
5 - Use the Production IAS tenant for all end user Authentication and Authorization
6 - Define a Process for Rollover of SAML Signing Certificates
Context
Your applications in BTP can use identity providers through Trust Configurations to
authenticate users by using the SAML 2.0 protocol between BTP/XSUAA and the identity
provider. Note that only SAML 2.0 is supported, even though the OpenID Connect
protocol is used between the application itself and BTP/XSUAA (not relevant in this
context).
In BTP, you can choose to set up a trust configuration towards SAP Cloud Identity
Services - Identity Authentication (which is the default) but when your authoritative user
directory is Microsoft Entra ID, you can set up federation so that users can sign in with
their existing Microsoft Entra accounts.
On top of federation, you can optionally also set up user provisioning so that Microsoft
Entra users are provisioned upfront in BTP. However, there's no native support for this
(only for Microsoft Entra ID -> SAP Identity Authentication Service); an integrated
solution with native support would be the BTP Identity Provisioning Service. Provisioning
user accounts upfront could be useful for authorization purposes (for example, to add
users to roles). Depending on requirements however, you can also achieve this with
Microsoft Entra groups (see below) which could mean you don't need user provisioning
at all.
You can choose to federate towards Microsoft Entra ID directly from BTP/XSUAA.
You can choose to federate with IAS that in turn is set up to federate with
Microsoft Entra ID as a Corporate Identity Provider (also known as "SAML
Proxying").
For SAP SaaS applications IAS is provisioned and pre-configured for easy onboarding of
end users. (Examples of this include SuccessFactors, Marketing Cloud, Cloud for
Customer, Sales Cloud, and others.) This scenario is less complex, because IAS is directly
connected with the target app and not proxied to XSUAA. In any case, the same rules
apply for this setup as for Microsoft Entra ID with IAS in general.
On the trust configuration in BTP, we recommend that "Create Shadow Users During
Logon" is enabled. This way, users who haven't yet been created in BTP automatically
get an account when they sign in through IAS / Microsoft Entra ID for the first time. If
this setting were disabled, only pre-provisioned users would be allowed to sign in.
Summary of implementation
In BTP:
Set up a trust configuration towards IAS (SAP doc ) and ensure that "Available for
User Logon " and "Create Shadow Users During Logon" are both enabled.
Optionally, disable "Available for User Logon" on the default "SAP ID Service" trust
configuration so that users always authenticate via Microsoft Entra ID and aren't
presented with a screen to choose their identity provider.
Context
When BTP and IAS have been configured for user authentication via federation towards
Microsoft Entra ID, there are multiple options for configuring authorization:
In Microsoft Entra ID, you can assign Microsoft Entra users and groups to the
Enterprise Application representing your SAP IAS instance in Microsoft Entra ID.
In IAS, you can use Risk-based Authentication to allow or block sign-ins, thereby
preventing access to the application in BTP.
In BTP, you can use Role Collections to define which users and groups can access
the application and get certain roles.
When the application is federated through IAS, from the point of view of Microsoft Entra
ID the user is essentially "authenticating to IAS" during the sign-in flow. This means that
Microsoft Entra ID has no information about which final BTP application the user is
trying to sign in to. That also implies that authorization in Microsoft Entra ID can only be
used to do very coarse-grained authorization, for example allowing the user to sign in to
any application in BTP, or to none. This also emphasizes SAP's strategy to isolate apps
and authentication mechanisms on the BTP Subaccount level.
While that could be a valid reason for using "User assignment required", it does mean
there are now potentially two different places where authorization information needs to
be maintained: both in Microsoft Entra ID on the Enterprise Application (where it applies
to all BTP applications), as well as in each BTP Subaccount. This could lead to confusion
and misconfigurations where authorization settings are updated in one place but not
the other. For example: a user was allowed in BTP but not assigned to the application in
Microsoft Entra ID resulting in a failed authentication.
Summary of implementation
On the Microsoft Entra Enterprise Application representing the federation relation with
IAS, disable "User assignment required". This also means you can safely skip assignment
of users.
You can configure fine-grained access control inside the application itself, based
on the signed-in user.
You can specify access through Roles and Role Collections in BTP, based on user
assignments or group assignments.
The final implementation can use a combination of both strategies. Assignment
through Role Collections can be done on a user-by-user basis, or by using groups
from the configured identity provider.
With this configuration, we recommend using the Microsoft Entra group's Group ID
(Object ID) as the unique identifier of the group, not the display name
("sAMAccountName"). This means you must use the Group ID as the "Groups" assertion
in the SAML token issued by Microsoft Entra ID. In addition the Group ID is used for the
assignment to the Role Collection in BTP.
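Conceptually, the mapping behaves like the sketch below. The Group IDs and Role Collection names are made-up examples; BTP performs this resolution internally when it evaluates the Groups assertion against the mappings configured on each Role Collection:

```python
# Sketch: how BTP's group-to-Role-Collection mapping behaves conceptually.
# The GUIDs are fabricated. In BTP, the mapping is configured on the Role
# Collection, keyed by the Microsoft Entra group's Object ID as sent in
# the "Groups" SAML assertion.

ROLE_COLLECTION_MAPPINGS = {
    # Microsoft Entra Group ID (Object ID)  -> BTP Role Collection
    "9f3c1e2a-0000-4000-8000-000000000001": "Viewer",
    "9f3c1e2a-0000-4000-8000-000000000002": "Administrator",
}

def resolve_role_collections(groups_claim: list[str]) -> set[str]:
    """Return the Role Collections granted by the Groups assertion values."""
    return {ROLE_COLLECTION_MAPPINGS[g] for g in groups_claim
            if g in ROLE_COLLECTION_MAPPINGS}

granted = resolve_role_collections([
    "9f3c1e2a-0000-4000-8000-000000000001",
    "deadbeef-0000-4000-8000-00000000ffff",  # unmapped group: ignored
])
print(granted)  # {'Viewer'}
```

Because the key is the immutable Object ID rather than the display name, renaming or recreating a group in Microsoft Entra ID cannot silently grant access.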
Why this recommendation?
If you would assign users directly to Role Collections in BTP, you aren't centralizing
authorization decisions in Microsoft Entra ID. It also means the user must already exist in
IAS before they can be assigned to a Role Collection in BTP - and given that we
recommend federation instead of user provisioning this means the user's shadow
account may not exist yet in IAS at the time you want to do the user assignment. Using
Microsoft Entra groups and assigning them to Role Collections eliminates these issues.
Assigning groups to Role Collections may seem to contradict the prior recommendation
to not use Microsoft Entra ID for authorization. Even in this case however, the
authorization decision is still being taken in BTP, it's just that the decision is now based
on group membership maintained in Microsoft Entra ID.
We recommend using the Microsoft Entra group's Group ID rather than its name
because the Group ID is globally unique, immutable and can never be reused for
another group later on; whereas using the group name could lead to issues when the
name is changed, and there's a security risk in having a group being deleted and
another one getting created with the same name but with users in it that should have
no access to the application.
Summary of implementation
In Microsoft Entra ID:
Create groups to which users can be added that need access to applications in BTP
(for example, create a Microsoft Entra group for each Role Collection in BTP).
On the Microsoft Entra Enterprise Application representing the federation relation
with IAS, configure the SAML User Attributes & Claims to add a group claim for
security groups:
Set the Source attribute to "Group ID" and the Name to Groups (spelled exactly
like this, with upper case 'G').
Further, to keep claims payloads small and to avoid the Microsoft Entra ID limit
of 150 group claims in SAML assertions, we highly recommend limiting the groups
returned in the claims to only those that were explicitly assigned:
Under "Which groups associated with the user should be returned in the
claim?", answer "Groups assigned to the application". Then, for the groups
you want to include as claims, assign them to the Enterprise Application
using the "Users and Groups" section and selecting "Add user/group".
In IAS:
7 Note
If you need to use the Identity Authentication user store (for example, to include
claims which cannot be sourced from Microsoft Entra ID but that are available in
the IAS user store), you can keep this setting enabled. In that case however, you will
need to configure the Default Attributes sent to the application to include the
relevant claims coming from Microsoft Entra ID (for example with the
${corporateIdP.Groups} format).
In BTP:
On the Role Collections that are used by the applications in that Subaccount, map
the Role Collections to User Groups by adding a configuration for the IAS
Identity Provider and setting the Name to the Group ID (Object ID) of the
Microsoft Entra group.
7 Note
In case you would have another claim in Microsoft Entra ID to contain the
authorization information to be used in BTP, you don't have to use the Groups
claim name. This is what BTP uses when you map the Role Collections to user
groups as above, but you can also map the Role Collections to User Attributes
which gives you a bit more flexibility.
Context
Within BTP, each Subaccount can contain multiple applications. However, from the IAS
point of view a "Bundled Application" is a complete BTP Subaccount, not the more
granular applications within it. This means that all Trust settings, Authentication, and
Access configuration as well as Branding and Layout options in IAS applies to all
applications within that Subaccount. Similarly, all Trust Configurations and Role
Collections in BTP also apply to all applications within that Subaccount.
Summary of implementation
Carefully consider how you want to group multiple applications across Subaccounts in
BTP. For more information, see the SAP Account Model documentation .
Context
When working with IAS, you typically have a Production and a Dev/Test tenant. For
different Subaccounts or applications in BTP, you can choose which identity provider
(IAS tenant) to use.
Because IAS is the centralized component which has been set up to federate with
Microsoft Entra ID, there's only a single place where the federation and identity
configuration must be set up and maintained. Duplicating this in other IAS tenants can
lead to misconfigurations or inconsistencies in end user access between environments.
Note: the default validity period of the initial Microsoft Entra certificate used to sign
SAML assertions is 3 years (and note that the certificate is specific to the Enterprise
Application, unlike OpenID Connect and OAuth 2.0 tokens which are signed by a global
certificate in Microsoft Entra ID). You can choose to generate a new certificate with a
different expiration date, or create and import your own certificate.
When certificates expire, they can no longer be used, and new certificates must be
configured. Therefore, a process must be established to keep the certificate
configuration inside the relying party (which needs to validate the signatures) up to date
with the actual certificates being used to sign the SAML tokens.
In some cases, the relying party can do this automatically by providing it with a
metadata endpoint which returns the latest metadata information dynamically - i.e.,
typically a publicly accessible URL from which the relying party can periodically retrieve
the metadata and update its internal configuration store.
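Such a retrieval step reduces, at its core, to parsing the metadata document for the current signing certificate. The sketch below uses a trimmed, hypothetical metadata sample in place of a real endpoint response:

```python
# Sketch: extract signing certificates from SAML federation metadata.
# A dynamic-retrieval process would periodically download the metadata
# from the endpoint; here a trimmed inline sample (with a placeholder
# certificate value) stands in for the real document.
import xml.etree.ElementTree as ET

SAMPLE_METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#" entityID="https://sts.windows.net/tenant/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <ds:KeyInfo><ds:X509Data>
        <ds:X509Certificate>MIIC...base64...</ds:X509Certificate>
      </ds:X509Data></ds:KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

DS_NS = "{http://www.w3.org/2000/09/xmldsig#}"

def signing_certificates(metadata_xml: str) -> list[str]:
    """Return the base64 certificate values found in the metadata."""
    root = ET.fromstring(metadata_xml)
    return [el.text.strip() for el in root.iter(DS_NS + "X509Certificate")]

print(signing_certificates(SAMPLE_METADATA))  # ['MIIC...base64...']
```

Because IAS and BTP only accept a one-time metadata upload, this parsing would feed a re-import step rather than a live configuration refresh.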
However, IAS only allows Corporate Identity Providers to be set up through an import of
the metadata XML file, it does not support providing a metadata endpoint for dynamic
retrieval of the Microsoft Entra metadata (for example
https://login.microsoftonline.com/my-azuread-tenant/federationmetadata/2007-
06/federationmetadata.xml?appid=my-app-id ). Similarly, BTP does not allow a new Trust
Configuration to be set up from the IAS metadata endpoint (for example https://my-
ias-tenant.accounts.ondemand.com/saml2/metadata ); it also needs a one-time upload of a
metadata XML file. As a consequence, the following certificates must be tracked:
The Subaccount certificate in BTP: when this changes, the Application's SAML 2.0
Configuration in IAS must be updated.
The tenant certificate in IAS: when this changes, both the Enterprise Application's
SAML 2.0 Configuration in Microsoft Entra ID and the Trust Configuration in BTP
must be updated.
The Enterprise Application certificate in Microsoft Entra ID: when this changes, the
Corporate Identity Provider's SAML 2.0 Configuration in IAS must be updated.
SAP has example implementations for client certificate notifications with SAP Cloud
Integration and near-expiry handling. These could be adapted with Azure Integration
Services or Power Automate, but they would need to be modified to work with
server certificates. Such an approach requires a custom implementation.
Summary of implementation
Add an email notification address for certificate expiration in Microsoft Entra ID and set
it to a group mailbox so that it isn't sent to a single individual (who may even no longer
have an account by the time the certificate is about to expire). By default, only the user
who created the Enterprise Application will receive a notification.
Consider building automation to execute the entire certificate rollover process. For
example, one can periodically check for expiring certificates and replace them while
updating all relying parties with the new metadata.
There are a few important differences, however. Setting up Azure AD B2C as a corporate
identity provider in IAS and configuring federation between both tenants is described in
more detail in this blog post .
Fortunately, Azure AD B2C is highly customizable, so you can configure the SAML
tokens it sends to IAS to include any custom information. For various options on
supporting authorization claims, see the documentation accompanying the Azure AD
B2C App Roles sample , but in summary: through its API Connector extensibility
mechanism you can optionally still use groups, app roles, or even a custom database to
determine what the user is allowed to access.
Regardless of where the authorization information comes from, it can then be emitted
as the Groups attribute inside the SAML token by configuring that attribute name as the
default partner claim type on the claims schema or by overriding the partner claim type
on the output claims. Note however that BTP allows you to map Role Collections to User
Attributes , which means that any attribute name can be used for authorization
decisions, even if you don't use the Groups attribute name.
Next Steps
Learn more about the initial setup in this tutorial
plan deploying Microsoft Entra for user provisioning with SAP source and target
applications and
manage access to your SAP applications
Discover additional SAP integration scenarios with Microsoft Entra ID and beyond
Expose SAP legacy middleware securely
with Azure PaaS
Article • 02/10/2023
Enabling internal systems and external partners to interact with SAP back ends is a
common requirement. Existing SAP landscapes often rely on the legacy middleware SAP
Process Orchestration (PO) or Process Integration (PI) for their integration and
transformation needs. For simplicity, this article uses the term SAP Process Orchestration
to refer to both offerings.
7 Note
Overview
Existing implementations based on SAP middleware have often relied on SAP's
proprietary dispatching technology called SAP Web Dispatcher . This technology
operates on layer 7 of the OSI model . It acts as a reverse proxy and addresses load-
balancing needs for downstream SAP application workloads like SAP Enterprise
Resource Planning (ERP), SAP Gateway, or SAP Process Orchestration.
7 Note
Azure Firewall handles public internet-based and internal private routing for traffic types
on layers 4 to 7 of the OSI model. It offers filtering and threat intelligence that feed
directly from Microsoft Security.
Azure API Management handles public internet-based and internal private routing
specifically for APIs. It offers request throttling, usage quota and limits, governance
features like policies, and API keys to break down services per client.
Azure VPN Gateway and Azure ExpressRoute serve as entry points to on-premises
networks. They're abbreviated in the diagrams as VPN and XR.
Setup considerations
Integration architecture needs differ, depending on the interface that an organization
uses. SAP-proprietary technologies like Intermediate Document (IDoc) framework ,
Business Application Programming Interface (BAPI) , transactional Remote Function
Calls (tRFCs) , or plain RFCs require a specific runtime environment. They operate on
layers 4 to 7 of the OSI model, unlike modern APIs that typically rely on HTTP-based
communication (layer 7 of the OSI model). Because of that, the interfaces can't be
treated the same way.
This article focuses on modern APIs and HTTP, including integration scenarios like
Applicability Statement 2 (AS2) . File Transfer Protocol (FTP) serves as an example to
handle non-HTTP integration needs. For more information about Microsoft load-
balancing solutions, see Load-balancing options.
7 Note
SAP publishes dedicated connectors for its proprietary interfaces. Check SAP's
documentation for Java and .NET , for example. They're supported by
Microsoft gateways too. Be aware that IDocs can also be posted via HTTP .
Security concerns require the use of firewalls for lower-level protocols and WAFs to
address HTTP-based traffic with Transport Layer Security (TLS) . To be effective, TLS
sessions need to be terminated at the WAF level. To support zero-trust approaches, we
recommend that you re-encrypt traffic afterward to provide end-to-end encryption.
Integration protocols such as AS2 can raise alerts by using standard WAF rules. We
recommend using the Application Gateway WAF triage workbook to identify and
better understand why the rule is triggered, so you can remediate effectively and
securely. Open Web Application Security Project (OWASP) provides the standard rules.
For a detailed video session on this topic with emphasis on SAP Fiori exposure, see the
SAP on Azure webcast .
You can further enhance security by using mutual TLS (mTLS), which is also called
mutual authentication. Unlike normal TLS, it verifies the client identity.
7 Note
Virtual machine (VM) pools require a load balancer. For better readability, the
diagrams in this article don't show a load balancer.
7 Note
If you don't need SAP-specific balancing features that SAP Web Dispatcher
provides, you can replace them with Azure Load Balancer. This replacement gives
the benefit of a managed PaaS offering instead of an infrastructure as a service
(IaaS) setup.
You can avoid unintentional access through access control lists on SAP Web
Dispatcher.
One of the scenarios for SAP Process Orchestration communication is inbound flow.
Traffic might originate from on-premises, external apps or users, or an internal system.
The following example focuses on HTTPS.
The following outbound scenario shows two possible methods. One uses HTTPS via
Azure Application Gateway calling a web service (for example, SOAP adapter). The other
uses FTP over SSH (SFTP) via Azure Firewall transferring files to a business partner's SFTP
server.
Scenario: API Management focused
Compared to the scenarios for inbound and outbound connectivity, the introduction of
Azure API Management in internal mode (private IP only and virtual network
integration) adds built-in capabilities like:
Throttling.
API governance.
Additional security options like modern authentication flows.
Azure Active Directory integration.
The opportunity to add SAP APIs to a central API solution across the company.
When you don't need a WAF, you can deploy Azure API Management in external mode
by using a public IP address. That deployment simplifies the setup while keeping the
throttling and API governance capabilities. Basic protection is implemented for all Azure
PaaS offerings.
Scenario: Global reach
Azure Application Gateway is a region-bound service. Compared to the preceding
scenarios, Azure Front Door ensures cross-region global routing, including a web
application firewall. For details about the differences, see this comparison.
The following diagram condenses SAP Web Dispatcher, SAP Process Orchestration, and
the back end into a single image for better readability.
Scenario: File-based
Non-HTTP protocols like FTP can't be addressed with Azure API Management,
Application Gateway, or Azure Front Door as shown in the preceding scenarios. Instead,
the managed Azure Firewall instance or the equivalent network virtual appliance (NVA)
takes over the role of securing inbound requests.
Files need to be stored before SAP can process them. We recommend that you use
SFTP. Azure Blob Storage supports SFTP natively.
Alternative SFTP options are available in Azure Marketplace if necessary.
The following diagram shows a variation of this scenario with integration targets
externally and on-premises. Different types of secure FTP illustrate the communication
path.
For insights into Network File System (NFS) file shares as an alternative to Blob Storage,
see NFS file shares in Azure Files.
The following diagrams show two setups as examples. For more information, see the
SAP RISE reference guide.
) Important
Contact SAP to ensure that communication ports for your scenario are allowed and
opened in NSGs.
HTTP inbound
In the first setup, the customer governs the integration layer, including SAP Process
Orchestration and the complete inbound path. Only the final SAP target runs on the
RISE subscription. Communication to the RISE-hosted workload is configured through
virtual network peering, typically over the hub. A potential integration could be IDocs
posted to the SAP ERP web service /sap/bc/idoc_xml by an external party.
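Such a posting could be sketched as follows. The host, client number, and IDoc payload are placeholders; an external party would add authentication and transport details as agreed with the SAP system owner:

```python
# Sketch: building the request for posting an IDoc as XML over HTTP(S) to
# the SAP ERP web service mentioned above. Host, client, and payload are
# placeholders; in the RISE scenario the request traverses the
# customer-managed inbound path and virtual network peering.
from urllib.parse import urljoin

def idoc_endpoint(sap_host: str, client: str = "100") -> str:
    """Build the /sap/bc/idoc_xml URL, including the SAP client parameter."""
    return urljoin(f"https://{sap_host}/", f"sap/bc/idoc_xml?sap-client={client}")

IDOC_PAYLOAD = """<?xml version="1.0"?>
<ORDERS05><IDOC BEGIN="1"><!-- control and data records --></IDOC></ORDERS05>"""

url = idoc_endpoint("sap-erp.contoso.com")
print(url)  # https://sap-erp.contoso.com/sap/bc/idoc_xml?sap-client=100
# The external party would then POST IDOC_PAYLOAD to this URL with
# Content-Type: application/xml and suitable authentication.
```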
This second example shows a setup where SAP RISE runs the whole integration chain,
except for the API Management layer.
File outbound
In this scenario, the SAP-managed Process Orchestration instance writes files to the
customer-managed file share on Azure or to a workload sitting on-premises. The
customer handles the breakout.
Comparison of gateway setups
7 Note
Depending on the integration protocols you're using, you might need multiple
components. For more information about the benefits of the various combinations of
chaining Azure Application Gateway with Azure Firewall, see Azure Firewall and
Application Gateway for virtual networks.
High availability and disaster recovery for VM-based SAP integration workloads
A managed key store like Azure Key Vault for all involved credentials, certificates,
and keys
For more information, view the Azure Logic Apps connectors for your desired SAP
interfaces.
Next steps
Protect APIs with Application Gateway and API Management
Working with SAP datasets in Microsoft Excel or Power BI is a common requirement for
customers.
This article describes the required configurations and components to enable SAP
dataset consumption via OData with Power Query. The SAP data integration is
considered "live" because it can be refreshed from clients such as Microsoft Excel or
Power BI on demand, unlike data exports such as SAP List Viewer (ALV) CSV exports.
Those exports are static by nature and have no continuous relationship with
the data origin.
The article puts emphasis on end-to-end user mapping between the known Microsoft
Entra identity in Power Query and the SAP backend user. This mechanism is often
referred to as SAP Principal Propagation.
The focus of the described configuration is on the Azure API Management, SAP
Gateway , SAP OAuth 2.0 Server with AS ABAP , and OData sources, but the concepts
used apply to any web-based resource.
) Important
Note: SAP Principal Propagation ensures user mapping to the licensed named SAP
user. For any SAP license-related questions, contact your SAP representative.
The mechanism described in this article uses the standard built-in OData capabilities of
Power Query, with emphasis on SAP landscapes deployed on Azure. Address on-
premises landscapes with the Azure API Management self-hosted gateway.
For more information on which Microsoft products support Power Query in general, see
the Power Query documentation.
Setup considerations
End users have a choice between local desktop or web-based clients (for instance Excel
or Power BI). The client execution environment needs to be considered for the network
path between the client application and the target SAP workload. Network access
solutions such as VPN aren't in scope for apps like Excel for the web.
Azure API Management reflects local and web-based environment needs with different
deployment modes that can be applied to Azure landscapes (internal or external).
Internal refers to instances that are fully restricted to a private virtual network,
whereas external instances are reachable from the public internet. On-premises SAP
landscapes require a hybrid deployment to apply the approach as is, using the Azure
API Management self-hosted gateway.
Power Query requires matching API service URL and Microsoft Entra application ID URL.
Configure a custom domain for Azure API Management to meet the requirement.
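A quick way to sanity-check this requirement is to compare the hostnames of the two URLs. Both values below are placeholders for your own configuration:

```python
# Sketch: verify that the API Management custom domain matches the host of
# the Microsoft Entra application ID URL, as Power Query requires.
from urllib.parse import urlparse

def hosts_match(api_service_url: str, entra_app_id_url: str) -> bool:
    """True if both URLs resolve to the same hostname."""
    return urlparse(api_service_url).hostname == urlparse(entra_app_id_url).hostname

print(hosts_match("https://api.contoso.com/sap/odata",
                  "https://api.contoso.com"))            # True
print(hosts_match("https://contoso.azure-api.net/sap/odata",
                  "https://api.contoso.com"))            # False
```

The second case shows why the default azure-api.net domain fails the check and a custom domain is needed.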
SAP Gateway needs to be configured to expose the desired target OData services.
Discover and activate available services via SAP transaction code /IWFND/MAINT_SERVICE .
For more information, see SAP's OData configuration .
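Assuming the standard SAP Gateway OData path prefix, the service root and $metadata URLs that Power Query consumes look like the sketch below. Host and service name are placeholders; GWSAMPLE_BASIC is the sample service referenced later in this article:

```python
# Sketch: URL pattern under which SAP Gateway exposes an activated OData
# service. Host and service name are placeholders.

def service_root(gateway_host: str, service: str) -> str:
    """Service root under the standard SAP Gateway OData path prefix."""
    return f"https://{gateway_host}/sap/opu/odata/sap/{service}/"

def metadata_url(gateway_host: str, service: str) -> str:
    """$metadata document that Power Query reads to discover entity sets."""
    return service_root(gateway_host, service) + "$metadata"

print(metadata_url("sap-gw.contoso.com", "GWSAMPLE_BASIC"))
# https://sap-gw.contoso.com/sap/opu/odata/sap/GWSAMPLE_BASIC/$metadata
```

With API Management in front, the gateway host would be replaced by your custom API Management domain.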
If custom domain for Azure API Management isn't an option for you, you need to
use a custom Power Query Connector instead.
XML
<!-- If an empty Bearer token is supplied, assume a Power Query sign-in request
     as described in /power-query/connectorauthentication#supported-workflow -->
<when condition="@(context.Request.Headers.GetValueOrDefault("Authorization","").Trim().Equals("Bearer"))">
    <return-response>
        <set-status code="401" reason="Unauthorized" />
        <set-header name="WWW-Authenticate" exists-action="override">
            <!-- Check the client ID for Power Query in
                 /power-query/connectorauthentication#supported-workflow -->
            <value>Bearer authorization_uri=https://login.microsoftonline.com/{{AADTenantId}}/oauth2/authorize?response_type=code%26client_id=a672d62c-fc7b-4e81-a576-e60dc46e951d</value>
        </set-header>
    </return-response>
</when>
In addition to supporting the Organizational Account login flow, the policy supports
OData URL response rewriting, because the target server replies with its original URLs.
The following snippet is taken from the mentioned policy:
XML
<!-- URL rewrite in body only required for GET operations -->
<when condition="@(context.Request.Method == "GET")">
    <!-- Ensure downstream API metadata matches the Azure API Management caller domain in Power Query -->
    <find-and-replace from="@(context.Api.ServiceUrl.Host + ":" + context.Api.ServiceUrl.Port + context.Api.ServiceUrl.Path)"
                      to="@(context.Request.OriginalUrl.Host + ":" + context.Request.OriginalUrl.Port + context.Api.Path)" />
</when>
Note
For more information about secure SAP access from the Internet and SAP perimeter
network design, see this guide. Regarding securing SAP APIs with Azure, see this
article.
Switch the login method to Organizational account and click Sign in. Supply the
Microsoft Entra account that is mapped to the named SAP user on the SAP Gateway
using SAP Principal Propagation. For more information about the configuration, see this
Microsoft tutorial. Learn more about SAP Principal Propagation from this SAP
community post and this video series .
Continue by choosing at which level Power Query on Excel should apply the
authentication settings. The example below shows a setting that would apply to all
OData services hosted on the target SAP system (not only to the sample service
GWSAMPLE_BASIC).
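As a sketch of what Power Query generates behind the scenes, the connection uses the standard OData connector against the Azure API Management front end. The domain api.contoso.com and the service path below are hypothetical placeholders for your own Azure API Management custom domain and published SAP Gateway service:

```powerquery-m
let
    // OData service exposed by SAP Gateway, published through Azure API Management
    // (hypothetical URL - substitute your own custom domain and API path)
    Source = OData.Feed(
        "https://api.contoso.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/",
        null,
        [Implementation = "2.0"]
    )
in
    Source
```

With the Organizational account login method selected, Power Query acquires the Microsoft Entra token that the API Management policy expects in the Authorization header.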
Note
The authorization scope setting at URL level in the screen below is independent of the
actual authorizations on the SAP back end. SAP Gateway remains the final validator
of each request and of the associated authorizations of the mapped named SAP user.
Important
Learn more about SAP Principal Propagation from this SAP community post and
this video series .
The policy relies on an established SSO setup between Microsoft Entra ID and SAP
Gateway (use SAP NetWeaver from the Microsoft Entra gallery). The example below uses
the demo user Adele Vance. User mapping between Microsoft Entra ID and the SAP
system happens based on the user principal name (UPN) as the unique user identifier.
The UPN mapping is maintained on the SAP back end using transaction SAML2.
According to this configuration, named SAP users are mapped to the respective
Microsoft Entra user. Below is an example configuration from the SAP back end using
transaction code SU01.
For more information about the required SAP OAuth 2.0 Server with AS ABAP
configuration, see this Microsoft tutorial about SSO with SAP NetWeaver using OAuth.
Using the described Azure API Management policies, any Power Query enabled
Microsoft product can call SAP hosted OData services while honoring the SAP named
user mapping.
SAP OData access via other Power Query
enabled applications and services
The example above shows the flow for Excel Desktop, but the approach is applicable to
any Power Query OData enabled Microsoft product. For more information on the OData
connector of Power Query and which products support it, see the Power Query
Connectors documentation. For more information about which products support Power
Query in general, see the Power Query documentation.
Popular consumers are Power BI, Excel for the web, Power Apps (Dataflows), and
Analysis Services.
Note
Use the Azure API Management policy for SAP to handle the authentication,
refresh tokens, CSRF tokens and overall caching of tokens outside of the flow.
Next steps
Learn from where you can use OData with Power Query
Understand Azure Application Gateway and Web Application Firewall for SAP
Printing from your SAP landscape is a requirement for many customers. Depending on
your business, printing needs can arise in different areas and SAP applications.
Examples include data list printing, mass printing, or label printing. Such production
and batch print scenarios are often solved with specialized hardware, drivers, and
printing solutions. This article addresses options to use Universal Print for SAP
front-end printing by SAP users.
Prerequisites
SAP front-end printing sends output to a printer available to the user on their
front-end device. In other words, a printer accessible by the operating system of the
same client computer that runs SAP GUI or the browser. To use Universal Print, you
need access to such printers.
See the Universal Print documentation for details on these prerequisites. As a result, one
or more Universal Print printers are visible in your device’s printer list. For SAP front-end
printing, it's not necessary to make it your default printer.
With such an SAP printer definition, SAP GUI uses the operating system printer details.
The operating system already knows your added Universal Print printers. As with SAP
web applications, there's no direct communication between the SAP system and the
Universal Print APIs. There are no settings to configure on your SAP system beyond the
available output device for front-end printing.
When using SAP GUI for HTML and front-end printing, you can print to an SAP defined
printer, too. In the SAP system, you need a front-end printer with access method ‘G’ and
a device type of PDF or a derivative. For more information, see SAP’s documentation.
Such print output is displayed in the browser as a PDF from the SAP system. You open
the common OS printing dialog and select a Universal Print printer installed on your
computer.
Limitations
SAP defines front-end printing with several constraints. It can't be used for
background printing, nor should it be relied upon for production or mass printing.
Verify that your SAP printer definition is correct, as printers with access method ‘F’
don't work correctly with current SAP releases. More details can be found in SAP note
2028598 - Technical changes for front-end printing with access method F.
Next steps
Check out the documentation:
7 Note
This reference is part of the sap-hana extension for the Azure CLI (version 2.0.46 or
higher). The extension will automatically install the first time you run an az
hanainstance command. Learn more about extensions.
Commands
az hanainstance update: Update the Tags field of an SAP HANA instance. (Extension, GA)
az hanainstance create
Azure CLI
Required Parameters
--instance-name -n
--ip-address
--location -l
Location of the SAP HANA instance. Default is the location of the target resource group.
--os-computer-name
--partner-node-id
ARM ID of a HANA Instance on the network to connect the SAP HANA instance.
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--ssh-public-key
Global Parameters
--debug
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
az hanainstance delete
Azure CLI
Optional Parameters
--ids
--instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription
--verbose
az hanainstance list
Azure CLI
Optional Parameters
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Global Parameters
--debug
--help -h
--only-show-errors
Only show errors, suppressing warnings.
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
az hanainstance restart
Azure CLI
Optional Parameters
--ids
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose
az hanainstance show
Azure CLI
Optional Parameters
--ids
--instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
az hanainstance shutdown
Azure CLI
Optional Parameters
--ids
--instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
az hanainstance start
Azure CLI
Optional Parameters
--ids
--instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
az hanainstance update
Azure CLI
Optional Parameters
--add
Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>.
default value: []
--force-string
When using 'set' or 'add', preserve string literals instead of attempting to convert to
JSON.
default value: False
--ids
--instance-name -n
--no-wait
Do not wait for the long-running operation to finish.
default value: False
--remove
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--set
Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.
default value: []
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Update-AzSapMonitor: Patches the Tags field of an SAP monitor for the specified subscription, resource group, and monitor name.
SAP HANA database
Article • 01/24/2024
Summary
Products: Excel, Power BI (Semantic models), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Analysis Services
Note
Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.
Prerequisites
You'll need an SAP account to sign in to the website and download the drivers. If you're
unsure, contact the SAP administrator in your organization.
To use SAP HANA in Power BI Desktop or Excel, you must have the SAP HANA ODBC
driver installed on the local client computer for the SAP HANA data connection to work
properly. You can download the SAP HANA Client tools from SAP Development Tools ,
which contains the necessary ODBC driver. Or you can get it from the SAP Software
Download Center . In the Software portal, search for the SAP HANA CLIENT for
Windows computers. Since the SAP Software Download Center changes its structure
frequently, more specific guidance for navigating that site isn't available. For instructions
about installing the SAP HANA ODBC driver, go to Installing SAP HANA ODBC Driver on
Windows 64 Bits .
To use SAP HANA in Excel, you must have either the 32-bit or 64-bit SAP HANA ODBC
driver (depending on whether you're using the 32-bit or 64-bit version of Excel) installed
on the local client computer.
This feature is only available in Excel for Windows if you have Office 2019 or a Microsoft
365 subscription . If you're a Microsoft 365 subscriber, make sure you have the latest
version of Office .
HANA 1.0 SPS 12 (rev 122.09), HANA 2.0 SPS 3 (rev 30), and BW/4HANA 2.0 are supported.
Capabilities supported: Import, Direct Query (Power BI semantic models), Advanced (SQL Statement)
1. Select Get Data > SAP HANA database in Power BI Desktop or From Database >
From SAP HANA Database in the Data ribbon in Excel.
2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .
By default, the port number is set to support a single container database. If your
SAP HANA database can contain more than one multitenant database container,
select Multi-container system database (30013). If you want to connect to a
tenant database or a database with a non-default instance number, select Custom
from the Port drop-down menu.
If you're connecting to an SAP HANA database from Power BI Desktop, you're also
given the option of selecting either Import or DirectQuery. The example in this
article uses Import, which is the default (and the only mode for Excel). For more
information about connecting to the database using DirectQuery in Power BI
Desktop, go to Connect to SAP HANA data sources by using DirectQuery in Power
BI.
You can also enter an SQL statement or enable column binding from Advanced
options. More information: Connect using advanced options.
3. If you're accessing a database for the first time, you'll be asked to enter your
credentials for authentication. In this example, the SAP HANA server requires
database user credentials, so select Database and enter your user name and
password. If necessary, enter your server certificate information.
Also, you may need to validate the server certificate. For more information about
using validate server certificate selections, see Using SAP HANA encryption. In
Power BI Desktop and Excel, the validate server certificate selection is enabled by
default. If you've already set up these selections in ODBC Data Source
Administrator, clear the Validate server certificate check box. To learn more about
using ODBC Data Source Administrator to set up these selections, go to Configure
SSL for ODBC client access to SAP HANA.
4. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.
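The steps above correspond roughly to the following M expression, which Power Query generates for you rather than you typing it by hand. The server name and port are taken from the example; this is only a sketch of the generated query:

```powerquery-m
let
    // Connect to the example SAP HANA server on the single-container port
    Source = SapHana.Database("SAPHANATestServer:30015", [Implementation = "2.0"])
in
    Source
```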
2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .
4. Select the name of the on-premises data gateway to use for accessing the
database.
Note
You must use an on-premises data gateway with this connector, whether your
data is local or online.
5. Choose the authentication kind you want to use to access your data. You'll also
need to enter a username and password.
8. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.
The following table describes all of the advanced options you can set in Power Query.
SQL Statement: More information: Import data from a database using native database query.
Enable column binding: Binds variables to the columns of a SAP HANA result set when fetching data. May potentially improve performance at the cost of slightly higher memory utilization. This option is only available in Power Query Desktop. More information: Enable column binding.
ConnectionTimeout: A duration that controls how long to wait before abandoning an attempt to make a connection to the server. The default value is 15 seconds.
CommandTimeout: A duration that controls how long the server-side query is allowed to run before it is canceled. The default value is ten minutes.
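A minimal sketch combining these advanced options in a single connection expression. The server name and Calculation View path are hypothetical placeholders, and the option names follow the list above (Query carries the SQL Statement option):

```powerquery-m
SapHana.Database(
    "myhanaserver:30015",
    [
        Implementation = "2.0",
        // SQL Statement advanced option (hypothetical view path)
        Query = "select * from ""_SYS_BIC"".""DEMO/CV_SAMPLE""",
        // Bind result set columns while fetching (Power Query Desktop only)
        EnableColumnBinding = true,
        // Abandon the connection attempt after 15 seconds
        ConnectionTimeout = #duration(0, 0, 0, 15),
        // Cancel the server-side query after 10 minutes
        CommandTimeout = #duration(0, 0, 10, 0)
    ]
)
```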
Both the Power BI Desktop and Excel connector for an SAP HANA database use the
SAP ODBC driver to provide the best user experience.
In Power BI Desktop, SAP HANA supports both DirectQuery and Import options.
With SAP HANA, you can also use SQL commands in the native database query
SQL statement to connect to Row and Column Tables in HANA Catalog tables,
which aren't included in the Analytic/Calculation Views provided by the Navigator
experience. You can also use the ODBC connector to query these tables.
There are currently some limitations for HANA variables attached to HDI-based
Calculation Views. These limitations are because of errors on the HANA side.
First, it isn't possible to apply a HANA variable to a shared column of an HDI-
container-based Calculation View. To fix this limitation, upgrade to HANA 2
version 37.02 and onwards or to HANA 2 version 42 and onwards. Second,
multi-entry default values for variables and parameters currently don't show up
in the Power BI UI. An error in SAP HANA causes this limitation, but SAP hasn't
announced a fix yet.
Currently, when you use Power Query Desktop to connect to an SAP HANA database,
you can select the Enable column binding advanced option to enable column binding.
You can also enable column binding in existing queries or in queries used in Power
Query Online by manually adding the EnableColumnBinding option to the connection in
the Power Query formula bar or advanced editor.
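A minimal sketch of such a manually edited connection, with a hypothetical server name standing in for your own:

```powerquery-m
let
    // EnableColumnBinding added by hand to the options record
    Source = SapHana.Database(
        "myhanaserver:30015",
        [Implementation = "2.0", EnableColumnBinding = true]
    )
in
    Source
```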
There are limitations associated with manually adding the EnableColumnBinding option:
Enable column binding works in both Import and DirectQuery mode. However,
retrofitting an existing DirectQuery query to use this advanced option isn't
possible. Instead, a new query must be created for this feature to work correctly.
In SAP HANA Server version 2.0 or later, column binding is all or nothing. If some
columns can’t be bound, none will be bound, and the user will receive an
exception, for example, DataSource.Error: Column MEASURE_UNIQUE_NAME of type
VARCHAR cannot be bound (20002 > 16384) .
SAP HANA version 1.0 servers don't always report correct column lengths. In this
context, EnableColumnBinding allows for partial column binding. For some queries,
this could mean that no columns are bound. When no columns are bound, no
performance benefits are gained.
Note
In the Power Query SAP HANA database connector, native queries don't support
duplicate column names when EnableFolding is set to true.
Unlike other connectors, the SAP HANA database connector supports EnableFolding =
True and specifying parameters at the same time.
To use parameters in a query, you place question marks (?) in your code as placeholders.
To specify the parameter, you use the SqlType text value and a value for that SqlType in
Value . Value can be any M value, but must be assigned to the value of the specified
SqlType .
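As a minimal sketch under the conventions above (the server name and Calculation View are hypothetical placeholders), each ? placeholder in the query is matched, in order, by one record with SqlType and Value:

```powerquery-m
let
    Source = Value.NativeQuery(
        SapHana.Database("myhanaserver:30015", [Implementation = "2.0"]),
        "select * from ""_SYS_BIC"".""DEMO/CV_SAMPLE"" where ""COL_INT"" = ?",
        // One record per ? placeholder, in order of appearance
        { [ SqlType = "INTEGER", Value = 42 ] },
        [EnableFolding = true]
    )
in
    Source
```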
SqlType follows the standard type names defined by SAP HANA. For example, the following type names can be used:
BIGINT
BINARY
BOOLEAN
CHAR
DATE
DECIMAL
DOUBLE
INTEGER
NVARCHAR
SECONDDATE
SHORTTEXT
SMALLDECIMAL
SMALLINT
TIME
TIMESTAMP
VARBINARY
VARCHAR
Power Query M
let
Source = Value.NativeQuery(
SapHana.Database(
"myhanaserver:30015",
[Implementation = "2.0"]
),
"select ""VARCHAR_VAL"" as ""VARCHAR_VAL""
from ""_SYS_BIC"".""DEMO/CV_ALL_TYPES""
where ""VARCHAR_VAL"" = ? and ""DATE_VAL"" = ?
group by ""VARCHAR_VAL""
",
{"Seattle", #date(1957, 6, 13)},
[EnableFolding = true]
)
in
Source
The following example demonstrates how to provide a list of records (or mix values and
records):
Power Query M
let
    Source = Value.NativeQuery(
        SapHana.Database(Server, [Implementation = "2.0"]),
        "select
            ""COL_VARCHAR"" as ""COL_VARCHAR"",
            ""ID"" as ""ID"",
            sum(""DECIMAL_MEASURE"") as ""DECIMAL_MEASURE""
        from ""_SYS_BIC"".""DEMO/CV_ALLTYPES""
        where
            ""COL_ALPHANUM"" = ? or
            ""COL_BIGINT"" = ? or
            ""COL_BINARY"" = ? or
            ""COL_BOOLEAN"" = ? or
            ""COL_DATE"" = ?
        group by
            ""COL_ALPHANUM"",
            ""COL_BIGINT"",
            ""COL_BINARY"",
            ""COL_BOOLEAN"",
            ""COL_DATE""
        ",
        {
            [ SqlType = "CHAR", Value = "M" ],                                              // COL_ALPHANUM - CHAR
            [ SqlType = "BIGINT", Value = 4 ],                                              // COL_BIGINT - BIGINT
            [ SqlType = "BINARY", Value = Binary.FromText("AKvN", BinaryEncoding.Base64) ], // COL_BINARY - BINARY
            [ SqlType = "BOOLEAN", Value = true ],                                          // COL_BOOLEAN - BOOLEAN
            [ SqlType = "DATE", Value = #date(2022, 5, 27) ]                                // COL_DATE - TYPE_DATE
        },
        [EnableFolding = false]
    )
in
    Source
Before, when you added a table column (or another transformation that internally adds
a column), the query would "drop out of cube space", and all operations would be done
at a table level. At some point, this drop out could cause the query to stop folding.
Performing cube operations after adding a column was no longer possible.
With this change, the added columns are treated as dynamic attributes within the cube.
Having the query remain in cube space for this operation has the advantage of letting
you continue using cube operations even after adding columns.
Note
This new functionality is only available when you connect to Calculation Views in
SAP HANA Server version 2.0 or higher.
The following sample query takes advantage of this new capability. In the past, you
would get a "the value is not a cube" exception when applying
Cube.CollapseAndRemoveColumns.
Power Query M
let
    Source = SapHana.Database("someserver:someport", [Implementation = "2.0"]),
    Contents = Source{[Name = "Contents"]}[Data],
    SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models = Contents{[Name = "SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models"]}[Data],
    PURCHASE_ORDERS1 = SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models{[Name = "PURCHASE_ORDERS"]}[Data],
    #"Added Items" = Cube.Transform(
        PURCHASE_ORDERS1,
        {
            {Cube.AddAndExpandDimensionColumn, "[PURCHASE_ORDERS]", {"[HISTORY_CREATEDAT].[HISTORY_CREATEDAT].Attribute", "[Product_TypeCode].[Product_TypeCode].Attribute", "[Supplier_Country].[Supplier_Country].Attribute"}, {"HISTORY_CREATEDAT", "Product_TypeCode", "Supplier_Country"}},
            {Cube.AddMeasureColumn, "Product_Price", "[Measures].[Product_Price]"}
        }
    ),
    #"Inserted Year" = Table.AddColumn(#"Added Items", "Year", each Date.Year([HISTORY_CREATEDAT]), Int64.Type),
    #"Filtered Rows" = Table.SelectRows(#"Inserted Year", each ([Product_TypeCode] = "PR")),
    #"Added Conditional Column" = Table.AddColumn(#"Filtered Rows", "Region", each if [Supplier_Country] = "US" then "North America" else if [Supplier_Country] = "CA" then "North America" else if [Supplier_Country] = "MX" then "North America" else "Rest of world"),
    #"Filtered Rows1" = Table.SelectRows(#"Added Conditional Column", each ([Region] = "North America")),
    #"Collapsed and Removed Columns" = Cube.CollapseAndRemoveColumns(#"Filtered Rows1", {"HISTORY_CREATEDAT", "Product_TypeCode"})
in
    #"Collapsed and Removed Columns"
Next steps
Enable encryption for SAP HANA
The following articles contain more information that you might find useful when
connecting to an SAP HANA database.
You can connect to SAP HANA data sources directly using DirectQuery. There are two
options when connecting to SAP HANA:
Treat SAP HANA as a multi-dimensional source (the default): In this case, Power BI
treats SAP HANA similarly to a multi-dimensional source such as SAP Business
Warehouse or Analysis Services. With this approach, aggregations are handled by
SAP HANA, which generally ensures correct aggregate values.
Treat SAP HANA as a relational source: In this case, Power BI treats SAP HANA as
a relational source. This approach offers greater flexibility. Care must be taken with
this approach to ensure that measures are aggregated as expected, and to avoid
performance issues.
The connection approach is determined by a global tool option, which is set by selecting
File > Options and settings and then Options > DirectQuery, then selecting the option
Treat SAP HANA as a relational source, as shown in the following image.
The option to treat SAP HANA as a relational source controls the approach used for any
new report using DirectQuery over SAP HANA. It has no effect on any existing SAP
HANA connections in the current report, nor on connections in any other reports that
are opened. So if the option is currently unchecked, then upon adding a new connection
to SAP HANA using Get Data, that connection is made treating SAP HANA as a multi-
dimensional source. However, if a different report is opened that also connects to SAP
HANA, then that report continues to behave according to the option that was set at the
time it was created. This fact means that any reports connecting to SAP HANA that were
created prior to February 2018 continue to treat SAP HANA as a relational source.
The two approaches constitute different behavior, and it's not possible to switch an
existing report from one approach to the other.
In the Get Data Navigator, a single SAP HANA view can be selected. It isn't
possible to select individual measures or attributes. There's no query defined at the
time of connecting, which is different from importing data or when using
DirectQuery while treating SAP HANA as a relational source. This consideration
also means that it's not possible to directly use an SAP HANA SQL query when
selecting this connection method.
All the measures, hierarchies, and attributes of the selected view are displayed in
the field list.
To ensure the correct aggregate values can always be obtained from SAP HANA,
certain restrictions must be imposed. For example, it's not possible to add
calculated columns, or to combine data from multiple SAP HANA views within the
same report.
Treating SAP HANA as a multi-dimensional source doesn't offer the greater flexibility
provided by the alternative relational approach, but it's simpler. The approach also
ensures correct aggregate values when dealing with more complex SAP HANA
measures, and generally results in higher performance.
The Field list includes all measures, attributes, and hierarchies from the SAP HANA view.
Note the following behaviors that apply when using this connection method:
In SAP HANA, an attribute can be defined to use another attribute as its label. For
example, Product, with values 1 , 2 , 3 , and so on, could use ProductName, with
values Bike , Shirt , Gloves , and so on, as its label. In this case, a single field
Product is shown in the field list, whose values are the labels Bike , Shirt , Gloves ,
and so on, but which is sorted by, and with uniqueness determined by, the key
values 1 , 2 , 3 . A hidden column Product.Key is also created, allowing access to
the underlying key values if necessary.
Any variables defined in the underlying SAP HANA view are displayed at the time of
connecting, and the necessary values can be entered. Those values can later be changed
by selecting Transform data from the ribbon, and then Edit parameters from the
dropdown menu displayed.
The modeling operations allowed are more restrictive than in the general case when
using DirectQuery, given the need to ensure that correct aggregate data can always be
obtained from SAP HANA. However, it's still possible to make many additions and
changes, including defining measures, renaming and hiding fields, and defining display
formats. All such changes are preserved on refresh, and any non-conflicting changes
made to the SAP HANA view are applied.
It's useful to start by clarifying the behavior of a relational source such as SQL Server,
when the query defined in Get Data or Power Query Editor performs an aggregation. In
the example that follows, a query defined in Power Query Editor returns the average
price by ProductID.
If the data is being imported into Power BI versus using DirectQuery, the following
situation would result:
The data is imported at the level of aggregation defined by the query created in
Power Query Editor. For example, average price by product. This fact results in a
table with the two columns ProductID and AveragePrice that can be used in visuals.
In a visual, any subsequent aggregation, such as Sum, Average, Min, and others, is
performed over that imported data. For example, including AveragePrice on a
visual uses the Sum aggregate by default, and would return the sum over the
AveragePrice for each ProductID, in this example, 13.67. The same applies to any
alternative aggregate function, such as Min or Average, used on the visual. For
example, Average of AveragePrice returns the average of 6.66, 4 and 3, which
equates to 4.56, and not the average of Price on the six records in the underlying
table, which is 5.17.
If DirectQuery over that same relational source is being used instead of Import, the
same semantics apply and the results would be exactly the same:
Given the same query, logically exactly the same data is presented to the reporting
layer – even though the data isn't actually imported.
In a visual, any subsequent aggregation, such as Sum, Average, and Min, is again
performed over that logical table from the query. And again, a visual containing
Average of AveragePrice returns the same 4.56.
Consider SAP HANA when the connection is treated as a relational source. Power BI can
work with both Analytic Views and Calculation Views in SAP HANA, both of which can
contain measures. Yet today the approach for SAP HANA follows the same principles as
described previously in this section: the query defined in Get Data or Power Query
Editor determines the data available, and then any subsequent aggregation in a visual is
over that data, and the same applies for both Import and DirectQuery. However, given
the nature of SAP HANA, the query defined in the initial Get Data dialog or Power
Query Editor is always an aggregate query, and generally includes measures where the
actual aggregation that is used is defined by the SAP HANA view.
The equivalent of the previous SQL Server example is that there's an SAP HANA view
containing ID, ProductID, DepotID, and measures including AveragePrice, defined in the
view as Average of Price.
If in the Get Data experience, the selections made were for ProductID and the
AveragePrice measure, then that is defining a query over the view, requesting that
aggregate data. In the earlier example, for simplicity pseudo-SQL is used that doesn’t
match the exact syntax of SAP HANA SQL. Then any further aggregations defined in a
visual are further aggregating the results of such a query. Again, as described previously
for SQL Server, this result applies both for the Import and DirectQuery case. In the
DirectQuery case, the query from Get Data or Power Query Editor is used in a
subselect within a single query sent to SAP HANA, and thus it isn't actually the case that
all the data would be read in, prior to aggregating further.
In Get Data or Power Query Editor, only the required columns should be included
to retrieve the necessary data, reflecting the fact that the result is a query that
must be a reasonable query that can be sent to SAP HANA. For example, if dozens
of columns were selected, with the thought that they might be needed on
subsequent visuals, then even for DirectQuery a simple visual means the aggregate
query used in the subselect contains those dozens of columns, which generally
performs poorly.
In the following example, selecting five columns (CalendarQuarter, Color, LastName,
ProductLine, SalesOrderNumber) in the Get Data dialog, along with the measure
OrderQuantity, means that later creating a simple visual containing the Min
OrderQuantity results in the following SQL query to SAP HANA. The shaded area is the
subselect, containing the query from Get Data / Power Query Editor. If this subselect
gives a high cardinality result, then the resulting SAP HANA performance is likely to be
poor.
Because of this behavior, we recommend the items selected in Get Data or Power Query
Editor be limited to those items that are needed, while still resulting in a reasonable
query for SAP HANA.
Best practices
For both approaches to connecting to SAP HANA, the general recommendations for
using DirectQuery also apply, particularly recommendations related to ensuring
good performance. For more information, see Use DirectQuery in Power BI.
Parent-child hierarchies: Parent-child hierarchies aren't visible in Power BI,
because Power BI accesses SAP HANA by using the SQL interface, and parent-child
hierarchies can't be fully accessed through SQL.
Other hierarchy metadata: The basic structure of hierarchies is displayed in
Power BI, but some hierarchy metadata, such as settings that control the
behavior of ragged hierarchies, has no effect. Again, this limitation is
imposed by the SQL interface.
Connection using TLS: You can connect by using Import and the multidimensional
connector with TLS, but you can't connect to SAP HANA instances configured to
use TLS through the relational connector.
Support for Attribute views: Power BI can connect to Analytic and Calculation
views, but can't connect directly to Attribute views.
Support for Catalog objects: Power BI can't connect to Catalog objects.
Changes to variables after publishing: After a report is published, you can't
change the values of any SAP HANA variables directly in the Power BI service.
Known issues
The following list describes all known issues when connecting to SAP HANA
(DirectQuery) using Power BI.
SAP HANA issue when querying Counters and other measures: Incorrect data is
returned from SAP HANA if you connect to an Analytic View and include a Counter
measure and some other ratio measure in the same visual. This issue is covered
by SAP Note 2128928 (Unexpected results when querying a Calculated Column and
a Counter). The ratio measure is incorrect in this case.
Multiple Power BI columns from single SAP HANA column: For some calculation
views, where an SAP HANA column is used in more than one hierarchy, SAP HANA
exposes the column as two separate attributes. This approach results in two
columns being created in Power BI. Those columns are hidden by default, however,
and all queries involving the hierarchies, or the columns directly, behave correctly.
Related content
For more information about DirectQuery, check out the following resources:
DirectQuery in Power BI
Data sources supported by DirectQuery
DirectQuery and SAP BW
On-premises data gateway
Use the SAP Business Warehouse
connector in Power BI Desktop
Article • 03/26/2024
You can use Power BI Desktop to access SAP Business Warehouse (SAP BW) data. The
SAP BW Connector Implementation 2.0 has significant improvements in performance
and capabilities over version 1.0.
For information about how SAP customers can benefit from connecting Power BI to their
SAP BW systems, see the Power BI and SAP BW whitepaper . For details about using
DirectQuery with SAP BW, see DirectQuery and SAP Business Warehouse (BW).
Prerequisite
Implementation 2.0 of the SAP Connector requires the SAP .NET Connector 3.0 or 3.1.
You can download the SAP .NET Connector 3.0 or 3.1 from SAP. Access to the
download requires a valid S-user sign-in.
The .NET Framework connector comes in 32-bit and 64-bit versions. Choose the version
that matches your Power BI Desktop installation version.
When you install, in Optional setup steps, make sure you select Install assemblies to
GAC.
Note
The first version of the SAP BW Connector required the NetWeaver DLLs. The
current version doesn't require NetWeaver DLLs.
1. In Power BI Desktop, select Home > Get Data.
2. On the Get Data screen, select Database, and then select either SAP Business
Warehouse Application Server or SAP Business Warehouse Message Server.
3. Select Connect.
4. On the next screen, enter server, system, and client information, and choose
whether to use the Import or DirectQuery connectivity method.
Note
You can use the SAP BW Connector to import data from your SAP BW Server
cubes, which is the default, or you can use DirectQuery to connect to the data.
For more information about using the SAP BW Connector with DirectQuery,
see DirectQuery and SAP Business Warehouse (BW).
You can also select Advanced options, and select a Language code, a custom
MDX statement to run against the specified server, and other options. For more
information, see Use advanced options.
6. Provide any necessary authentication data and select Connect. For more
information about authentication, see Authentication with a data source.
7. If you didn't specify a custom MDX statement, the Navigator screen shows a list of
all cubes available on the server. You can drill down and select items from the
available cubes, including dimensions and measures. Power BI shows queries and
cubes that the Open Analysis Interfaces expose.
When you select one or more items from the server, the Navigator shows a
preview of the output table. The Navigator also provides the following display
options:
Only selected items. By default, Navigator displays all items. This option is
useful to verify the final set of items you select. Alternatively, you can select
the column names in the preview area to view the selected items.
Enable data previews. This value is the default, and displays data previews.
Deselect this option to reduce the number of server calls by no longer
requesting preview data.
Technical names. SAP BW supports user-defined technical names for objects
within a cube. Cube owners can expose these friendly names for cube
objects, instead of exposing only the physical names for the objects.
8. After you select all the objects you want, choose one of the following options:
Load to load the entire set of rows for the output table into the Power BI
Desktop data model. The Report view opens. You can begin visualizing the
data, or make further modifications by using the Data or Model views.
Transform Data to open Power Query Editor with the data. You can specify
more data transformation and filtering steps before you bring the entire set
of rows into the Power BI Desktop data model.
Along with data from SAP BW cubes, you can also import data from a wide range of
other data sources in Power BI Desktop, and combine them into a single report. This
ability presents many interesting scenarios for reporting and analytics on top of SAP BW
data.
Advanced options
You can set the following options under Advanced options on the SAP BW connection
screen:
Execution mode specifies how the MDX interface executes queries on the server.
The following options are valid:
BasXml
BasXmlGzip
DataStream
The default value is BasXmlGzip. This mode can improve performance for low
latency or high volume queries.
Batch size specifies the maximum number of rows to retrieve at a time when
executing an MDX statement. A small number means more calls to the server while
retrieving a large semantic model. A large value might improve performance, but
could cause memory issues on the SAP BW server. The default value is 50000.
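In a query, these advanced options appear as fields of the connector's options record. The following is a sketch: the ExecutionMode enum and field names are assumptions about the connector's M API, and the server, system, and client values are placeholders.

```powerquery-m
Source = SapBusinessWarehouse.Cubes("server", "system", "client",
    [
        Implementation = "2.0",
        ExecutionMode = SapBusinessWarehouseExecutionMode.BasXmlGzip,
        BatchSize = 50000
    ])
```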
Other improvements
The following list describes other Implementation 2.0 improvements:
Better performance.
Ability to retrieve several million rows of data, and fine-tuning through the batch
size parameter.
Ability to switch execution modes.
Support for compressed mode, especially beneficial for high-latency connections
or large semantic models.
Improved detection of Date variables.
Date (ABAP type DATS ) and Time (ABAP type TIMS ) dimensions exposed as dates
and times, instead of text values. For more information, see Support for typed
dates in SAP BW.
Better exception handling. Errors that occur in BAPI calls are now surfaced.
Column folding in BasXml and BasXmlGzip modes. For example, if the generated
MDX query retrieves 40 columns but the current selection only needs 10, this
request passes on to the server to retrieve a smaller semantic model.
1. From the existing report in Power BI Desktop, select Transform data in the ribbon,
and then select the SAP Business Warehouse query to update.
4. Determine whether the query already contains an options record as the last
argument to the source function.
If it does, add the Implementation="2.0" option to that record, and remove any
ScaleMeasures option.
5. If the query doesn't already include an options record, add one that sets
Implementation="2.0".
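The edit in steps 4 and 5 takes roughly the following shape. This is a sketch; the server, system, and client values are placeholders.

```powerquery-m
// Before: no options record
Source = SapBusinessWarehouse.Cubes("server", "system", "client")

// After: an options record that enables Implementation 2.0
Source = SapBusinessWarehouse.Cubes("server", "system", "client",
    [Implementation = "2.0"])
```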
Troubleshooting
This section provides some troubleshooting situations and solutions for the SAP BW
connector. For more information, see SAP Business Warehouse connector
troubleshooting.
Incorrect decimal separators
SAP BW returns decimal data with either a comma or a period as the decimal
separator. To determine which of these characters to use, the Power BI Desktop
driver calls BAPI_USER_GET_DETAIL . This call returns a structure called
DEFAULTS , which has a field called DCPFM that stores the user's Decimal Format
Notation setting. When this issue occurs, the call to BAPI_USER_GET_DETAIL
fails for a particular user, who then gets misformatted data along with an
error message indicating that the call failed.
To solve this error, the SAP admin must grant the Power BI SAP BW user the right to
execute BAPI_USER_GET_DETAIL . Also, verify that the user's data has the correct DCPFM
value.
Missing authorization for BAPI calls
SAP users need access to the following BAPI function modules to get metadata
and retrieve data from SAP BW's InfoProviders:
BAPI_MDPROVIDER_GET_CATALOGS
BAPI_MDPROVIDER_GET_CUBES
BAPI_MDPROVIDER_GET_DIMENSIONS
BAPI_MDPROVIDER_GET_HIERARCHYS
BAPI_MDPROVIDER_GET_LEVELS
BAPI_MDPROVIDER_GET_MEASURES
BAPI_MDPROVIDER_GET_MEMBERS
BAPI_MDPROVIDER_GET_VARIABLES
BAPI_IOBJ_GETDETAIL
To solve this issue, verify that the user has access to the MDPROVIDER modules and
BAPI_IOBJ_GETDETAIL .
Enable tracing
To further troubleshoot these or similar issues, you can enable tracing:
1. In Power BI Desktop, select File > Options and settings > Options.
2. In Options, select Diagnostics, and then select Enable tracing under Diagnostic
Options.
3. Try to get data from SAP BW while tracing is active, and examine the trace file for
more detail.
Related content
SAP BW fundamentals
DirectQuery and SAP HANA
DirectQuery and SAP Business Warehouse (BW)
Use DirectQuery in Power BI
Power BI data sources
Power BI and SAP BW whitepaper
What is Azure Center for SAP solutions?
Article • 05/15/2023
Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload
on Azure. Azure Center for SAP solutions is an end-to-end solution that enables you to
create and run SAP systems as a unified workload on Azure and provides a more
seamless foundation for innovation. You can take advantage of the management
capabilities for both new and existing Azure-based SAP systems.
The guided deployment experience takes care of creating the compute, storage,
and networking components needed to run your SAP system. Azure Center for SAP
solutions then helps automate the installation of the SAP software according to
Microsoft best practices.
In Azure Center for SAP solutions, you either create a new SAP system or register an
existing one, which then creates a Virtual Instance for SAP solutions (VIS). The VIS brings
SAP awareness to Azure by providing management capabilities, such as being able to
see the status and health of your SAP systems. Another example is quality checks and
insights, which allow you to know when your system isn't following documented best
practices and standards.
You can use Azure Center for SAP solutions to deploy the following types of SAP
systems:
Single server
Distributed
Distributed with High Availability (HA)
For existing SAP systems that run on Azure, there's a simple registration
experience.
Azure Center for SAP solutions brings services, tools and frameworks together to
provide an end-to-end unified experience for deployment and management of SAP
workloads on Azure, creating the foundation for you to build innovative solutions for
your unique requirements.
What is a Virtual Instance for SAP solutions?
When you use Azure Center for SAP solutions, you'll create a Virtual Instance for SAP
solutions (VIS) resource. The VIS is a logical representation of an SAP system on Azure.
Every time that you create a new SAP system through Azure Center for SAP solutions, or
register an existing SAP system to Azure Center for SAP solutions, Azure creates a VIS. A
VIS contains the metadata for the entire SAP system, which includes:
The SAP system itself, referred to by the SAP System Identifier (SID)
An ABAP Central Services (ASCS) instance
A database instance
One or more SAP Application Server instances
Inside the VIS, the SID is the parent resource. Your VIS resource is named after the SID of
your SAP system. Any ASCS, Application Server, or database instances are child
resources of the SID. The child resources are associated with one or more VM resources
outside of the VIS. A standalone system has all three instances mapped to a single VM.
A distributed system has one ASCS and one Database instance, with each mapped to a
VM. High Availability (HA) deployments have the ASCS and Database instances mapped
to multiple VMs to enable HA. A distributed or HA type SAP system can have multiple
Application Server instances linked to their respective VMs.
Through the VIS, you can:
See an overview of the entire SAP system, including the different parts of the VIS.
View the SAP system metadata. For example, properties of ASCS, database, and
Application Server instances; properties of SAP environment details; and properties
of associated VM resources.
Get the latest status and health check for your SAP system.
Start and stop the SAP application tier.
Get quality checks and insights about your SAP system.
Monitor your Azure infrastructure metrics for your SAP system resources. For
example, the CPU percentage used for ASCS and Application Server VMs, or disk
input/output operations per second (IOPS).
Analyze the cost of running your SAP system on Azure (VMs, disks, load balancers).
Next steps
Create a network for a new VIS deployment
Register an existing SAP system in Azure Center for SAP solutions
Common questions about Azure
Center for SAP solutions
FAQ
This article answers commonly asked questions about Azure Center for SAP solutions.
General
What capabilities do you gain with Azure Center
for SAP solutions?
Before the availability of Azure Center for SAP solutions, customers relied on
documentation and frameworks to help them set up system architecture. Then they had
to figure out how to access the VMs to install SAP. With Azure Center for SAP solutions,
customers can deploy SAP by using a guided experience that streamlines their ability to
select and configure resources. When a customer deploys or registers an existing SAP
system, Azure Center for SAP solutions creates a logical representation of the system, or
a Virtual Instance for SAP solutions (VIS). The VIS unlocks management capabilities, such
as the ability to run quality checks and to manage and monitor the system at the SAP
layer as well as the virtual machine (VM) layer.
Azure Center for SAP solutions also offers the ability to customize the names of the
Azure resources deployed through it via API, CLI, and PowerShell. Consider referring to
sample API payload templates to deploy a system with custom resource naming
conventions.
In which Azure regions is Azure Center for SAP solutions available?
Azure Center for SAP solutions is available in the following regions:
West Europe, North Europe, East US, East US 2, West US, West US 2, West US 3,
Central US, South Central US, North Central US, India Central, East Asia, Southeast
Asia, Korea Central, Japan East, Australia East, Australia Central, Canada Central,
Brazil South, UK South, Germany West Central, Sweden Central, France Central,
Switzerland North, Norway East, South Africa North and UAE North.
You can also see the Products available by region page for information about
the availability of Azure Center for SAP solutions in different Azure regions.
Refer to the product documentation to review the complete list of supported and
unsupported scenarios.
Note
If you're registering a system based on the Oracle Linux OS or Oracle Database,
there's limited support for the quality checks, health and status, and
start-stop offerings.
What are the prerequisites for registering an existing SAP system?
Appropriate role access on the Azure subscription or resource groups where you
have the SAP system resources.
Azure Center for SAP solutions administrator and Managed Identity Operator or
equivalent role access.
A User-assigned managed identity that has Azure Center for SAP solutions service
role access on the Compute and Storage resource groups and Reader role
access on the Virtual Network resource group of the SAP system. Azure Center
for SAP solutions service uses this identity to discover your SAP system
resources and register the system as a VIS resource.
Make sure ASCS, Application Server, and Database virtual machines of the SAP
system are in Running state.
The sapcontrol and saphostctrl executables must exist on the ASCS, Application
Server, and Database VMs.
Confirm that the sapstartsrv process is running for all SAP instances, and that
the SAP hostctrl agent is running on all the VMs in the SAP system.
Along with these logical resources, Azure Center for SAP solutions (ACSS) also
creates a Managed resource group with a storage account. ACSS uses this
resource group to enable the SAP management capabilities.
ACSS also creates Azure resources that are required to run the SAP S/4HANA system in
the application resource group. These include virtual machines and storage. Customers
can create a separate transport resource group if required.
Along with these resources, ACSS creates a Managed resource group containing a
storage account and key vault. ACSS uses the Managed resource group to enable the
SAP management capabilities.
The Azure PowerShell Az module is used to create and manage Azure resources
from the command line or in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to deploy infrastructure for an SAP system
with a non-highly-available (non-HA) distributed architecture on Azure with
Azure Center for SAP solutions by using the Az PowerShell module. Alternatively,
you can deploy SAP systems by using the Azure CLI, or in the Azure portal.
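At a high level, the deployment described in this article comes down to a single cmdlet call along the following lines. This is a sketch, not the article's exact command: the cmdlet is from the Az.Workloads module, and the resource group, VIS name (SID), and payload path are placeholders.

```powershell
# Sketch: deploy infrastructure for the SAP system by creating a
# Virtual Instance for SAP solutions (VIS) from a JSON payload file.
# All names and values below are placeholders.
New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName "TestRG" `
    -Name "L46" `
    -Location "eastus" `
    -Environment "NonProd" `
    -SapProduct "S4HANA" `
    -Configuration ".\CreatePayload.json"
```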
After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Azure portal. For example, you can:
View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.
Prerequisites
An Azure subscription.
If you're using Azure Center for SAP solutions for the first time, register the
Microsoft.Workloads resource provider on the subscription in which you're
deploying the SAP system. Use Register-AzResourceProvider, as follows:
PowerShell
Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups
(Compute, Network, Storage). If you want to install SAP software through Azure
Center for SAP solutions, also grant the Reader and Data Access role to the
identity on the storage account where you store the SAP media.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need, to allow Azure Center for SAP solutions to size your SAP system.
If you're not sure, you can also select the VMs yourself. The VIS comprises:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
The steps in this quickstart run the Azure PowerShell cmdlets interactively in
Azure Cloud Shell. To run the commands in the Cloud Shell, select Open Cloud
Shell at the upper-right corner of a code block. Select Copy to copy the code,
and then paste it into Cloud Shell to run it. You can also run the Cloud Shell
from within the Azure portal.
You can also install Azure PowerShell locally to run the cmdlets. The steps in
this article require Azure PowerShell module version 5.4.1 or later. Run
Get-Module -ListAvailable Az to find your installed version. If you need to
upgrade, see Update the Azure PowerShell module.
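If you run the cmdlets locally rather than in Cloud Shell, sign in first. A minimal sketch using standard Az cmdlets:

```powershell
# Check the locally installed Az module version
Get-Module -ListAvailable -Name Az

# Sign in to Azure before running the deployment cmdlets
Connect-AzAccount
```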
The Azure PowerShell Az module is used to create and manage Azure resources
from the command line or in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to install SAP software on infrastructure
deployed for an SAP system. In the previous step, you created infrastructure
for an SAP system with a non-highly-available (non-HA) distributed architecture
on Azure with Azure Center for SAP solutions by using the Az PowerShell module.
After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions. For example, you can:
View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.
Prerequisites
An Azure subscription.
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups
(Compute, Network, Storage).
A storage account where you store the SAP media.
The Reader and Data Access role granted to the User-assigned managed identity
on the storage account where you store the SAP media.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
You should have the SAP installation media available in a storage account. For
more information, see how to download the SAP installation media.
The JSON configuration file that you used to create infrastructure for the SAP
system in the previous step, using PowerShell or the Azure CLI.
Software version: Azure Center for SAP solutions supports the following SAP
software versions: SAP S/4HANA 1909 SPS03, SAP S/4HANA 2020 SPS03, SAP S/4HANA
2021 ISS 00, and SAP S/4HANA 2022.
Storage account ID: This is the resource ID of the storage account where the
BOM file is created.
You can use the sample software installation payload file.
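The installation call mirrors the deployment call from the previous quickstart, with an installation payload instead of a deployment payload. This is a sketch only: the cmdlet is from the Az.Workloads module, and all names and values are placeholders.

```powershell
# Sketch: install the SAP software on the already-deployed infrastructure by
# passing an installation payload file. All names and values are placeholders.
New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName "TestRG" `
    -Name "L46" `
    -Location "eastus" `
    -Environment "NonProd" `
    -SapProduct "S4HANA" `
    -Configuration ".\InstallPayload.json"
```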
Next steps
In this quickstart, you installed SAP software on the deployed infrastructure
in Azure for an SAP system by using Azure Center for SAP solutions. Continue to
the next article to learn how to manage your SAP system on Azure by using the
Virtual Instance for SAP solutions.
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to use the Azure CLI to deploy infrastructure
for an SAP system with a highly available (HA) three-tier distributed
architecture. You also see how to customize resource names for the Azure
infrastructure that gets deployed. Alternatively, you can deploy SAP systems
with customized resource names by using the Azure PowerShell module.
After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Azure portal. For example, you can:
View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.
Prerequisites
An Azure subscription.
If you're using Azure Center for SAP solutions for the first time, register
the Microsoft.Workloads resource provider on the subscription in which you're
deploying the SAP system:
Azure CLI
az provider register --namespace "Microsoft.Workloads"
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups
(Compute, Network, Storage). If you want to install SAP software through Azure
Center for SAP solutions, also grant the Reader and Data Access role to the
identity on the storage account where you store the SAP media.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need, to allow Azure Center for SAP solutions to size your SAP system.
If you're not sure, you can also select the VMs yourself. The VIS comprises:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
1. Select the Cloud Shell button on the menu bar at the upper right in the
Azure portal.
2. Select the Copy button on a code block (or command block) to copy the code or
command.
3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V
on Windows and Linux, or by selecting Cmd+Shift+V on macOS.
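The deployment itself comes down to a single CLI call along these lines. This is a sketch, not the article's exact command: it assumes the workloads Azure CLI extension, and the resource group, VIS name (SID), and payload path are placeholders.

```azurecli
# Sketch: deploy infrastructure for the SAP system from a JSON payload.
# All names and values are placeholders.
az workloads sap-virtual-instance create \
    --resource-group "TestRG" \
    --name "L46" \
    --location "eastus" \
    --environment NonProd \
    --sap-product S4HANA \
    --configuration ./CreatePayload.json
```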
Next steps
In this quickstart, you deployed infrastructure in Azure for an SAP system using Azure
Center for SAP solutions. You used custom resource names for the infrastructure.
Continue to the next article to learn how to install SAP software on the infrastructure
deployed.
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to install SAP software on infrastructure
deployed for an SAP system. In the previous step, you created infrastructure
for an SAP system with a highly available (HA) distributed architecture on
Azure with Azure Center for SAP solutions by using the Azure CLI. You also
provided customized resource names for the deployed Azure resources.
After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions. For example, you can:
View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.
Prerequisites
An Azure subscription.
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity that has Azure Center for SAP solutions
service role access on the subscription, or at least on all resource groups
(Compute, Network, Storage).
A storage account where you store the SAP media.
The Reader and Data Access role granted to the User-assigned managed identity
on the storage account where you store the SAP media.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
You should have the SAP installation media available in a storage account. For
more information, see how to download the SAP installation media.
The JSON configuration file that you used to create infrastructure for the SAP
system in the previous step, using PowerShell or the Azure CLI.
As you're installing a Highly Available (HA) SAP system, get the Service Principal
identifier (SPN ID) and password to authorize the Azure fence agent (fencing
device) against Azure resources. For more information, see Use Azure CLI to create
an Azure AD app and configure it to access Media Services API.
For an example, see the Red Hat documentation for Creating an Azure Active
Directory Application .
To avoid frequent password expiry, use the Azure Command-Line Interface
(Azure CLI) to create the Service Principal identifier and password instead of the
Azure portal.
1. Select the Cloud Shell button on the menu bar at the upper right in the
Azure portal.
2. Select the Copy button on a code block (or command block) to copy the code or
command.
3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V
on Windows and Linux, or by selecting Cmd+Shift+V on macOS.
Software version: Azure Center for SAP solutions supports three SAP software
versions: SAP S/4HANA 1909 SPS03, SAP S/4HANA 2020 SPS03, and SAP S/4HANA 2021
ISS 00.
Storage account ID: This is the resource ID of the storage account where the
BOM file is created.
As you are deploying an HA system, you need to provide the High Availability
software configuration with the following two inputs:
Fencing Client ID: The client identifier for the STONITH Fencing Agent service
principal
Fencing Client Password: The password for the Fencing Agent service
principal
You can use the sample software installation payload file.
Note: The commands for infrastructure deployment and software installation are
the same, but the payload files for the two are different.
Next steps
In this quickstart, you installed SAP software on the deployed infrastructure
in Azure for an SAP system with a highly available architecture type by using
Azure Center for SAP solutions. You also noted that the resource names were
customized for the system while deploying infrastructure. Continue to the next
article to learn how to manage your SAP system on Azure by using the Virtual
Instance for SAP solutions.
The Azure PowerShell Az module is used to create and manage Azure resources from the command line or in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to register an existing SAP system running on Azure
with Azure Center for SAP solutions using the Az PowerShell module. Alternatively, you can
register systems using the Azure CLI, or in the Azure portal.
After you register an SAP system with Azure Center for SAP solutions, you can use its
visualization, management and monitoring capabilities through the Azure portal.
This quickstart requires the Az PowerShell module version 1.0.0 or later. Run Get-Module
-ListAvailable Az to find the version. If you need to install or upgrade, see Install Azure
PowerShell module.
Grant access to Azure Storage accounts from the virtual network where the SAP
system exists. Use one of these options:
Allow outbound internet connectivity for the VMs.
Use a Storage service tag to allow connectivity to any Azure storage account
from the VMs.
Use a Storage service tag with regional scope to allow storage account
connectivity to the Azure storage accounts in the same region as the VMs.
Allowlist the region-specific IP addresses for Azure Storage.
The first time you use Azure Center for SAP solutions, you must register the
Microsoft.Workloads Resource Provider in the subscription where you have the
SAP system with Register-AzResourceProvider, as follows:
PowerShell
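For example, a minimal registration call looks like this (the provider namespace for Azure Center for SAP solutions is Microsoft.Workloads):

```powershell
# Register the Microsoft.Workloads resource provider in the active subscription
Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"
```

Registration can take a few minutes; you can check progress with Get-AzResourceProvider -ProviderNamespace "Microsoft.Workloads".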
A User-assigned managed identity which has Azure Center for SAP solutions
service role access on the Compute resource group and Reader role access on the
Virtual Network resource group of the SAP system. Azure Center for SAP solutions
service uses this identity to discover your SAP system resources and register the
system as a VIS resource.
Make sure ASCS, Application Server and Database virtual machines of the SAP
system are in Running state.
The sapcontrol and saphostctrl executables must exist on the ASCS, App server, and Database VMs.
File path on Linux VMs: /usr/sap/hostctrl/exe
File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
Make sure the sapstartsrv process is running for all SAP instances and for the SAP hostctrl agent on all the VMs in the SAP system.
To start the hostctrl sapstartsrv on Linux VMs, use: hostexecstart -start
To start an instance sapstartsrv, use: sapcontrol -nr <instanceNr> -function StartService <SID>
To check the status of the hostctrl sapstartsrv on Windows VMs, use: C:\Program Files\SAP\hostctrl\exe\saphostexec -status
For successful discovery and registration of the SAP system, ensure there's network connectivity between the ASCS, App, and DB VMs. The ping command for the App instance hostname must succeed from the ASCS VM, and ping for the Database hostname must succeed from the App server VM.
In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the right values configured for the discovery and registration of Database instance details.
PowerShell
New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName 'TestRG' `
    -Name L46 `
    -Location eastus `
    -Environment 'NonProd' `
    -SapProduct 'S4HANA' `
    -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
    -Tag @{k1 = "v1"; k2 = "v2"} `
    -ManagedResourceGroupName "acss-L46-rg" `
    -ManagedRgStorageAccountName 'acssstoragel46' `
    -IdentityType 'UserAssigned' `
    -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI'= @{}}
| SAP application location | Azure Center for SAP solutions service location |
| --- | --- |
| East US | East US |
| East US 2 | East US 2 |
| West US | West US 3 |
| West US 2 | West US 2 |
| West US 3 | West US 3 |
| UK South | UK South |
2. Once you trigger the registration process, you can view its status by getting the
status of the Virtual Instance for SAP solutions resource that gets deployed as part
of the registration process.
PowerShell
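A minimal sketch of that status check, reusing the example resource group TestRG and SID L46 from the create command above; the provisioning state of the returned VIS resource reflects the registration progress:

```powershell
# Fetch the Virtual Instance for SAP solutions resource and inspect its provisioning state
Get-AzWorkloadsSapVirtualInstance -ResourceGroupName 'TestRG' -Name L46
```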
Next steps
Monitor SAP system from Azure portal
Manage a VIS
Quickstart: Register an existing SAP
system with Azure Center for SAP
solutions with CLI
Article • 07/21/2023
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. This article shows you how to register an existing SAP system running on Azure
with Azure Center for SAP solutions using the Azure CLI. Alternatively, you can register systems using Azure PowerShell or the Azure portal. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management, and monitoring capabilities through the Azure portal.
This quickstart enables you to register an existing SAP system with Azure Center for SAP
solutions.
Grant access to Azure Storage accounts from the virtual network where the SAP
system exists. Use one of these options:
Allow outbound internet connectivity for the VMs.
Use a Storage service tag to allow connectivity to any Azure storage account
from the VMs.
Use a Storage service tag with regional scope to allow storage account
connectivity to the Azure storage accounts in the same region as the VMs.
Allowlist the region-specific IP addresses for Azure Storage.
The first time you use Azure Center for SAP solutions, you must register the
Microsoft.Workloads Resource Provider in the subscription where you have the
SAP system with Register-AzResourceProvider, as follows:
Azure CLI
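With the Azure CLI, the equivalent registration command is az provider register (the namespace is Microsoft.Workloads):

```azurecli
az provider register --namespace "Microsoft.Workloads"
```

You can check progress with az provider show --namespace "Microsoft.Workloads" --query registrationState.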
A User-assigned managed identity which has Azure Center for SAP solutions
service role access on the Compute resource group and Reader role access on the
Virtual Network resource group of the SAP system. Azure Center for SAP solutions
service uses this identity to discover your SAP system resources and register the
system as a VIS resource.
Make sure ASCS, Application Server and Database virtual machines of the SAP
system are in Running state.
The sapcontrol and saphostctrl executables must exist on the ASCS, App server, and Database VMs.
File path on Linux VMs: /usr/sap/hostctrl/exe
File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
Make sure the sapstartsrv process is running for all SAP instances and for the SAP hostctrl agent on all the VMs in the SAP system.
To start the hostctrl sapstartsrv on Linux VMs, use: hostexecstart -start
To start an instance sapstartsrv, use: sapcontrol -nr <instanceNr> -function StartService <SID>
To check the status of the hostctrl sapstartsrv on Windows VMs, use: C:\Program Files\SAP\hostctrl\exe\saphostexec -status
For successful discovery and registration of the SAP system, ensure there's network connectivity between the ASCS, App, and DB VMs. The ping command for the App instance hostname must succeed from the ASCS VM, and ping for the Database hostname must succeed from the App server VM.
In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the right values configured for the discovery and registration of Database instance details.
Azure CLI
az workloads sap-virtual-instance create -g <Resource Group Name> \
    -n C36 \
    --environment NonProd \
    --sap-product s4hana \
    --central-server-vm <Virtual Machine resource ID> \
    --identity "{type:UserAssigned,userAssignedIdentities:{<Managed Identity resource ID>:{}}}" \
    --managed-rg-name "acss-C36"
-g specifies the name of the existing resource group into which you want the Virtual Instance for SAP solutions resource to be deployed. It can be the same resource group that contains the Compute and Storage resources of your SAP system, or a different one.
-n specifies the SAP system ID (SID) that you're registering with Azure Center for SAP solutions.
--environment specifies the type of SAP environment you're registering. Valid values are NonProd and Prod.
--sap-product specifies the type of SAP product you're registering. Valid values are S4HANA, ECC, and Other.
--managed-rg-name specifies the name of the managed resource group that the ACSS service deploys in your subscription. This resource group is unique for each SAP system (SID) you register. If you don't specify a name, the ACSS service sets a name with the naming convention 'mrg-{SID}-{random string}'.
2. Once you trigger the registration process, you can view its status by getting the
status of the Virtual Instance for SAP solutions resource that gets deployed as part
of the registration process.
Azure CLI
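A minimal sketch of that status check, reusing the example SID C36 from the create command above:

```azurecli
# Show the VIS resource; its provisioning state reflects the registration progress
az workloads sap-virtual-instance show -g <Resource Group Name> -n C36
```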
Next steps
Monitor SAP system from Azure portal
Manage a VIS
Quickstart: Start and stop SAP systems
from Azure Center for SAP solutions
with PowerShell
Article • 05/23/2023
The Azure PowerShell Az module is used to create and manage Azure resources from the command line or in scripts.
In this how-to guide, you'll learn to start and stop your SAP systems through the Virtual
Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions using
PowerShell.
Through the Azure PowerShell module, you can start and stop:
The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS)
and Application Server instances.
Individual SAP instances, which include Central Services and Application server
instances.
HANA Database
You can start and stop instances in the following types of deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.
Prerequisites
Ensure the following prerequisites before you use the Start or Stop capability on the Virtual Instance for SAP solutions resource.
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions as a Virtual Instance for SAP solutions resource.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
The Stop operation for the HANA Database can only be initiated when the cluster maintenance mode is in Disabled status. Similarly, the Start operation can only be initiated when the cluster maintenance mode is in Enabled status.
Option 1:
Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to start.
PowerShell
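A minimal sketch, assuming the same example resource group rg1 and VIS name DB0 that appear in the Option 2 resource ID:

```powershell
# Start the SAP system represented by the VIS resource
Start-AzWorkloadsSapVirtualInstance -ResourceGroupName rg1 -Name DB0
```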
Option 2:
Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP
solutions resource you intend to start.
PowerShell
Start-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
Option 1:
Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to stop.
PowerShell
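A minimal sketch, assuming the same example resource group rg1 and VIS name DB0 that appear in the Option 2 resource ID:

```powershell
# Stop the SAP system represented by the VIS resource
Stop-AzWorkloadsSapVirtualInstance -ResourceGroupName rg1 -Name DB0
```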
Option 2:
Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP
solutions resource you intend to stop.
PowerShell
Stop-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
Next steps
Monitor SAP system from the Azure portal
Quickstart: Start and stop SAP systems
from Azure Center for SAP solutions
with CLI
Article • 05/23/2023
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.
In this how-to guide, you'll learn how to start and stop your SAP systems through the
Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions using
the Azure CLI.
The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS)
and Application Server instances.
Individual SAP instances, which include Central Services and Application server
instances.
HANA Database
You can start and stop instances in the following types of deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.
Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions as a Virtual Instance for SAP solutions resource.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
The Stop operation for the HANA Database can only be initiated when the cluster maintenance mode is in Disabled status. Similarly, the Start operation can only be initiated when the cluster maintenance mode is in Enabled status.
Option 1:
Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to start.
Azure CLI
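A minimal sketch, with placeholder resource group and VIS names:

```azurecli
# Start the SAP system represented by the VIS resource
az workloads sap-virtual-instance start -g <Resource Group Name> -n <VIS Name>
```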
Option 2:
Use the id parameter and pass the resource ID of the Virtual Instance for SAP solutions
resource you intend to start.
Azure CLI
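A minimal sketch, using the example resource ID from the PowerShell section:

```azurecli
az workloads sap-virtual-instance start --id /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
```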
Option 1:
Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to
identify the system you intend to stop.
Azure CLI
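A minimal sketch, with placeholder resource group and VIS names:

```azurecli
# Stop the SAP system represented by the VIS resource
az workloads sap-virtual-instance stop -g <Resource Group Name> -n <VIS Name>
```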
Option 2:
Use the id parameter and pass the resource ID of the Virtual Instance for SAP solutions
resource you intend to stop.
Azure CLI
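A minimal sketch, using the example resource ID from the PowerShell section:

```azurecli
az workloads sap-virtual-instance stop --id /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
```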
Next steps
Monitor SAP system from the Azure portal
Tutorial: Use Azure CLI to create
infrastructure for a distributed highly
available (HA) SAP system with Azure
Center for SAP solutions with
customized resource names
Article • 05/15/2023
Azure Center for SAP solutions enables you to deploy and manage SAP systems on
Azure. After you deploy infrastructure and install SAP software with Azure Center for SAP
solutions, you can use its visualization, management and monitoring capabilities through
the Virtual Instance for SAP solutions
Introduction
The Azure CLI is used to create and manage Azure resources from the command line or
in scripts.
This tutorial shows you how to use Azure CLI to deploy infrastructure for an SAP system
with highly available (HA) Three-tier Distributed architecture. You also see how to
customize resource names for the Azure infrastructure that gets deployed.
Prerequisites
An Azure subscription.
If you're using Azure Center for SAP solutions for the first time, Register the
Microsoft.Workloads Resource Provider on the subscription in which you're
deploying the SAP system:
Azure CLI
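For example:

```azurecli
# Register the Microsoft.Workloads resource provider in the active subscription
az provider register --namespace "Microsoft.Workloads"
```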
An Azure account with Azure Center for SAP solutions administrator and
Managed Identity Operator role access to the subscriptions and resource groups
in which you create the Virtual Instance for SAP solutions (VIS) resource.
A User-assigned managed identity which has Azure Center for SAP solutions service role access on the subscription or at least on all relevant resource groups (Compute, Network, Storage). If you wish to install SAP software through Azure Center for SAP solutions, also provide the Reader and Data Access role to the identity on the storage account where you store the SAP media.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VMs. A deployed SAP system consists of:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
1. Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.
2. Select the Copy button on a code block (or command block) to copy the code or command.
3. Paste the code or command into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by selecting Cmd+Shift+V on macOS.
Azure CLI
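For example, you can list the VM SKUs supported for a given deployment type with az workloads sap-supported-sku. This is a sketch; the exact flag names can vary by CLI extension version, so check az workloads sap-supported-sku --help:

```azurecli
# List VM SKUs supported for a Three-tier HANA deployment in East US
az workloads sap-supported-sku --app-location "eastus" \
    --environment "NonProd" \
    --sap-product "S4HANA" \
    --deployment-type "ThreeTier" \
    --database-type "HANA"
```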
You can use any of these SKUs recommended for App tier and Database tier when
deploying infrastructure in the later steps. Or you can use the recommended SKUs by
Azure Center for SAP solutions in the next step.
Check for recommended SKUs for SAPS and memory requirements for your SAP system
Use az workloads sap-sizing-recommendation to get SAP system sizing recommendations by providing the SAPS input for the application tier and the memory required for the database tier.
Azure CLI
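A sketch of such a call; the SAPS and memory values are illustrative, and the flag names for them (for example --saps and --db-memory) can vary by CLI extension version, so verify with az workloads sap-sizing-recommendation --help:

```azurecli
# Get sizing recommendations for a Three-tier HANA system
az workloads sap-sizing-recommendation --app-location "eastus" \
    --environment "NonProd" \
    --sap-product "S4HANA" \
    --deployment-type "ThreeTier" \
    --database-type "HANA" \
    --saps 30000 \
    --db-memory 1024
```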
You can download the sample payload file and replace the resource names and any other parameters as needed.
Azure CLI
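A sketch of the deployment call, assuming the edited sample payload is saved locally as payload.json and passed through the --configuration parameter:

```azurecli
# Deploy infrastructure for the SAP system described in the payload file
az workloads sap-virtual-instance create -g <Resource Group Name> \
    -n <SID> \
    --environment NonProd \
    --sap-product s4hana \
    --configuration payload.json \
    --identity "{type:UserAssigned,userAssignedIdentities:{<Managed Identity resource ID>:{}}}"
```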
This will deploy your SAP system and the Virtual instance for SAP solutions (VIS)
resource representing your SAP system in Azure.
Cleanup
If you no longer wish to use the VIS resource, you can delete it by using az workloads sap-virtual-instance delete.
Azure CLI
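For example, with placeholder names:

```azurecli
# Delete the VIS resource; deployed infrastructure (VMs, disks) is left in place
az workloads sap-virtual-instance delete -g <Resource Group Name> -n <VIS Name>
```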
This command only deletes the VIS and other resources created by Azure Center for SAP solutions. It doesn't delete the deployed infrastructure, such as VMs and disks.
Next steps
In this tutorial, you deployed infrastructure in Azure for an SAP system using Azure
Center for SAP solutions. You used custom resource names for the infrastructure.
Continue to the next article to learn how to install SAP software on the infrastructure
deployed.
Azure role-based access control (Azure RBAC) enables granular access management for
Azure. You can use Azure RBAC to manage Virtual Instance for SAP solutions resources
within Azure Center for SAP solutions. For example, you can separate duties within your
team and grant only the amount of access that users need to perform their jobs.
There are Azure built-in roles for Azure Center for SAP solutions, or you can create
Azure custom roles for more control. Azure Center for SAP solutions provides the
following built-in roles to deploy and manage SAP systems on Azure:
The Azure Center for SAP solutions administrator role has the required
permissions for a user to deploy infrastructure, install SAP, and manage SAP
systems from Azure Center for SAP solutions. The role allows users to:
Deploy infrastructure for a new SAP system
Install SAP software
Register existing SAP systems as a Virtual Instance for SAP solutions (VIS)
resource.
View the health and status of SAP systems.
Perform operations such as Start and Stop on the VIS resource.
Do all possible actions with Azure Center for SAP solutions, including the
deletion of the VIS resource.
The Azure Center for SAP solutions service role is intended for use by the user-
assigned managed identity. The Azure Center for SAP solutions service uses this
identity to deploy and manage SAP systems. This role has permissions to support
the deployment and management capabilities in Azure Center for SAP solutions.
The Azure Center for SAP solutions reader role has permissions to view all VIS
resources.
7 Note
To use an existing user-assigned managed identity for deploying a new SAP system
or registering an existing system, the user must also have the Managed Identity
Operator role. This role is required to assign a user-assigned managed identity to
the Virtual Instance for SAP solutions resource.
7 Note
If you're creating a new user-assigned managed identity when you deploy a new
SAP system or register an existing system, the user must also have the Managed
Identity Contributor and Managed Identity Operator roles. These roles are
required to create a user-assigned identity, make necessary role assignments to it
and assign it to the VIS resource.
Microsoft.Workloads/sapVirtualInstances/write
Microsoft.Workloads/Operations/read
Microsoft.Workloads/Locations/OperationStatuses/read
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action
Microsoft.Resources/subscriptions/resourcegroups/deployments/read
Microsoft.Resources/subscriptions/resourcegroups/deployments/write
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Network/virtualNetworks/subnets/write
Minimum permissions for users
Microsoft.Compute/sshPublicKeys/write
Microsoft.Compute/sshPublicKeys/read
Microsoft.Compute/sshPublicKeys/*/generateKeyPair/action
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/blobServices/read
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Compute/disks/read
Microsoft.Compute/disks/write
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/write
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/extensions/delete
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Compute/availabilitySets/read
Microsoft.Compute/availabilitySets/write
Microsoft.Network/loadBalancers/read
Microsoft.Network/loadBalancers/write
Microsoft.Network/loadBalancers/backendAddressPools/read
Minimum permissions for user-assigned managed identities
Microsoft.Network/loadBalancers/backendAddressPools/write
Microsoft.Network/loadBalancers/backendAddressPools/join/action
Microsoft.Network/loadBalancers/frontendIPConfigurations/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action
Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/write
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkInterfaces/write
Microsoft.Network/networkInterfaces/join/action
Microsoft.Network/networkInterfaces/ipconfigurations/read
Microsoft.Network/networkInterfaces/ipconfigurations/join/action
Microsoft.Network/privateEndpoints/read
Microsoft.Network/privateEndpoints/write
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Network/virtualNetworks/subnets/joinLoadBalancer/action
Microsoft.Network/virtualNetworks/subnets/join/action
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/listAccountSas/action
Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action
Microsoft.Storage/storageAccounts/blobServices/read
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/write
Minimum permissions for user-assigned managed identities
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Storage/storageAccounts/fileServices/shares/write
Microsoft.Workloads/sapVirtualInstances/write
Microsoft.Workloads/sapVirtualInstances/applicationInstances/read
Microsoft.Workloads/sapVirtualInstances/centralInstances/read
Microsoft.Workloads/sapVirtualInstances/databaseInstances/read
Microsoft.Workloads/sapVirtualInstances/read
Microsoft.Workloads/Operations/read
Microsoft.Workloads/Locations/OperationStatuses/read
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/blobServices/read
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Compute/disks/read
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/disks/write
Microsoft.Compute/virtualMachines/write
Microsoft.Compute/virtualMachines/extensions/delete
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Network/loadBalancers/read
Microsoft.Network/loadBalancers/backendAddressPools/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkInterfaces/ipconfigurations/read
Microsoft.Network/privateEndpoints/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/listAccountSas/action
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action
Microsoft.Storage/storageAccounts/write
Minimum permissions for user-assigned managed identities
Microsoft.Storage/storageAccounts/listAccountSas/action
Microsoft.Storage/storageAccounts/fileServices/write
Microsoft.Storage/storageAccounts/fileServices/shares/write
Microsoft.Workloads/sapvirtualInstances/*/read
Microsoft.Workloads/sapVirtualInstances/*/write
Microsoft.Workloads/Locations/*/read
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/read
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/disks/read
Microsoft.Compute/disks/write
Microsoft.Compute/virtualMachines/write
Minimum permissions for user-assigned managed identities
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Network/loadBalancers/read
Microsoft.Network/loadBalancers/backendAddressPools/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/read
Microsoft.Network/loadBalancers/frontendIPConfigurations/loadBalancerPools/read
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkInterfaces/ipconfigurations/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/read
Microsoft.Resources/subscriptions/resourcegroups/deployments/*
Microsoft.Resources/tags/*
Microsoft.Workloads/sapVirtualInstances/applicationInstances/read
Microsoft.Workloads/sapVirtualInstances/centralInstances/read
Minimum permissions for users
Microsoft.Workloads/sapVirtualInstances/databaseInstances/read
Microsoft.Workloads/sapVirtualInstances/read
Microsoft.Workloads/Operations/read
Microsoft.Workloads/Locations/OperationStatuses/read
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSizingRecommendations/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getSapSupportedSku/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getDiskConfigurations/action
Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action
Microsoft.Insights/Metrics/Read
Microsoft.ResourceHealth/AvailabilityStatuses/read
Microsoft.Advisor/configurations/read
Microsoft.Advisor/recommendations/read
Microsoft.Workloads/sapVirtualInstances/start/action
Built-in roles for user-assigned managed identities
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/stop/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/centralInstances/start/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/centralInstances/stop/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/applicationInstances/start/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/applicationInstances/stop/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/databaseInstances/start/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Minimum permissions for user-assigned managed identities
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Workloads/sapVirtualInstances/databaseInstances/stop/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/extensions/write
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Consumption/*/read
Minimum permissions for users
Microsoft.CostManagement/*/read
Microsoft.Billing/billingPeriods/read
Microsoft.Resources/subscriptions/read
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Billing/billingProperty/read
Contributor
Microsoft.Workloads/sapVirtualInstances/delete
Microsoft.Workloads/sapVirtualInstances/read
Microsoft.Workloads/sapVirtualInstances/applicationInstances/read
Microsoft.Workloads/sapVirtualInstances/centralInstances/read
Microsoft.Workloads/sapVirtualInstances/databaseInstances/read
This article describes reliability support in Azure Center for SAP solutions, covering
both regional resiliency with availability zones and cross-region resiliency with
customer-enabled disaster recovery. For a more detailed overview of reliability in
Azure, see Azure reliability.
Azure Center for SAP solutions is an end-to-end solution that enables you to create and
run SAP systems as a unified workload on Azure and provides a more seamless
foundation for innovation. You can take advantage of the management capabilities for
both new and existing Azure-based SAP systems.
There are three types of Azure services that support availability zones: zonal, zone-
redundant, and always-available services. You can learn more about these types of
services and how they promote resiliency in Azure services with availability zone
support.
Azure Center for SAP solutions supports zone redundancy. When you create a new SAP
system through Azure Center for SAP solutions, you choose the Compute availability
option for the infrastructure being deployed. The service itself is zone-redundant by
default, and you can choose whether to deploy the SAP system with zone redundancy
based on your requirements. Learn more about deployment type options for SAP systems here.
Regional availability
When deploying SAP systems using Azure Center for SAP solutions, you can deploy with
zone redundancy in the following regions:
This section explains how to deploy an SAP system with zone redundancy from the Azure
portal. You can also use the PowerShell and CLI interfaces to deploy a zone-redundant
SAP system with Azure Center for SAP solutions. Learn more about deploying a new SAP
system using Azure Center for SAP solutions.
1. Open the Azure portal and navigate to the Azure Center for SAP solutions page.
2. On the Basics tab, pay special attention to the fields in the table (also highlighted
in the screenshot), which have specific requirements for zone redundancy.
The Azure Center for SAP solutions service is a zone-redundant service with no paired
region, so the service may experience downtime during a region outage, and there will
be no Microsoft-initiated failover. This article explains some of the strategies that
you can use to achieve cross-region resiliency for Virtual Instance for SAP solutions
resources with customer-enabled disaster recovery. It has detailed steps for you to
follow when the region in which your Virtual Instance for SAP solutions resource
exists is down.
You must configure disaster recovery for SAP systems that you deploy through Azure
Center for SAP solutions by following the Disaster recovery overview and infrastructure
guidelines for SAP workload.
In the event of a region outage, customers are notified. This article has the steps
you can follow to get the Virtual Instance for SAP solutions resources up and running in
a different region.
Case # | ACSS Service Region | SAP Workload Region | Scenario | Mitigation Steps
1 | A (Down) | B | ACSS service region is down | Register the workload with the ACSS service available in another region using PowerShell or CLI, which allows you to select an available service location.
2. If the Azure Center for SAP solutions service is down (cases 1 and 3 in the preceding
section) in the region where your Virtual Instance for SAP solutions resource exists,
register your SAP system with another available region.
Azure PowerShell
New-AzWorkloadsSapVirtualInstance `
    -ResourceGroupName 'TestRG' `
    -Name L46 `
    -Location eastus `
    -Environment 'NonProd' `
    -SapProduct 'S4HANA' `
    -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
    -Tag @{k1 = "v1"; k2 = "v2"} `
    -ManagedResourceGroupName "acss-L46-rg" `
    -ManagedRgStorageAccountName 'acssstoragel46' `
    -IdentityType 'UserAssigned' `
    -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI' = @{}}
3. The following table lists the locations where the Azure Center for SAP solutions
service is available. We recommend choosing a region within the same geography as
your SAP infrastructure resources.
East US
East US 2
West US 3
West Europe
North Europe
Australia East
East Asia
Central India
Next steps
Deploy a new SAP system with Azure Center for SAP solutions
Prepare network for infrastructure
deployment
Article • 10/12/2023
In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4HANA
infrastructure using Azure Center for SAP solutions. This article provides general
guidance about creating a virtual network. Your individual environment and use case will
determine how you need to configure your own network settings for use with a Virtual
Instance for SAP (VIS) resource.
If you have an existing network that you're ready to use with Azure Center for SAP
solutions, go to the deployment guide instead of following this guide.
Prerequisites
An Azure subscription.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
It's recommended to have spare IP addresses in the subnet or subnets before you
begin deployment. For example, a /26 subnet mask provides far more usable
addresses than a /29 .
The names AzureFirewallSubnet, AzureFirewallManagementSubnet, AzureBastionSubnet,
and GatewaySubnet are reserved within Azure. Don't use them as subnet names.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need, to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VM SKUs directly. The deployment consists of:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
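To see why the prerequisites above recommend a /26 mask over a /29: Azure reserves five IP addresses in every subnet (the network and broadcast addresses, the default gateway, and two addresses for Azure DNS), so small subnets leave very few usable addresses. A quick sketch:

```python
import ipaddress

# Azure reserves 5 IP addresses in every subnet: the network and broadcast
# addresses, the default gateway, and two addresses mapped to Azure DNS.
AZURE_RESERVED_IPS = 5

def usable_ips(cidr: str) -> int:
    """Number of addresses a deployment can actually use in an Azure subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_IPS

print(usable_ips("10.0.0.0/26"))  # 59
print(usable_ips("10.0.0.0/29"))  # 3
```

A /29 subnet leaves only three usable addresses, which is quickly exhausted by even a small clustered deployment.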
Create network
You must create a network for the infrastructure deployment on Azure. Make sure to
create the network in the same region where you want to deploy the SAP system. The
network includes the following components:
A virtual network
Subnets for the Application Servers and Database Servers. Your configuration
needs to allow communication between these subnets.
Azure network security groups
Route tables
Firewalls (or NAT Gateway)
Connect network
At a minimum, the network needs to have outbound internet connectivity for successful
infrastructure deployment and software installation. The application and database
subnets also need to be able to communicate with each other.
If internet connectivity isn't possible, allowlist the IP addresses for the following areas:
Then, make sure all resources within the virtual network can connect to each other. For
example, configure a network security group to allow resources within the virtual
network to communicate by listening on all ports.
If it's not possible to allow the resources within the virtual network to connect to each
other, allow connections between the application and database subnets, and open
important SAP ports in the virtual network instead.
4. Get a list of IP addresses to configure in the network and firewall by running pint
microsoft servers --json --region with the appropriate Azure region parameter.
5. Allowlist all these IP addresses on the firewall or network security group where
you're planning to attach the subnets.
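Once you have the JSON output from the step above, the IP addresses can be pulled out programmatically. The payload below is a simplified, hypothetical excerpt, not pint's documented schema; inspect the actual output of pint before relying on any particular shape:

```python
import json

# Hypothetical excerpt of JSON output listing server IPs; the real schema
# produced by `pint microsoft servers --json` may differ.
raw = '{"servers": [{"ip": "13.68.0.1", "region": "eastus"}, {"ip": "40.71.0.2", "region": "eastus"}]}'

def extract_ips(payload: str) -> list:
    """Collect the IP addresses to allowlist from a JSON server list."""
    return [server["ip"] for server in json.loads(payload)["servers"]]

print(extract_ips(raw))  # ['13.68.0.1', '40.71.0.2']
```

The resulting list can then be fed into your firewall or network security group rules.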
If you're using Red Hat for the VMs, allowlist the Red Hat endpoints as needed. The
default allowlist is the Azure Global IP addresses. Depending on your use case, you
might also need to allowlist Azure US Government or Azure Germany IP addresses.
Configure all IP addresses from your list on the firewall or the network security group
where you want to attach the subnets.
The storage account where you're storing the SAP media that is required during
software installation.
The storage account created by Azure Center for SAP solutions in a managed
resource group, which Azure Center for SAP solutions also owns and manages.
Open the SAP ports listed in the following table. Replace the placeholder values ( xx ) in
applicable ports with your SAP instance number. For example, if your SAP instance
number is 01 , then 32xx becomes 3201 .
Host Agent | 1128, 1129 | Yes | Yes | HTTP/S port for the SAP host agent.
Control agent | 5xx13, 5xx14 | Yes | No | Stop, start, and get status of SAP system.
HANA XS engine | 43xx, 80xx | Yes | Yes | HTTP/S request port for web content.
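The placeholder substitution described above (replacing xx with your two-digit SAP instance number) can be sketched as:

```python
def resolve_port(pattern: str, instance_number: str) -> str:
    """Replace the 'xx' placeholder in an SAP port pattern with the
    two-digit SAP instance number."""
    return pattern.replace("xx", instance_number)

# For SAP instance number 01:
print([resolve_port(p, "01") for p in ["32xx", "5xx13", "5xx14", "43xx", "80xx"]])
# ['3201', '50113', '50114', '4301', '8001']
```

For example, with instance number 01, the pattern 32xx resolves to port 3201, as described in the text.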
b. Create a rule to allowlist RHEL or SUSE endpoints. Make sure to allow all source
IP addresses ( * ), set the source port to Any, allow the destination IP addresses
for RHEL or SUSE, and set the destination port to Any.
c. Create a rule to allow service tags. Make sure to allow all source IP addresses
( * ), set the destination type to Service tag. Then, allow the tags
Microsoft.Storage, Microsoft.KeyVault, AzureResourceManager and
Microsoft.AzureActiveDirectory.
b. Set the IP address to the firewall's IP address, which you can find on the
overview of the firewall resource in the Azure portal.
5. Update the subnets for the application and database tiers to use the new route
table.
6. If you're using a network security group with the virtual network, add the following
inbound rule. This rule provides connectivity between the subnets for the
application and database tiers.
7. If you're using a network security group instead of a firewall, add outbound rules
to allow installation.
Next steps
Deploy S/4HANA infrastructure
Deploy S/4HANA infrastructure with
Azure Center for SAP solutions
Article • 01/29/2024
In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in Azure Center
for SAP solutions. There are three deployment options: distributed with High Availability
(HA), distributed non-HA, and single server.
Prerequisites
An Azure subscription
Register the Microsoft.Workloads Resource Provider on the subscription in which
you are deploying the SAP system.
An Azure account with Contributor role access to the subscriptions and resource
groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
A user-assigned managed identity that has Contributor role access on the
subscription, or at least on all related resource groups (compute, network, storage).
If you wish to install the SAP software through Azure Center for SAP solutions, also
grant the Storage Blob Data Reader and Reader and Data Access roles to the identity
on the storage account where you store the SAP media.
A network set up for your infrastructure deployment.
Availability of at least four cores of either the Standard_D4ds_v4 or Standard_E4s_v3
SKU, which are used during infrastructure deployment and software installation.
Review the quotas for your Azure subscription. If the quotas are low, you might
need to create a support request before creating your infrastructure deployment.
Otherwise, you might experience deployment failures or an Insufficient quota
error.
Note the SAP Application Performance Standard (SAPS) and database memory size
that you need, to allow Azure Center for SAP solutions to size your SAP system. If
you're not sure, you can also select the VM SKUs directly. The deployment consists of:
A single or cluster of ASCS VMs, which make up a single ASCS instance in the
VIS.
A single or cluster of Database VMs, which make up a single Database instance
in the VIS.
A single Application Server VM, which makes up a single Application instance in
the VIS. Depending on the number of Application Servers being deployed or
registered, there can be multiple application instances.
Deployment types
There are three deployment options that you can select for your infrastructure,
depending on your use case.
Supported software
Azure Center for SAP solutions supports the following SAP software versions: S/4HANA
1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 and S/4HANA 2022 ISS 00.
The following operating system (OS) software versions are compatible with these SAP
software versions:
Publisher | Image and Image Version | Supported SAP Software Version
Red Hat | Red Hat Enterprise Linux 8.6 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00
Red Hat | Red Hat Enterprise Linux 8.4 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00
Red Hat | Red Hat Enterprise Linux 8.2 for SAP Applications - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00
SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 15 SP4 - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00
SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 15 SP3 - x64 Gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00, S/4HANA 2022 ISS 00
SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP5 - x64 Gen2 latest | S/4HANA 1909 SPS 03
SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP4 - x64 Gen2 latest | S/4HANA 1909 SPS 03
You can use latest if you want to use the latest image rather than a specific older
version. If the latest image version is newly released in the marketplace and has an
unforeseen issue, the deployment might fail. If you're deploying through the portal,
we recommend choosing a different image SKU train (for example, 12-SP4 instead of
15-SP3) until the issue is resolved. However, if you're deploying via the API or CLI,
you can provide any other available image version. To view the available image
versions from a publisher, use the Get-AzVMImage PowerShell cmdlet or the
az vm image list Azure CLI command.
Azure Center for SAP solutions now supports deployment of SAP system VMs with
custom OS images along with the Azure Marketplace images. For deployment
using custom OS images, follow the steps here.
Create deployment
1. Sign in to the Azure portal .
2. In the search bar, enter and select Azure Center for SAP solutions.
3. On the Azure Center for SAP solutions landing page, select Create a new SAP
system.
4. On the Create Virtual Instance for SAP solutions page, on the Basics tab, fill in the
details for your project.
a. For Subscription, select the Azure subscription into which you're deploying the
infrastructure.
b. For Resource group, select the resource group for all resources that the VIS
creates.
5. Under Instance details, enter the details for your SAP instance.
a. For Name, enter the three-character SAP system identifier (SID). The VIS uses the
same name as the SID.
b. For Region, select the Azure region into which you're deploying the resources.
h. For Network, select the network you created previously with subnets.
i. For Application subnet and Database subnet, map the IP address ranges as
required. It's recommended to use a different subnet for each deployment. The
names including AzureFirewallSubnet, AzureFirewallManagementSubnet,
AzureBastionSubnet and GatewaySubnet are reserved names within Azure.
Please do not use these as the subnet names.
a. For Application OS image, select the OS image for the application server.
b. For Database OS image, select the OS image for the database server.
i. For Application OS image, select the image version from the Azure Compute
Gallery.
ii. For Database OS image, select the image version from the Azure Compute
Gallery.
c. For SSH public key source, select a source for the public key. You can choose to
generate a new key pair, use an existing key stored in Azure, or use an existing
public key stored on your local computer. If you don't have keys already saved,
it's recommended to generate a new key pair.
d. For Key pair name, enter a name for the key pair.
e. If you choose to use an existing public key stored in Azure, select the key in the
Stored Keys input.
f. Provide the corresponding SSH private key from a local file stored on your
computer, or copy and paste the private key.
g. If you choose to use an existing public key, you can either provide the SSH
public key from a local file stored on your computer or copy and paste the public key.
h. Provide the corresponding SSH private key from a local file stored on your
computer, or copy and paste the private key.
9. Under SAP Transport Directory, enter how you want to set up the transport
directory on this SID. This is applicable for Distributed with High Availability and
Distributed deployments only.
a. For SAP Transport Options, you can choose to Create a new SAP transport
Directory or Use an existing SAP transport Directory or completely skip the
creation of transport directory by choosing Don't include SAP transport
directory option. Currently, only NFS on AFS storage account fileshares is
supported.
b. If you choose to Create a new SAP transport Directory, this option creates and
mounts a new transport fileshare on the SID. By default, this option creates an
NFS on AFS storage account and a transport fileshare in the resource group
where the SAP system will be deployed. However, you can choose to create this
storage account in a different resource group by providing the resource group
name in Transport Resource Group. You can also provide a custom name for
the storage account to be created under the Storage account name section.
Leaving Storage account name blank creates the storage account with the service
default name <SIDname>nfs<random characters> in the chosen transport
resource group. Creating a new transport directory uses ZRS-based replication
for zonal deployments and LRS-based replication for non-zonal deployments. If
your region doesn't support ZRS replication, deploying a zonal VIS will fail. In
such cases, you can deploy a transport fileshare outside Azure Center for SAP
solutions with ZRS replication, and then create a zonal VIS where you select
Use an existing SAP transport Directory to mount the pre-created fileshare.
c. If you choose to Use an existing SAP transport Directory, select the pre-existing
NFS fileshare under the File share name option. The existing transport fileshare
will only be mounted on this SID. The selected fileshare must be in the same
region as the SAP system being created. Currently, file shares in a different
region can't be selected. Provide the associated private endpoint of the storage
account where the selected fileshare exists under the Private Endpoint option.
d. You can skip the creation of the transport file share by selecting the Don't
include SAP transport directory option. The transport fileshare will be neither
created nor mounted for this SID.
10. Under Configuration Details, enter the FQDN for your SAP system.
a. For SAP FQDN, provide only the domain name for your system, such as
"sap.contoso.com".
11. Under User assigned managed identity, provide the identity which Azure Center
for SAP solutions will use to deploy infrastructure.
a. For Managed identity source, choose whether you want the service to create a
new managed identity or use an existing identity. If you allow the service to
create a managed identity, acknowledge the checkbox that asks for your consent
for the identity to be created and for Contributor role access to be added for
all resource groups.
b. For Managed identity name, enter a name for a new identity you want to create
or select an existing identity from the drop down menu. If you are selecting an
existing identity, it should have Contributor role access on the Subscription or
on Resource Groups related to this SAP system you are trying to deploy. That is,
it requires Contributor access to the SAP application Resource Group, Virtual
Network Resource Group and Resource Group which has the existing SSHKEY. If
you wish to later install the SAP system using Azure Center for SAP Solutions,
we also recommend giving the Storage Blob Data Reader and Reader and Data
Access roles on the Storage Account which has the SAP software media.
12. Under Managed resource settings, choose the network settings for the managed
storage account deployed into your subscription. This storage account is required
for Azure Center for SAP solutions to orchestrate the deployment of the new SAP
system and to power the SAP management capabilities.
a. For Storage account network access, select Enable access from specific virtual
network for enhanced network security access for the managed storage
account. This option ensures that this storage account is accessible only from
the virtual network in which the SAP system exists.
) Important
To use the secure network access option, you must enable Microsoft.Storage
service endpoint on the Application and Database subnets. You can learn
more about storage account network security in this documentation. Private
endpoint on managed storage account is not currently supported in this
scenario.
When you choose to limit network access to specific virtual networks, Azure Center
for SAP solutions service accesses this storage account using trusted access based
on the managed identity associated with the VIS resource.
14. In the Virtual machines tab, generate SKU size and total VM count
recommendations for each SAP instance from Azure Center for SAP solutions.
c. For Memory size for database (GiB), provide the total memory size required for
the database tier. For example, 1024. The value must be greater than zero, and
less than or equal to 11,400.
e. Review the VM size and count recommendations for ASCS, Application Server,
and Database instances.
g. To change the Application server count, enter a new count for Number of VMs
under Application virtual machines.
The number of VMs for ASCS and Database instances isn't editable. The
default number for each is 2.
Azure Center for SAP solutions automatically configures a database disk layout
for the deployment. To view the layout for a single database server, make sure
to select a VM SKU. Then, select View disk configuration. If there's more than
one database server, the layout applies to each server.
16. In the Visualize Architecture tab, visualize the architecture of the VIS that you're
deploying.
a. To view the visualization, make sure to configure all the inputs listed on the tab.
b. Optionally, click and drag resources or containers to move them around visually.
c. Click on Reset to reset the visualization to its default state. That is, revert any
changes you might have made to the position of resources or containers.
d. Click on Scale to fit to reset the visualization to its default zoom level.
17. Optionally, enter tags to apply to all resources created by the Azure Center for SAP
solutions process. These resources include the VIS, ASCS instances, Application
Server instances, Database instances, VMs, disks, and NICs.
a. Make sure the validations have passed, and there are no errors listed.
b. Review the Terms of Service, and select the acknowledgment if you agree.
c. Select Create.
20. Wait for the infrastructure deployment to complete. Numerous resources are
deployed and configured. This process takes approximately 7 minutes.
Before you use an image from Azure Marketplace for customization, check the list
of supported OS image versions in Azure Center for SAP solutions. Bring your own
image (BYOI) is supported only for the OS versions that Azure Center for SAP
solutions supports. Make sure that Azure Center for SAP solutions supports the
image, or else the deployment will fail with the following error: The resource ID provided consists of an
OS image which is not supported in ACSS. Please ensure that the OS image version is
supported in ACSS for a successful installation.
Before beginning the deployment, make sure the image is available in Azure
Compute Gallery.
Azure Center for SAP Solutions validates the base operating system version of the
custom OS Image is available in the supportability matrix in Azure Center for SAP
Solutions. If the versions are unsupported, the deployment fails. To fix this
problem, delete the VIS and infrastructure resources from the resource group, then
deploy again with a supported image.
Make sure the image version that you're using is compatible with the SAP software
version.
Confirm deployment
To confirm a deployment is successful:
1. In the Azure portal , search for and select Virtual Instances for SAP solutions.
2. On the Virtual Instances for SAP solutions page, select the Subscription filter, and
choose the subscription where you created the deployment.
3. In the table of records, find the name of the VIS. The Infrastructure column value
shows Deployed for successful deployments.
If the deployment fails, delete the VIS resource in the Azure portal, then recreate the
infrastructure.
Next steps
Install SAP software on your infrastructure
Get SAP installation media
Article • 09/07/2023
After you've created infrastructure for your new SAP system using Azure Center for SAP
solutions, you need to install the SAP software on your SAP system. However, before you
can do this installation, you need to get and upload the SAP installation media for use
with Azure Center for SAP solutions.
In this how-to guide, you'll learn how to get the SAP software installation media through
different methods. You'll also learn how to upload the SAP media to an Azure Storage
account to prepare for installation.
Prerequisites
An Azure subscription.
An Azure account with Contributor role access to the subscriptions and resource
groups in which the Virtual Instance for SAP solutions exists.
A User-assigned managed identity with Storage Blob Data Reader or Reader and
Data Access roles on the storage account which has the SAP software.
A network set up for your infrastructure deployment.
A deployment of S/4HANA infrastructure.
The SSH private key for the virtual machines in the SAP system. You generated this
key during the infrastructure deployment.
If you're installing a Highly Available (HA) SAP system, get the Service Principal
identifier (SPN ID) and password to authorize the Azure fence agent (fencing
device) against Azure resources.
For more information, see Use Azure CLI to create an Azure AD app and
configure it to access Media Services API.
For an example, see the Red Hat documentation for Creating an Azure Active
Directory Application .
To avoid frequent password expiry, use the Azure Command-Line Interface
(Azure CLI) to create the Service Principal identifier and password instead of the
Azure portal.
Required components
The following components are necessary for the SAP installation.
SAP software installation media (part of the sapbits container described later in
this article)
All essential SAP packages (SWPM, SAPCAR, etc.)
SAP software (for example, S/4HANA 2021 ISS 00)
Supporting software packages for the installation process. (These packages are
downloaded automatically and used by Azure Center for SAP solutions during the
installation.)
pip3 version pip-21.3.1.tar.gz
wheel version 0.38.1
jq version 1.6
ansible version 2.11.12
The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions.
These YAML files list all required SAP packages for the SAP software installation.
There's a main BOM ( S41909SPS03_v0011ms.yaml , S42020SPS03_v0003ms.yaml ,
S4HANA_2021_ISS_v0001ms.yaml , S42022SPS00_v0001ms.yaml ) and dependent BOMs
( HANA_2_00_059_v0004ms.yaml , HANA_2_00_064_v0001ms.yaml ,
SUM20SP15_latest.yaml , SWPM20SP13_latest.yaml ). They provide the following
information:
The full name of the SAP package ( name )
The package name with its file extension as downloaded ( archive )
The checksum of the package as specified by SAP ( checksum )
The shortened filename of the package ( filename )
The SAP URL to download the software ( url )
Template or INI files, which are stack XML files required to run the SAP packages.
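The checksum field in each BOM entry can be used to verify a downloaded package before installation. This is a minimal sketch that assumes, for illustration only, a SHA-256 digest; the entry values are hypothetical, and the actual BOM checksum algorithm may differ:

```python
import hashlib

# Hypothetical BOM entry using the fields listed above; the values are
# illustrative, not a real SAP package.
bom_entry = {
    "name": "Example SAP package",
    "archive": "EXAMPLE_PACKAGE-12345678.SAR",
    "checksum": hashlib.sha256(b"example package bytes").hexdigest(),
    "filename": "EXAMPLE.SAR",
    "url": "https://softwaredownloads.sap.com/example",
}

def verify_package(data: bytes, entry: dict) -> bool:
    """Compare a downloaded package's SHA-256 digest against the BOM checksum."""
    return hashlib.sha256(data).hexdigest() == entry["checksum"]

print(verify_package(b"example package bytes", bom_entry))  # True
```

Verifying each archive against its BOM checksum catches corrupted or incomplete downloads before the installation process consumes them.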
1. Create an Azure Storage account through the Azure portal. Make sure to create the
storage account in the same subscription as your SAP system infrastructure.
a. On the storage account's sidebar menu, select Containers under Data storage.
b. Select + Container.
d. Select Create.
3. Grant the User-assigned managed identity, which was used during infrastructure
deployment, Storage Blob Data Reader and Reader and Data Access role access
on this storage account.
1. Create an Ubuntu 20.04 VM in Azure. For more information, see how to create a
Linux VM in the Azure portal.
Bash
4. If the Azure CLI version isn't 2.30.0 or higher, update the Azure CLI. You can
check the version by running the following command:
Azure CLI
az --version
5. Sign in to Azure.
Azure CLI
az login
6. Install pip3.
Bash
cd sap-automation/
git status
1. Run the Ansible script playbook_bom_download with your own information. With
the exception of the s_password variable, enter the actual values within double
quotes and without the angle brackets. For the s_password variable, use single
quotes. The Ansible command that you run should look like:
Bash
4. For <bom_base_name> , use the SAP version you want to install:
S41909SPS03_v0011ms, S42020SPS03_v0003ms, S4HANA_2021_ISS_v0001ms,
or S42022SPS00_v0001ms.
5. For <s_user> , use your SAP username.
7. For <storageAccountAccessKey> , use your storage account's access key. To find the
storage account's key:
a. Find the storage account in the Azure portal that you created.
b. On the storage account's sidebar menu, select Access keys under Security +
networking.
8. For <containerBasePath> , use the path to your sapbits container. To find the
container path:
a. Find the storage account that you created in the Azure portal.
10. Here, orchestration_ansible_user is the user with admin privileges (for example,
root).
Now you can install the SAP software through Azure Center for SAP solutions.
Don't change the folder name structure for any steps in this process. Otherwise, the
installation process fails.
1. Create a new Azure Storage account for storing the software components.
2. Grant the roles Storage Blob Data Reader and Reader and Data Access to the
user-assigned managed identity, which you used during infrastructure deployment.
3. Create a container within the storage account. You can choose any container name,
such as sapbits .
7. In the boms folder, create four subfolders with the following names, depending on
the SAP version that you're using:
i. HANA_2_00_059_v0003ms
ii. S41909SPS03_v0011ms
iii. SWPM20SP12_latest
iv. SUM20SP14_latest
i. HANA_2_00_064_v0001ms
ii. S42020SPS03_v0003ms
iii. SWPM20SP12_latest
iv. SUM20SP14_latest
i. HANA_2_00_064_v0001ms
ii. S4HANA_2021_ISS_v0001ms
iii. SWPM20SP12_latest
iv. SUM20SP14_latest
i. HANA_2_00_071_v0001ms
ii. S42022SPS00_v0001ms
iii. SWPM20SP15_latest
iv. SUM20SP17_latest
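The subfolder layout in the step above can be scripted. This is a sketch using the S/4HANA 2021 folder names from the list; the other versions follow the same pattern with their own folder names:

```python
from pathlib import Path
import tempfile

# Subfolders of the 'boms' folder per SAP version (names from the list above).
BOM_FOLDERS = {
    "S4HANA_2021_ISS_v0001ms": [
        "HANA_2_00_064_v0001ms",
        "S4HANA_2021_ISS_v0001ms",
        "SWPM20SP12_latest",
        "SUM20SP14_latest",
    ],
}

def create_bom_layout(root: str, version: str) -> list:
    """Create the boms/<subfolder> directories for the chosen SAP version."""
    created = []
    for sub in BOM_FOLDERS[version]:
        path = Path(root, "boms", sub)
        path.mkdir(parents=True, exist_ok=True)
        created.append(str(path.relative_to(root)))
    return created

with tempfile.TemporaryDirectory() as tmp:
    print(create_bom_layout(tmp, "S4HANA_2021_ISS_v0001ms"))
```

As the text warns, don't change the folder name structure; the installation process depends on these exact names.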
1. Upload the following YAML files to the folders with the same name. Make sure to
use the files that correspond to the SAP version that you're using.
i. S41909SPS03_v0011ms.yaml
ii. HANA_2_00_059_v0004ms.yaml
i. S42020SPS03_v0003ms.yaml
ii. HANA_2_00_064_v0001ms.yaml
i. S4HANA_2021_ISS_v0001ms.yaml
ii. HANA_2_00_064_v0001ms.yaml
i. S42022SPS00_v0001ms.yaml
ii. HANA_2_00_071_v0001ms.yaml
i. HANA_2_00_055_v1_install.rsp.j2
ii. S41909SPS03_v0011ms-app-inifile-param.j2
iii. S41909SPS03_v0011ms-dbload-inifile-param.j2
iv. S41909SPS03_v0011ms-ers-inifile-param.j2
v. S41909SPS03_v0011ms-generic-inifile-param.j2
vi. S41909SPS03_v0011ms-pas-inifile-param.j2
vii. S41909SPS03_v0011ms-scs-inifile-param.j2
viii. S41909SPS03_v0011ms-scsha-inifile-param.j2
ix. S41909SPS03_v0011ms-web-inifile-param.j2
i. HANA_2_00_055_v1_install.rsp.j2
ii. HANA_2_00_install.rsp.j2
iii. S42020SPS03_v0003ms-app-inifile-param.j2
iv. S42020SPS03_v0003ms-dbload-inifile-param.j2
v. S42020SPS03_v0003ms-ers-inifile-param.j2
vi. S42020SPS03_v0003ms-generic-inifile-param.j2
vii. S42020SPS03_v0003ms-pas-inifile-param.j2
viii. S42020SPS03_v0003ms-scs-inifile-param.j2
ix. S42020SPS03_v0003ms-scsha-inifile-param.j2
i. HANA_2_00_055_v1_install.rsp.j2
ii. HANA_2_00_install.rsp.j2
iii. NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB
iv. NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params
v. NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params
vi. NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params
vii. NW_Users_Create-GENERIC.HDB.PD_Distributed.params
viii. S4HANA_2021_ISS_v0001ms-app-inifile-param.j2
ix. S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2
x. S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2
xi. S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2
xii. S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2
xiii. S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2
xiv. S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2
xv. S4HANA_2021_ISS_v0001ms-web-inifile-param.j2
i. S42022SPS00_v0001ms-app-inifile-param.j2
ii. S42022SPS00_v0001ms-dbload-inifile-param.j2
iii. S42022SPS00_v0001ms-ers-inifile-param.j2
iv. S42022SPS00_v0001ms-generic-inifile-param.j2
v. S42022SPS00_v0001ms-pas-inifile-param.j2
vi. S42022SPS00_v0001ms-scs-inifile-param.j2
vii. S42022SPS00_v0001ms-scsha-inifile-param.j2
viii. S42022SPS00_v0001ms-web-inifile-param.j2
5. Upload all the files that you downloaded to the templates folder.
i. S41909SPS03_v0011ms.yaml
ii. HANA_2_00_059_v0004ms.yaml
i. S42020SPS03_v0003ms.yaml
ii. HANA_2_00_064_v0001ms.yaml
i. S4HANA_2021_ISS_v0001ms.yaml
ii. HANA_2_00_064_v0001ms.yaml
i. S42022SPS00_v0001ms.yaml
ii. HANA_2_00_071_v0001ms.yaml
8. Repeat the previous step for the main and dependent BOM files.
9. Upload all the packages that you downloaded to the archives folder. Don't
rename the files.
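As a sketch, you can use AzCopy for the bulk upload. The local folder name is a placeholder, and the sapfiles/archives path assumes the same container layout as the BOM directory example later in this guide:

```shell
# Sign in, then copy the downloaded packages without renaming them.
azcopy login
azcopy copy "./downloads/*" \
  "https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/archives" \
  --recursive
```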
e. Save and reupload the YAML file. Make sure you only have one YAML file in the
subfolder ( S41909SPS03_v0011ms or S42020SPS03_v0003ms or
S4HANA_2021_ISS_v0001ms or S42022SPS00_v0001ms ) of the boms folder.
Now you can install the SAP software through Azure Center for SAP solutions.
Next steps
Install the SAP software through Azure Center for SAP solutions
Install SAP software
Article • 05/15/2023
After you've created infrastructure for your new SAP system using Azure Center for SAP
solutions, you need to install the SAP software.
In this how-to guide, you'll learn two ways to install the SAP software for your system.
Choose whichever method is appropriate for your use case. You can either:
Install the SAP software through Azure Center for SAP solutions directly using the
installation wizard.
Install the SAP software outside of Azure Center for SAP solutions, then detect the
installed system from the service.
Prerequisites
Review the prerequisites for your preferred installation method: through the Azure
Center for SAP solutions installation wizard or through an outside method
Only the following scenarios are supported for this installation method:
Infrastructure for S/4HANA was created through Azure Center for SAP solutions.
The S/4HANA application was installed outside Azure Center for SAP solutions
through a different tool.
Only S/4HANA installations done outside Azure Center for SAP solutions can be
detected. If you installed an SAP application other than S/4HANA, the detection
fails.
If you want a fresh installation of S/4HANA software on the infrastructure deployed
by Azure Center for SAP solutions, use the wizard installation option instead.
4. On the Overview page for the Virtual Instance for SAP solutions resource, select
Install SAP software.
5. In the Prerequisites tab of the wizard, review the prerequisites. Then, select Next.
a. For Have you uploaded the software to an Azure storage account?, select Yes.
b. For Software version, select SAP S/4HANA 1909 SPS03 , SAP S/4HANA
2020 SPS 03 , or SAP S/4HANA 2021 ISS 00 . Only the versions that are supported
with the OS version used to deploy the infrastructure are available for
selection.
c. For BOM directory location, select Browse and find the path to your BOM file.
For example, https://<your-storage-
account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.ya
ml .
d. For High Availability (HA) systems only, enter the client identifier for the
STONITH Fencing Agent service principal for Fencing client ID.
e. For High Availability (HA) systems only, enter the password for the Fencing
Agent service principal for Fencing client password.
f. Select Next.
9. Wait for the installation to complete. The process takes approximately three hours.
You can see the progress, along with estimated times for each step, in the wizard.
10. After the installation completes, sign in with your SAP system credentials. To find
the SAP system and HANA DB credentials for the newly installed system, see how
to manage a Virtual Instance for SAP solutions.
2. Search for and select Azure Center for SAP solutions in the Azure portal's search
bar.
3. Select Virtual Instances for SAP solutions. Then select the Virtual Instance for SAP
solutions resource that you want to detect.
4. On the resource's overview page, select Confirm already installed software. Read
all the instructions, then select Confirm. Extensions are installed on the ASCS,
Application Server, and database virtual machines and begin discovering SAP metadata.
5. Wait for the Virtual Instance for SAP solutions resource to be detected and
populated with the metadata. The process completes after all SAP system
components have been detected.
6. Review the Virtual Instance for SAP solutions resource in the Azure portal. The
resource page now shows the SAP system resources, and information about the
system.
Limitations
The following are known limitations and issues.
Application servers
You can install a maximum of 10 Application Servers, excluding the Primary Application
Server.
1. Download a new valid package from the SAP software downloads page.
2. Upload the new package in the archives folder of your Azure Storage account.
3. Update the following contents in the BOM file(s) that reference the updated
component:
permissions to 0755
url to the new SAP download URL
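As a hypothetical sketch of such an update, an entry in the BOM YAML could look like the following. The component name, archive file name, and file ID are placeholders, and the exact schema is defined by the SAP deployment automation framework:

```yaml
materials:
  media:
    - name: SAPCAR                         # component whose package you replaced
      archive: SAPCAR_1115-70006178.EXE    # hypothetical package file name
      permissions: '0755'                  # updated permissions
      url: https://softwaredownloads.sap.com/file/<new-file-id>   # new SAP download URL
```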
2. Before running the Ansible playbook set the SPASS environment variable below.
Single quotes should be present in the command.
Bash
export SPASS='password_with_special_chars'
Bash
ansible-playbook ./sap-
automation/deploy/ansible/playbook_bom_downloader.yaml -e
"bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -
e "s_user=<username>" -e "s_password=$SPASS" -e "sapbits_access_key=
<storageAccountAccessKey>" -e "sapbits_location_base_path=
<containerBasePath>"
Next steps
Find SAP and HANA passwords through Azure Center for SAP solutions
Monitor SAP system from Azure portal
Manage a Virtual Instance for SAP solutions
Register existing SAP system
Article • 01/19/2024
In this how-to guide, you learn how to register an existing SAP system with Azure Center
for SAP solutions. After you register an SAP system with Azure Center for SAP solutions,
you can use its visualization, management and monitoring capabilities through the
Azure portal. For example, you can:
View and track the SAP system as an Azure resource, called the Virtual Instance for
SAP solutions (VIS).
Get recommendations for your SAP infrastructure, Operating System
configurations etc. based on quality checks that evaluate best practices for SAP on
Azure.
Get health and status information about your SAP system.
Start and Stop SAP application tier.
Start and Stop individual instances of ASCS, App server and HANA Database.
Monitor the Azure infrastructure metrics for the SAP system resources.
View Cost Analysis for the SAP system.
When you register a system with Azure Center for SAP solutions, the following resources
are created in your Subscription:
Virtual Instance for SAP solutions, Central service instance for SAP solutions, App
server instance for SAP solutions and Database for SAP solutions. These resource
types are created to represent the SAP system on Azure. These resources do not
have any billing or cost associated with them.
A managed resource group that is used by Azure Center for SAP solutions service.
A Storage account within the managed resource group that contains blobs. These
blobs are scripts and logs necessary for the service to provide various capabilities
that include discovering and registering all components of SAP system.
7 Note
You can customize the names of the Managed resource group and the Storage
account which get deployed as part of the registration process by using Azure
Portal, Azure PowerShell or Azure CLI interfaces, when you register your systems.
7 Note
You can now enable secure access from specific virtual networks to the ACSS
managed storage account using the new option in the registration experience.
Prerequisites
Supported systems
You can register SAP systems with Azure Center for SAP solutions that run on the
following configurations:
The following SAP system configurations aren't supported in Azure Center for SAP
solutions:
Azure Center for SAP solutions uses this user-assigned managed identity to install VM
extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center
for SAP solutions to discover the SAP system components, and other SAP system
metadata. User-assigned managed identity is required to enable SAP system monitoring
and management capabilities.
) Important
When you limit storage account network access to specific virtual networks, you
have to configure Microsoft.Storage service endpoint on all subnets related to the
SAP system that you are registering. Without the service endpoint enabled, you will
not be able to successfully register the system. Private endpoint on managed
storage account is not currently supported in this scenario.
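Configuring the Microsoft.Storage service endpoint on a subnet can be done with the Azure CLI. The resource group, virtual network, and subnet names below are placeholders:

```shell
# Enable the Microsoft.Storage service endpoint on one subnet.
# Repeat for every subnet related to the SAP system you're registering.
az network vnet subnet update \
  --resource-group my-sap-rg \
  --vnet-name my-sap-vnet \
  --name my-sap-subnet \
  --service-endpoints Microsoft.Storage
```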
When you choose to limit network access to specific virtual networks, Azure Center for
SAP solutions service accesses this storage account using trusted access based on the
managed identity associated with the VIS resource.
1. Sign in to the Azure portal . Make sure to sign in with an Azure account that has
Azure Center for SAP solutions administrator and Managed Identity Operator
role access to the subscription or resource groups where the SAP system exists. For
more information, see the resource permissions explanation.
2. Search for and select Azure Center for SAP solutions in the Azure portal's search
bar.
3. On the Azure Center for SAP solutions page, select Register an existing SAP
system.
4. On the Basics tab of the Register existing SAP system page, provide information
about the SAP system.
a. For ASCS virtual machine, select Select ASCS virtual machine and select the
ASCS VM resource.
c. For SAP product, select the SAP system product from the drop-down menu.
d. For Environment, select the environment type from the drop-down menu. For
example, production or non-production environments.
g. For Managed resource group name, optionally enter a resource group name as
per your organization's naming policies. This resource group is managed by
ACSS service.
h. For Managed storage account name, optionally enter a storage account name
as per your organization's naming policies. This storage account is managed by
ACSS service.
i. For Storage account network access, select Enable access from specific virtual
network for enhanced network security access for the managed storage
account.
j. Select Review + register to discover the SAP system and begin the registration
process.
k. On the Review + register pane, make sure your settings are correct. Then, select
Register.
5. Wait for the VIS resource to be created. The VIS name is the same as the SID name.
The VIS deployment finishes after all SAP system components are discovered from
the ASCS VM that you selected.
You can now review the VIS resource in the Azure portal. The resource page shows the
SAP system resources, and information about the system.
If the registration doesn't succeed, see what to do when an SAP system registration fails
in Azure Center for SAP solutions. Once you have fixed the configuration causing the
issue, retry registration using the Retry action available on the VIS resource page on
Azure portal.
3. If the service is not running, restart it.
4. If this does not solve the issue, try updating the VM Agent by using this document.
5. If the VM Agent does not exist or needs to be reinstalled, follow this
documentation.
Next steps
Monitor SAP system from Azure portal
Manage a VIS
Manage a Virtual Instance for SAP
solutions
Article • 05/15/2023
In this article, you'll learn how to view the Virtual Instance for SAP solutions (VIS)
resource created in Azure Center for SAP solutions through the Azure portal. You can use
these steps to find your SAP system's properties and connect parts of the VIS to other
resources like databases.
Prerequisites
An Azure subscription in which you have a successfully created Virtual Instance for
SAP solutions(VIS) resource.
An Azure account with Azure Center for SAP solutions administrator role access
to the subscription or resource groups where you have the VIS resources.
2. Sign in with your Azure account that has the necessary role access as described in
the prerequisites.
3. In the search field in the navigation menu, enter and select Azure Center for SAP
solutions.
4. On the Azure Center for SAP solutions overview page, search for and select
Virtual Instances for SAP solutions in the sidebar menu.
5. On the Virtual Instances for SAP solutions page, select the VIS that you want to
view.
) Important
Each VIS resource has a unique managed resource group associated with it. This
resource group contains resources like the storage account and key vault, which are
critical for Azure Center for SAP solutions service to provide capabilities like
deployment of infrastructure for a new system, installation of SAP software,
registration of existing systems and all other SAP system management functions.
Please do not delete this RG or any resources within it. If they are deleted, you will
have to re-register the VIS to use any capabilities of ACSS.
Monitor VIS
To see infrastructure-based metrics for the VIS, open the VIS in the Azure portal. On the
Overview pane, select the Monitoring tab. You can see the following metrics:
VM utilization by ASCS and Application Server instances. The graph shows CPU
usage percentage for all VMs that support the ASCS and Application Server
instances.
VM utilization by the database instance. The graph shows CPU usage percentage
for all VMs that support the database instance.
IOPS consumed by the database instance's data disk. The graph shows the
percentage of disk utilization by all VMs that support the database instance.
3. On the resource group's page, select the Key vault resource in the table.
4. On the key vault's page, select Secrets in the navigation menu under Settings.
5. Make sure that you have access to all the secrets. If you have correct permissions,
you can see the SAP password file listed in the table, which hosts the global
password for your SAP system.
6. Select the SAP password file name to open the secret's page.
If you get the warning "The operation 'List' is not enabled in this key vault's access
policy" with the message "You are unauthorized to view these contents":
1. Make sure that you're responsible to manage these secrets in your organization.
2. In the sidebar menu, under Settings, select Access policies.
3. On the access policies page for the key vault, select + Add Access Policy.
4. In the pane Add access policy, configure the following settings.
a. For Configure from template (optional), select Key, Secret, & Certificate
Management.
b. For Key permissions, select the keys that you want to use.
c. For Secret permissions, select the secrets that you want to use.
d. For Certificate permissions, select the certificates that you want to use.
e. For Select principal, assign your own account name.
5. Select Add to add the policy.
6. In the access policy's menu, select Save to save your settings.
7. In the sidebar menu, under Settings, select Secrets.
8. On the secrets page for the key vault, make sure you can now see the SAP
password file.
Delete VIS
When you delete a VIS, you also delete the managed resource group and all instances
that are attached to the VIS. That is, the VIS, ASCS, Application Server, and Database
instances are deleted. Any Azure physical resources aren't deleted when you delete a
VIS. For example, the VMs, disks, NICs, and other resources aren't deleted.
2 Warning
Deleting a VIS is a permanent action! It's not possible to restore a deleted VIS.
To delete a VIS:
3. In the deletion pane, make sure that you want to delete this VIS and related
resources. You can see a count for each type of resource to be deleted.
6. Wait for the deletion operation to complete for the VIS and related resources.
After you delete a VIS, you can register the SAP system again. Open Azure Center for
SAP solutions in the Azure portal, and select Register an existing SAP system.
Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for your VIS
Start and stop SAP systems, instances
and HANA database
Article • 10/31/2023
In this how-to guide, you'll learn to start and stop your SAP systems through the Virtual
Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
Through the Azure portal, Azure PowerShell, CLI and REST API interfaces, you can start
and stop:
The entire SAP application tier in one go, which includes the ABAP SAP Central
Services (ASCS) and Application Server instances.
Specific SAP instance, such as the application server instance.
HANA Database
You can start and stop instances and HANA database in the following types of
deployments:
Single-Server
High Availability (HA)
Distributed Non-HA
SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
SAP HA systems that use SUSE and RHEL Pacemaker clustering software and
Windows Server Failover Clustering (WSFC). Other certified cluster software isn't
currently supported.
Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
For the start operation to work, the underlying virtual machines (VMs) of the SAP
instances must be running. This capability starts or stops the SAP application
instances, not the VMs that make up the SAP system resources.
The sapstartsrv service must be running on all VMs related to the SAP system.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
For HANA Database, Stop operation is initiated only when the cluster maintenance
mode is in Disabled status. Similarly, Start operation is initiated only when the
cluster maintenance mode is in Enabled status.
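To verify that sapstartsrv is responding on a VM, a quick check with sapcontrol can help; the instance number 00 is a placeholder:

```shell
# Run as the <sid>adm user on the VM; replace 00 with your instance number.
sapcontrol -nr 00 -function GetProcessList
```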
7 Note
When you deploy an SAP system using Azure Center for SAP solutions, RHEL and
SUSE cluster connector for highly available systems is already configured on them
as part of the SAP software installation process.
Supported scenarios
The following scenarios are supported when Starting and Stopping SAP systems:
SAP systems that run on Windows, RHEL, and SUSE Linux operating systems.
Stopping and Starting SAP system or individual instances from the VIS resource
only stops or starts the SAP application. The underlying VMs are not stopped or
started.
Stopping a highly available SAP system from the VIS resource gracefully stops the
SAP instances in the right order and does not result in a failover of Central Services
instance.
Stopping the HANA Database from the VIS resource results in the entire HANA
instance to be stopped. In case of HANA MDC with multiple tenant DBs, the entire
instance is stopped and not the specific Tenant DB.
For highly available (HA) HANA databases, start and stop operations through
Virtual Instance for SAP solutions resource are supported only when cluster
management solution is in place. Any other HANA database high availability
configurations without a cluster are not currently supported when starting and
stopping using Virtual Instance for SAP solutions resource.
7 Note
When multiple application server instances run on a single virtual machine and you
intend to stop all of these instances, you can currently stop them only one instance
at a time. If you attempt to stop them in parallel, only one stop request is accepted
and all others fail.
Stop SAP system
To stop an SAP system in the VIS resource:
2. Search for and select Azure Center for SAP solutions in the search bar.
4. In the table of VIS resources, select the name of the VIS you want to stop.
5. Select the Stop button. If you can't select this button, the SAP system already isn't
running.
A notification pane then opens with a Stopping Virtual Instance for SAP solutions
message.
2. Search for and select Azure Center for SAP solutions in the search bar.
4. In the table of VIS resources, select the name of the VIS you want to start.
5. Select the Start button. If you can't select this button, make sure that you've
followed the prerequisites for the VMs within your SAP system.
A notification pane then opens with a Starting Virtual Instance for SAP solutions
message. The VIS resource's Status also changes to Starting.
A notification pane then opens with a Started Virtual Instance for SAP solutions
message.
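The same start and stop operations are also available outside the portal. As a sketch with the Azure CLI, assuming the az workloads extension is installed and using placeholder resource names:

```shell
# Stop the SAP application tier of the VIS named X00.
az workloads sap-virtual-instance stop --sap-virtual-instance-name X00 --resource-group my-sap-rg

# Start it again once the underlying VMs are running.
az workloads sap-virtual-instance start --sap-virtual-instance-name X00 --resource-group my-sap-rg
```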
Troubleshooting
If the SAP system takes longer than 300 seconds to complete a start or stop operation,
the operation terminates. After the operation terminates, the monitoring service
continues to check and update the status of the SAP system in the VIS resource.
Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Soft stop SAP systems, application
server instances and HANA database
Article • 11/20/2023
In this how-to guide, you'll learn to soft stop your SAP systems, individual instances and
HANA database through the Virtual Instance for SAP solutions (VIS) resource in Azure
Center for SAP solutions. You can stop your system smoothly by making sure that
existing user connections, batch processes, etc. are drained first.
Using the Azure PowerShell, CLI and REST API interfaces, you can:
Soft stop the entire SAP system, that is the application server instances and central
services instance.
Soft stop specific SAP application server instances.
Soft stop HANA database.
Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. For
more information, see how to use granular permissions that govern start and stop
actions on the VIS, individual SAP instances and HANA databases.
For HA deployments, the HA interface cluster connector for SAP
( sap_vendor_cluster_connector ) must be installed on the ASCS instance. For more
information, see the SUSE connector specifications and RHEL connector
specifications .
For HANA Database, Stop operation is initiated only when the cluster maintenance
mode is in Disabled status.
When attempting to soft stop an SAP system or application server instance using
Azure Center for SAP solutions, the soft stop timeout value must be greater than 0
and less than 82800 seconds.
PowerShell
Stop-AzWorkloadsSapVirtualInstance -InputObject
/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirt
ualInstances/DB0 -SoftStopTimeoutSecond 300
Azure CLI
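As a sketch of the equivalent Azure CLI call, assuming the az workloads extension is installed and that the timeout flag mirrors the PowerShell -SoftStopTimeoutSecond parameter:

```shell
az workloads sap-virtual-instance stop \
  --id /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0 \
  --soft-stop-timeout-seconds 300
```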
To soft stop an application server represented as an App server instance for SAP solutions
resource:
Using PowerShell
Use the Stop-AzWorkloadsSapApplicationInstance command:
PowerShell
Stop-AzWorkloadsSapApplicationInstance -InputObject
/subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirt
ualInstances/DB0/applicationInstances/app0 -SoftStopTimeoutSecond 300
Using CLI
Use the az workloads sap-application-server-instance stop command:
Azure CLI
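As a sketch, assuming the timeout flag mirrors the PowerShell -SoftStopTimeoutSecond parameter:

```shell
az workloads sap-application-server-instance stop \
  --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/applicationInstances/app0 \
  --soft-stop-timeout-seconds 300
```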
7 Note
When attempting to soft stop a HANA database instance using Azure Center for SAP
solutions, the soft stop timeout value must be greater than 0 and less than 1800
seconds.
Using PowerShell
Use the Stop-AzWorkloadsSapDatabaseInstance command:
PowerShell
Stop-AzWorkloadsSapDatabaseInstance -InputObject
/subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirt
ualInstances/DB0/databaseInstances/ab0 -SoftStopTimeoutSecond 300
Using CLI
Use the az workloads sap-database-instance stop command:
Azure CLI
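As a sketch, assuming the timeout flag mirrors the PowerShell -SoftStopTimeoutSecond parameter:

```shell
az workloads sap-database-instance stop \
  --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/databaseInstances/ab0 \
  --soft-stop-timeout-seconds 300
```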
In this how-to guide, you'll learn how to start and stop SAP systems and their underlying
virtual machines through the Virtual Instance for SAP solutions (VIS) resource in Azure
Center for SAP solutions. This capability simplifies stopping and starting SAP systems by
shutting down and bringing up the underlying infrastructure and the SAP application in
one command.
Start and stop the entire SAP application tier and its Virtual machines, which
includes ABAP SAP Central Services (ASCS) and Application Server instances.
Start and stop a specific SAP instance, such as the application server instance, and
its Virtual machines.
Start and stop HANA database instance and its Virtual machines.
) Important
The ability to start and stop virtual machines of an SAP system is available from API
Version 2023-10-01.
7 Note
You can schedule stop and start of SAP systems, HANA database at scale for your
SAP landscapes using the ARM template . This ARM template can be customized
to suit your own requirements.
Prerequisites
An SAP system that you've created in Azure Center for SAP solutions or registered
with Azure Center for SAP solutions.
Check that your Azure account has Azure Center for SAP solutions administrator
or equivalent role access on the Virtual Instance for SAP solutions resources. You
can learn more about the granular permissions that govern Start and Stop actions
on the VIS, individual SAP instances and HANA Database in this article.
Check that the User Assigned Managed Identity associated with the VIS resource
has Virtual Machine Contributor or equivalent role access. This is needed to be
able to Start and Stop VMs.
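Granting that role to the identity can be sketched with the Azure CLI; the identity and resource group names below are placeholders:

```shell
# Look up the principal ID of the user-assigned managed identity,
# then grant Virtual Machine Contributor at the resource group scope.
principal_id=$(az identity show --name my-vis-identity --resource-group my-sap-rg \
  --query principalId --output tsv)
az role assignment create --assignee-object-id "$principal_id" \
  --assignee-principal-type ServicePrincipal \
  --role "Virtual Machine Contributor" \
  --resource-group my-sap-rg
```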
Unsupported scenarios
The following scenarios are not currently supported when using the Start and Stop of
SAP, individual SAP instances, HANA database and their underlying VMs:
Starting and stopping systems when multiple SIDs run on the same set of virtual
machines.
Starting and stopping HANA databases with MCOS (Multiple Components in One
System) architecture, where multiple HANA instances run on the same set of virtual
machines.
Starting and stopping SAP application server or central services instances where
instances of multiple SIDs or multiple instances of the same SID run on the same
virtual machine.
) Important
For single-server deployments, when you want to stop SAP, HANA DB and the VM,
use stop VIS action to stop SAP application tier and then stop HANA database with
'deallocateVm' set to true. This ensures that SAP application and HANA database
are both stopped before stopping the VM.
7 Note
When stopping a VIS or an instance with 'DeallocateVm' option set to true, only
that VIS or instance is stopped and then the virtual machine is shutdown. SAP
instances of other SIDs are not stopped. Use the virtual machine stop option only
after all instances running on the VM are stopped.
HTTP
POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/start?api-
version=2023-10-01-preview
{
"startVm": true
}
HTTP
POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/stop?api-
version=2023-10-01-preview
{
"deallocateVm": true
}
HTTP
POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/d
b0/start?api-version=2023-10-01-preview
{
"startVm": true
}
HTTP
POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-
rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/d
b0/stop?api-version=2023-10-01-preview
{
"deallocateVm": true
}
Get quality checks and insights for a
Virtual Instance for SAP solutions
Article • 05/15/2023
The Quality Insights Azure workbook in Azure Center for SAP solutions provides insights
about the SAP system resources as a result of running more than 100 quality checks on
the VIS. The feature is part of the monitoring capabilities built into the Virtual Instance
for SAP solutions (VIS). These quality checks make sure that your SAP system uses Azure
and SAP best practices for reliability and performance.
In this how-to guide, you'll learn how to use quality checks and insights to get more
information about various configurations within your SAP system.
Prerequisites
An SAP system that you've created with Azure Center for SAP solutions or
registered with Azure Center for SAP solutions.
2. Search for and select Azure Center for SAP solutions in the Azure portal search
bar.
3. On the Azure Center for SAP solutions page's sidebar menu, select Virtual
Instances for SAP solutions.
4. On the Virtual Instances for SAP solutions page, select the VIS that you want to
get insights about.
5. On the sidebar menu for the VIS, under Monitoring select Quality Insights.
The table in the Advisor Recommendations tab shows all the recommendations for
ASCS, Application and Database instances in the VIS.
Select an instance name to see all recommendations, including which action to take to
resolve an issue.
7 Note
These quality checks run on all VIS instances at a regular frequency of once every
hour. The corresponding recommendations in Azure Advisor also refresh at the
same one-hour frequency. If you take action on one or more recommendations from
Azure Center for SAP solutions, wait for the next refresh to see any new
recommendations from Azure Advisor.
) Important
Azure Advisor filters out recommendations for Deleted Azure resources for 7 days.
Therefore, if you delete a VIS and then re-register it, you will be able to see Advisor
recommendations after 7 days of re-registration.
Get VM information
The Virtual Machine tab provides insights about the VMs in your VIS. There are multiple
subsections:
Azure Compute
Compute List
Compute Extensions
Compute + OS Disk
Compute + Data Disks
Azure Compute
The Azure Compute tab shows a summary graph of the VMs inside the VIS.
Compute List
The Compute List tab shows a table of information about the VMs inside the VIS. This
information includes the VM's name and state, SKU, OS, publisher, image version and
SKU, offer, Azure region, resource group, tags, and more.
You can toggle Show Help to see more information about the table data.
Select a VM name to see its overview page, and change settings like Boot Diagnostic.
Compute Extensions
The Compute Extensions tab shows information about your VM extensions. There are
three tabs within this section:
VM+Extensions
VM Extensions Status
Failed VM Extensions
VM + Extensions
VM Extensions Status
VM Extensions Status shows details about the VM extensions in each VM. You can see
each extension's state, version, and if AutoUpgrade is enabled.
Failed VM Extensions
Failed VM Extensions shows which VM extensions are failing in the selected VIS.
Compute + OS Disk
The Compute+OS Disk tab shows a table with OS disk configurations in the SAP system.
Accelerated Networking
Public IP
Backup
Load Balancer
Accelerated Networking
The Accelerated Networking tab shows if Accelerated Networking State is enabled for
each NIC in the VIS. It's recommended to enable this setting for reliability and
performance.
Public IP
The Public IP tab shows any public IP addresses that are associated with the NICs linked
to the VMs in the VIS.
Backup
The Backup tab shows a table of VMs that don't have Azure Backup configured. It's
recommended to use Azure Backup with your VMs.
Load Balancer
The Load Balancer tab shows information about load balancers connected to the
resource group(s) for the VIS. There are two subsections: Load Balancer Overview and
Load Balancer Monitor.
Load Balancer Key Metrics: a table that shows important information about the load balancers in the subscription where the VIS exists.
Backend health probe by Backend IP: a chart that shows the health probe status for each load balancer over time.
Next steps
Manage a VIS
Monitor SAP system from the Azure portal
View post-deployment cost analysis for
SAP system
Article • 05/15/2023
In this how-to guide, you'll learn how to view the running cost of your SAP systems
through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP
solutions.
After you deploy or register an SAP system as a VIS resource, you can view the cost of
running that SAP system on the VIS resource's page. This feature shows the post-deployment running costs in the context of your SAP system. When you have Azure
resources of multiple SAP systems in a single resource group, you no longer need to
analyze the cost for each system. Instead, you can easily view the system-level cost from
the VIS resource.
Note
If you register an existing SAP system as a VIS, the cost analysis only shows data
after the time of registration. Even if some infrastructure resources might have been
deployed before the registration, the cost analysis tags aren't applied to historical
data.
The following Azure resources aren't included in the SAP system-level cost analysis. This
list includes some resources that might be shared across multiple SAP systems.
Virtual networks
Storage accounts
Azure NetApp Files (ANF)
Azure key vaults
Azure Monitor for SAP solutions resources
Azure Backup resources
Cost and usage data is typically available within 8-24 hours. As such, your VIS resource
can take 8-24 hours to start showing cost analysis data.
Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Start and Stop SAP systems
Configure and monitor Azure Backup
status for your SAP system through
Virtual Instance for SAP solutions
(Preview)
Article • 11/15/2023
Note
The configuration of Backup from the Virtual Instance for SAP solutions feature is currently in preview.
In this how-to guide, you'll learn to configure and monitor Azure Backup for your SAP
system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for
SAP solutions.
When you configure Azure Backup from the VIS resource, you can enable Backup for your SAP Central Services instance, Application server and Database virtual machines, and the HANA Database in one step. For the HANA Database, Azure Center for SAP solutions automates the step of running the Pre-Registration script.
Once backup is configured, you can monitor the status of your Backup Jobs for both
virtual machines and HANA DB from the VIS.
If you have already configured Backup from Azure Backup Center for your SAP VMs and HANA DB, the VIS resource automatically detects this and enables you to monitor the status of Backup jobs.
Prerequisites
A Virtual Instance for SAP solutions (VIS) resource representing your SAP system
on Azure Center for SAP solutions.
An Azure account with Contributor role access on the Subscription in which your
SAP system exists.
To be able to configure Backup from the VIS resource, first assign the following roles to the Azure Workloads Connector Service first-party app:
1. Backup Contributor role access on the Subscription or specific Resource group
which has the Recovery services vault that will be used for Backup.
2. Virtual Machine Contributor role access on the Subscription or Resource groups
which have the Compute resources of the SAP systems. You can skip this step if
you have already configured Backup for your VMs and HANA DB using Azure
Backup Center. You will be able to monitor Backup of your SAP system from the
VIS.
Important
After you finish configuring Backup from the VIS experience, it's recommended that you remove the role access assigned to the Azure Workloads Connector Service first-party app, because that access is no longer needed for monitoring backup status from the VIS.
For HANA database backup, ensure the prerequisites required by Azure Backup are
in place.
For HANA database backup, create an HDB Userstore key that will be used for preparing the HANA DB for configuring Backup. For a highly available (HA) HANA database, the Userstore key should be created in both the Primary and Secondary databases.
Note
If you are configuring backup for HANA database from the Virtual Instance for SAP
solutions resource, you can skip running the Backup pre-registration script. Azure
Center for SAP solutions runs this script before configuring HANA backup.
Select a Backup policy that is to be used for backing up Central service, App
server and Database VMs.
Select Include database servers for virtual machine backup if you want to
have Azure VM backup configured for database VMs. If this is not selected,
only Central service and App server VMs will have VM backup configured.
If you choose to include database VMs for backup, then you can decide if
all disks associated to the VM must be backed up or OS disk only.
8. For Database Backup, select an existing Recovery Services vault or Create new.
Important
If you are configuring backup for a HSR enabled HANA database, then you
must ensure the HANA DB user store key is available on both primary and
secondary databases.
10. If SSL enforce is enabled for the HANA database, provide the key store path, trust store path, SSL hostname, and crypto provider details.
Note
If you're configuring backup for an HSR-enabled HANA database from the Virtual Instance for SAP solutions resource, the Backup pre-registration script is run on both the Primary and Secondary HANA VMs. This is in line with the Azure Backup configuration process for HSR-enabled HANA databases, which ensures that the Azure Backup service can connect to any new primary node automatically, without manual intervention. Learn more.
Next steps
Monitor SAP system from the Azure portal
Get quality checks and insights for a VIS resource
Start and Stop SAP systems
View Cost Analysis of SAP system
Monitor SAP system from Azure portal
Article • 05/15/2023
In this how-to guide, you'll learn how to monitor the health and status of your SAP
system with Azure Center for SAP solutions through the Azure portal. The following
capabilities are available for your Virtual Instance for SAP solutions resource:
Monitor your SAP system, along with its instances and VMs.
Analyze important SAP infrastructure metrics.
Create and/or register an instance of Azure Monitor for SAP solutions to monitor
SAP platform metrics.
System health
The health of an SAP system within Azure Center for SAP solutions is based on the status of its underlying instances. Health is also determined by the collective impact of these instances on the performance of the SAP system.
System status
The status of an SAP system within Azure Center for SAP solutions indicates the current
state of the system.
Instance properties
When you check the health or status of your SAP system in the Azure portal, the results
for each instance are listed and color-coded.
Green: Running
Yellow: Unavailable
Red: Unavailable
Gray: Unavailable
Example scenarios
The following are different scenarios with the corresponding status and health values.
Application instance state ASCS instance state System status System health
For ASCS and application server instances, the following color-coding applies:
Note
After creating your Virtual Instance for SAP solutions (VIS), you might need to wait 2-5 minutes to see health and status information.
The average latency to get health and status information is about 30 seconds.
2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.
3. On the service's page, select Virtual Instances for SAP solutions in the sidebar
menu.
4. On the page for the VIS, review the table of instances. There is an overview of
health and status information for each VIS.
5. Select the VIS you want to check.
6. On the Overview page for the VIS resource, select the Properties tab.
7. On the properties page for the VIS, review the SAP status section to see the health
of SAP instances. Review the Virtual machines section to see the health of VMs
inside the VIS.
2. In the sidebar menu, under SAP resources, select Central service instances.
2. In the sidebar menu, under SAP resources, select App server instances.
2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.
3. On the service's page, select SAP Virtual Instances in the sidebar menu.
4. On the page for the VIS, select the VIS from the table.
5. On the overview page for the VIS, select the Monitoring tab.
7. Select any of the monitoring charts to do more in-depth analysis with Azure
Monitor metrics explorer.
2. In the search bar, enter SAP on Azure , then select Azure Center for SAP solutions
in the results.
3. On the service's page, select SAP Virtual Instances in the sidebar menu.
4. On the page for the VIS, select the VIS from the table.
5. In the sidebar menu for the VIS, under Monitoring, select Azure Monitor for SAP
solutions.
6. Select whether you want to create a new Azure Monitor for SAP solutions instance, or register an existing Azure Monitor for SAP solutions instance. If you don't see this option, you've already configured this setting.
7. After you create or register your Azure Monitor for SAP solutions instance, you are
redirected to the Azure Monitor for SAP solutions instance.
1. On the Create new Azure Monitor for SAP solutions resource page, select the
Basics tab.
b. For Azure Monitor for SAP solutions resource group, select the same resource
group as the VIS.
Important
If you select a resource group that's different from the resource group of the
VIS, the deployment fails.
3. Under Azure Monitor for SAP solutions instance details, configure your Azure
Monitor for SAP solutions instance.
a. For Resource name, enter a name for your Azure Monitor for SAP solutions
resource.
c. For Route All, choose to enable or disable the option. When you enable this
setting, all outbound traffic from the app is affected by your networking
configuration.
Note
You can only view and select the current version of Azure Monitor for SAP solutions
resources. Azure Monitor for SAP solutions (classic) resources aren't available.
Unregister Azure Monitor for SAP solutions from VIS
Note
This operation only unregisters the Azure Monitor for SAP solutions resource from
the VIS. To delete the Azure Monitor for SAP solutions resource, you need to delete
the Azure Monitor for SAP solutions instance.
To remove the link between your Azure Monitor for SAP solutions resource and your
VIS:
2. In the sidebar menu, under Monitoring, select Azure Monitor for SAP solutions.
3. On the Azure Monitor for SAP solutions page, select Delete to unregister the
resource.
4. Wait for the confirmation message, Azure Monitor for SAP solutions has been
unregistered successfully.
Solution:
1. If the SAP Central services VM is not running, then bring up the virtual machine
and SAP services on the VM. Once this is done, wait for a few minutes and check if
the Health and Status shows up on the VIS resource.
2. Navigate to the SAP Central Services VM on Azure Portal and check if the status of
Microsoft.Workloads.MonitoringExtension on the Extensions + applications tab
shows Provisioning Succeeded. If not, raise a support ticket.
3. Navigate to the VIS resource and go to the Managed Resource Group from the
Essentials section on Overview. Check if a Storage Account exists in this resource
group. If it exists, then check if your virtual network allows connectivity from the
SAP central services VM to this storage account. Enable connectivity if needed. If
the storage account doesn't exist, then you will have to delete the VIS resource
and register the system again.
4. Check if the SAP central services VM system assigned managed identity has the
‘Storage Blob Data Owner’ access on the managed resource group of the VIS. If
not, provide the necessary access. If the system assigned managed identity doesn't
exist, then you will have to delete the VIS and re-register the system.
5. Ensure the sapstartsrv process for the SAP instance and SAP Hostctrl is running on the Central Services VM.
6. If everything mentioned above is in place, then log a support ticket.
Next steps
Get quality checks and insights for your VIS
az workloads sap-virtual-instance
Preview Reference
Note
This reference is part of the workloads extension for the Azure CLI (version 2.55.0
or higher). The extension will automatically install the first time you run an az
workloads sap-virtual-instance command. Learn more about extensions.
Commands
| Name | Description | Type | Status |
| --- | --- | --- | --- |
| az workloads sap-virtual-instance create | Create a Virtual Instance for SAP solutions (VIS) resource. | Extension | Preview |
| az workloads sap-virtual-instance delete | Delete a Virtual Instance for SAP solutions resource and its child resources, that is the associated Central Services Instance, Application Server Instances and Database Instance. | Extension | Preview |
| az workloads sap-virtual-instance list | List all Virtual Instances for SAP solutions resources in a Resource Group. | Extension | Preview |
| az workloads sap-virtual-instance show | Show a Virtual Instance for SAP solutions resource. | Extension | Preview |
| az workloads sap-virtual-instance start | Starts the SAP application, that is the Central Services instance and Application server instances. | Extension | Preview |
| az workloads sap-virtual-instance stop | Stops the SAP Application, that is the Application server instances and Central Services instance. | Extension | Preview |
| az workloads sap-virtual-instance update | Update a Virtual Instance for SAP solutions (VIS) resource. | Extension | Preview |
| az workloads sap-virtual-instance wait | Place the CLI in a waiting state until a condition is met. | Extension | Preview |
Azure CLI
Examples
Deploy infrastructure for a three-tier distributed SAP system. See sample json payload
here: https://go.microsoft.com/fwlink/?linkid=2230236
Azure CLI
az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> --environment NonProd --sap-product s4hana --configuration <payload-file-path> --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"
Install SAP software on the infrastructure deployed for the three-tier distributed SAP
system. See sample json payload here: https://go.microsoft.com/fwlink/?linkid=2230167
Azure CLI
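The install step reuses the create command shown above, pointed at the software-installation payload. A plausible sketch, in which every angle-bracket value is a placeholder, is:

```shell
az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> \
    --environment NonProd --sap-product s4hana \
    --configuration <install-payload-file-path> \
    --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"
```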
Deploy infrastructure for a three-tier distributed Highly Available (HA) SAP system with
customized resource naming. See sample json payload here:
https://go.microsoft.com/fwlink/?linkid=2230402
Azure CLI
Install SAP software on the infrastructure deployed for the three-tier distributed Highly
Available (HA) SAP system with customized resource naming. See sample json payload
here: https://go.microsoft.com/fwlink/?linkid=2230340
Azure CLI
Register an existing SAP system as a Virtual Instance for SAP solutions resource (VIS)
Azure CLI
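A plausible registration sketch, built from the documented --central-server-vm parameter (angle-bracket values are placeholders, and the environment and product values are illustrative):

```shell
az workloads sap-virtual-instance create -g <resource-group-name> -n <vis-name> \
    --environment NonProd --sap-product s4hana \
    --central-server-vm <central-services-vm-resource-id> \
    --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}"
```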
Azure CLI
Deploy infrastructure for a three-tier distributed Highly Available (HA) SAP system with
Azure Compute Gallery Image. See sample json payload here:
https://go.microsoft.com/fwlink/?linkid=2263420
Azure CLI
Required Parameters
--name --sap-virtual-instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
Optional Parameters
--central-server-vm
--environment
--identity
A pre-created user assigned identity with appropriate roles assigned. To learn more on identity and roles required, visit the ACSS how-to guide. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
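The shorthand value for --identity can be assembled from the managed identity's Azure resource ID, matching the form the create examples on this page pass in. A minimal sketch, with a hypothetical subscription ID and resource names:

```shell
# Hypothetical managed identity resource ID; substitute your own values.
MI_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/contoso-mi"
# Shorthand value in the form the create examples pass to --identity.
IDENTITY="{type:UserAssigned,userAssignedIdentities:{${MI_ID}:{}}}"
echo "$IDENTITY"
```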
--location -l
--managed-resources-network-access-type --mrg-network-access-typ
Specifies the network access configuration for the resources that will be deployed in
the Managed Resource Group. The options to choose from are Public and Private. If
'Private' is chosen, the Storage Account service tag should be enabled on the
subnets in which the SAP VMs exist. This is required for establishing connectivity
between VM extensions and the managed resource group storage account. This
setting is currently applicable only to Storage Account. Learn more here
https://go.microsoft.com/fwlink/?linkid=2247228 .
accepted values: Private, Public
default value: Public
--managed-rg-name
--managed-rg-sa-name
The custom storage account name for the storage account created by the service in
the managed resource group created as part of VIS deployment.
--no-wait
--sap-product
--tags
Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show
more.
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Increase logging verbosity. Use --debug for full debug logs.
Delete a Virtual Instance for SAP solutions resource and its child resources, that is the
associated Central Services Instance, Application Server Instances and Database
Instance.
Azure CLI
Examples
Delete a Virtual Instance for SAP solutions (VIS)
Azure CLI
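A minimal invocation, using placeholder names, might look like:

```shell
az workloads sap-virtual-instance delete -g <resource-group-name> -n <vis-name>
```

The documented --yes flag can be appended to skip the confirmation prompt.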
Remove a Virtual Instance for SAP solutions (VIS) using the Azure resource ID of the VIS
Azure CLI
Optional Parameters
--ids
--name --sap-virtual-instance-name -n
--no-wait
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
--subscription
--yes -y
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
List all Virtual Instances for SAP solutions resources in a Resource Group.
Azure CLI
Examples
Get a list of the Virtual Instance(s) for SAP solutions (VIS)
Azure CLI
az workloads sap-virtual-instance list -g <resource-group-name>
Required Parameters
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
Optional Parameters
--max-items
Total number of items to return in the command's output. If the total number of
items available is more than the value specified, a token is provided in the
command's output. To resume pagination, provide the token value in --next-token
argument of a subsequent command.
--next-token
Token to specify where to start paginating. This is the token value from a previously
truncated response.
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Azure CLI
Examples
Get an overview of any Virtual Instance(s) for SAP solutions (VIS)
Azure CLI
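With placeholder names, this might take the form:

```shell
az workloads sap-virtual-instance show -g <resource-group-name> -n <vis-name>
```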
Get an overview of the Virtual Instance(s) for SAP solutions (VIS) using the Azure
resource ID of the VIS
Azure CLI
Optional Parameters
--ids
--name --sap-virtual-instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
--subscription
Global Parameters
--debug
--help -h
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Starts the SAP application, that is the Central Services instance and Application server
instances.
Azure CLI
Examples
Start an SAP system: This command starts the SAP application tier, that is ASCS instance
and App servers of the system.
Azure CLI
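A plausible form of the basic start command, with placeholder names:

```shell
az workloads sap-virtual-instance start -g <resource-group-name> -n <vis-name>
```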
Start an SAP system using the Azure resource ID of the Virtual instance for SAP solutions
(VIS): This command starts the SAP application tier, that is ASCS instance and App
servers of the system.
Azure CLI
Start an SAP system with Virtual Machines: This command starts the SAP application tier,
that is ASCS instance and App servers of the system with Virtual Machines.
Azure CLI
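Using the documented --start-vm flag, a sketch of this scenario (placeholder names) is:

```shell
az workloads sap-virtual-instance start -g <resource-group-name> -n <vis-name> --start-vm true
```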
Optional Parameters
--ids
--no-wait
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--sap-virtual-instance-name --vis-name
--start-vm
The boolean value indicates whether to start the virtual machines before starting the
SAP instances.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
default value: False
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
JMESPath query string. See http://jmespath.org/ for more information and
examples.
--subscription
--verbose
Stops the SAP Application, that is the Application server instances and Central Services
instance.
Azure CLI
Examples
Stop an SAP system: This command stops the SAP application tier, that is ASCS instance
and App servers of the system.
Azure CLI
az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name>
Stop an SAP system using the Azure resource ID of the Virtual instance for SAP solutions
(VIS): This command stops the SAP application tier, that is ASCS instance and App
servers of the system.
Azure CLI
Stop an SAP system with Virtual Machines: This command stops the SAP application tier,
that is ASCS instance and App servers of the system with Virtual Machines.
Azure CLI
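With the documented --deallocate-vm flag, a sketch of this scenario (placeholder names) is:

```shell
az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name> --deallocate-vm true
```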
Soft Stop an SAP system: This command soft stops the SAP application tier, that is ASCS
instance and App servers of the system.
Azure CLI
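A plausible soft-stop sketch, using the documented --soft-stop-timeout-seconds parameter; the 300-second value here is only illustrative:

```shell
az workloads sap-virtual-instance stop -g <resource-group-name> -n <vis-name> --soft-stop-timeout-seconds 300
```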
Optional Parameters
--deallocate-vm
The boolean value indicates whether to Stop and deallocate the virtual machines
along with the SAP instances.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
default value: False
--ids
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
--sap-virtual-instance-name --vis-name
--soft-stop-timeout-seconds
This parameter defines how long (in seconds) the soft shutdown waits until the
RFC/HTTP clients no longer consider the server for calls with load balancing. Value 0
means that the kernel does not wait, but goes directly into the next shutdown state,
i.e. hard stop.
default value: 0
--subscription
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Azure CLI
Examples
Add tags for an existing Virtual Instance for SAP solutions (VIS) resource
Azure CLI
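Using the documented --tags parameter, a sketch with placeholder names and hypothetical tag values:

```shell
az workloads sap-virtual-instance update -g <resource-group-name> -n <vis-name> --tags env=prod owner=sap-team
```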
Add tags for an existing Virtual Instance for SAP solutions (VIS) resource using the Azure
resource ID of the VIS
Azure CLI
Add/Change Identity and Managed Resource Network Access for an existing Virtual
Instance for SAP Solutions (VIS) resource
Azure CLI
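A sketch combining the documented --identity and --managed-resources-network-access-type parameters (all angle-bracket values are placeholders):

```shell
az workloads sap-virtual-instance update -g <resource-group-name> -n <vis-name> \
    --identity "{type:UserAssigned,userAssignedIdentities:{<managed-identity-resource-id>:{}}}" \
    --managed-resources-network-access-type Private
```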
Optional Parameters
--add
Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>.
--configuration
Defines if the SAP system is being created using Azure Center for SAP solutions (ACSS) or if an existing SAP system is being registered with ACSS. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
--force-string
When using 'set' or 'add', preserve string literals instead of attempting to convert to
JSON.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
--identity
--ids
--managed-resource-group-configuration --mrg-config
--managed-resources-network-access-type --mrg-network-access-typ
Specifies the network access configuration for the resources that will be deployed in
the Managed Resource Group. The options to choose from are Public and Private. If
'Private' is chosen, the Storage Account service tag should be enabled on the
subnets in which the SAP VMs exist. This is required for establishing connectivity
between VM extensions and the managed resource group storage account. This
setting is currently applicable only to Storage Account. Learn more here
https://go.microsoft.com/fwlink/?linkid=2247228 .
accepted values: Private, Public
--name --sap-virtual-instance-name -n
--no-wait
Do not wait for the long-running operation to finish.
accepted values: 0, 1, f, false, n, no, t, true, y, yes
--remove
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
--set
Update an object by specifying a property path and value to set. Example: --set
property1.property2=.
--subscription
--tags
Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show
more.
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
--verbose
Azure CLI
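Using the documented condition flags, a plausible wait invocation (placeholder names) is:

```shell
az workloads sap-virtual-instance wait -g <resource-group-name> -n <vis-name> --created
```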
Optional Parameters
--created
--custom
--deleted
--exists
--ids
--interval
--name --sap-virtual-instance-name -n
--resource-group -g
Name of resource group. You can configure the default group using az configure --
defaults group=<name> .
--subscription
Name or ID of subscription. You can configure the default subscription using az
account set -s NAME_OR_ID .
--timeout
--updated
Global Parameters
--debug
--help -h
--only-show-errors
--output -o
Output format.
accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query
--subscription
New-AzWorkloadsSapVirtualInstance
Module: Az.Workloads
Syntax
PowerShell
New-AzWorkloadsSapVirtualInstance
-Name <String>
-ResourceGroupName <String>
[-SubscriptionId <String>]
-Environment <SapEnvironmentType>
-Location <String>
-SapProduct <SapProductType>
-CentralServerVmId <String>
[-ManagedRgStorageAccountName <String>]
[-IdentityType <ManagedServiceIdentityType>]
[-ManagedResourceGroupName <String>]
[-Tag <Hashtable>]
[-UserAssignedIdentity <Hashtable>]
[-DefaultProfile <PSObject>]
[-AsJob]
[-NoWait]
[-WhatIf]
[-Confirm]
[<CommonParameters>]
PowerShell
New-AzWorkloadsSapVirtualInstance
-Name <String>
-ResourceGroupName <String>
[-SubscriptionId <String>]
-Environment <SapEnvironmentType>
-Location <String>
-SapProduct <SapProductType>
[-IdentityType <ManagedServiceIdentityType>]
[-ManagedResourceGroupName <String>]
[-Tag <Hashtable>]
[-UserAssignedIdentity <Hashtable>]
-Configuration <String>
[-DefaultProfile <PSObject>]
[-AsJob]
[-NoWait]
[-WhatIf]
[-Confirm]
[<CommonParameters>]
Description
Creates a Virtual Instance for SAP solutions (VIS) resource
Examples
In this example, you deploy the infrastructure for a three-tier distributed SAP system. A sample JSON payload is linked here: https://go.microsoft.com/fwlink/?linkid=2230236
In this example, you install the SAP software on the deployed infrastructure for a three-tier Non-High Availability distributed SAP system. A sample JSON payload is linked here: https://go.microsoft.com/fwlink/?linkid=2230167
In this example, you deploy the infrastructure for a three-tier distributed Highly Available (HA) SAP system.
In this example, you install the SAP software on the deployed infrastructure for a three-tier distributed Highly Available SAP system with Transport directory and customized resource naming.
-AsJob
Type: SwitchParameter
Position: Named
Required: False
-CentralServerVmId
Type: String
Position: Named
Required: True
-Configuration
Type: String
Position: Named
Default value: None
Required: True
-Confirm
Type: SwitchParameter
Aliases: cf
Position: Named
Required: False
-DefaultProfile
The credentials, account, tenant, and subscription used for communication with
Azure.
Type: PSObject
Position: Named
Required: False
-Environment
Type: SapEnvironmentType
Position: Named
Required: True
-IdentityType
Type: ManagedServiceIdentityType
Position: Named
Required: False
-Location
Type: String
Position: Named
-ManagedResourceGroupName
Type: String
Position: Named
Required: False
-ManagedRgStorageAccountName
The custom storage account name for the storage account created by the service in
the managed resource group created as part of VIS deployment.
If not provided, the service will create the storage account with a random name
Type: String
Position: Named
Required: False
-Name
The name of the Virtual Instances for SAP solutions resource
Type: String
Aliases: SapVirtualInstanceName
Position: Named
Required: True
-NoWait
Type: SwitchParameter
Position: Named
Required: False
-ResourceGroupName
Type: String
Position: Named
-SapProduct
Type: SapProductType
Position: Named
Required: True
-SubscriptionId
Type: String
Position: Named
Required: False
-Tag
Resource tags.
Type: Hashtable
Position: Named
Required: False
-UserAssignedIdentity
Type: Hashtable
Position: Named
Required: False
-WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Type: SwitchParameter
Aliases: wi
Position: Named
Required: False
SAP Virtual Instances
Reference
Service: Workloads
API Version: 2023-10-01-preview
Operations
| Operation | Description |
| --- | --- |
| Delete | Deletes a Virtual Instance for SAP solutions resource and its child resources, that is the associated Central Services Instance, Application Server Instances and Database Instance. |
| List By Resource Group | Gets all Virtual Instances for SAP solutions resources in a Resource Group. |
| List By Subscription | Gets all Virtual Instances for SAP solutions resources in a Subscription. |
| Start | Starts the SAP application, that is the Central Services instance and Application server instances. |
| Stop | Stops the SAP Application, that is the Application server instances and Central Services instance. |
The dependency between the control plane and the application plane is illustrated in
the following diagram. In a typical deployment, a single control plane is used to manage
multiple SAP deployments.
You use the control plane of SAP Deployment Automation Framework to deploy the SAP infrastructure and the SAP application. The deployment uses Terraform templates to create the infrastructure as a service (IaaS)-defined infrastructure to host the SAP applications.
Note
This automation framework is based on Microsoft best practices and principles for
SAP on Azure. To understand how to use certified virtual machines (VMs) and
storage solutions for stability, reliability, and performance, see Get started with
SAP automation framework on Azure.
This automation framework also follows the Microsoft Cloud Adoption Framework
for Azure.
You can use the automation framework to deploy the following SAP architectures:
Standalone: For this architecture, all the SAP roles are installed on a single server.
Distributed: With this architecture, you can separate the database server and the
application tier. The application tier can further be separated in two by having SAP
central services on a VM and one or more application servers.
Distributed (highly available): This architecture is similar to the distributed
architecture. In this deployment, the database and/or SAP central services can both
be configured by using a highly available configuration that uses two VMs, each
with Pacemaker clusters.
The control plane is typically a regional resource deployed into the hub subscription in a
hub-and-spoke architecture.
The following diagram shows the key components of the control plane and the
workload zone.
The application configuration is performed from the deployment agents in the control plane by using a set of predefined playbooks.
For more information about how to configure and deploy the control plane, see
Configure the control plane and Deploy the control plane.
Deployer VMs
These VMs are used to run the orchestration scripts that deploy the Azure resources by
using Terraform. They're also Ansible controllers and are used to execute the Ansible
playbooks on all the managed nodes, that is, the VMs of an SAP deployment.
You would typically create a workload zone for each unique Azure Virtual network
(VNet) that you want to deploy the SAP systems into.
The SAP workload zone provides the following services to the SAP systems:
Virtual network
Azure Key Vault for system credentials (VMs and SAP accounts)
Shared storage (optional)
For more information about how to configure and deploy the SAP workload zone, see
Configure the workload zone and Deploy the SAP workload zone.
The SAP system deployment consists of the VMs and the associated resources required
to run the SAP application, including the web, app, and database tiers.
For more information about how to configure and deploy the SAP system, see Configure
the SAP system and Deploy the SAP system.
The software acquisition process uses an SAP application manifest file, a YAML file that lists the SAP software to be downloaded.
As part of the download process, the application manifest and the supporting templates
are also persisted in the storage account. The application manifest and the dependent
manifests are aggregated into a single manifest file that is used by the installation
process.
Glossary
The following terms are important concepts for understanding the automation
framework.
SAP concepts
System: An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the SID.
The following diagram shows the relationships between SAP systems, workload zones
(environments), and landscapes. In this example setup, the customer has three SAP
landscapes: ECC, CRM, and BW. Each landscape contains three workload zones:
production, quality assurance, and development. Each workload zone contains one or
more systems.
Deployment components
Library: Provides storage for the Terraform state files and the SAP installation media. Scope: region.
Workload zone: Contains the virtual network for the SAP systems and a key vault that holds the system credentials. Scope: workload zone.
System: The deployment unit for the SAP application (SID). Contains all infrastructure assets. Scope: workload zone.
Next steps
Get started with the deployment automation framework
SAP Deployment Automation Framework supports deployment of all the supported SAP
on Azure topologies.
Control plane
The deployer virtual machine of the control plane must be deployed on Linux because
the Ansible controllers only work on Linux.
SAP infrastructure
The automation framework supports deployment of the SAP on Azure infrastructure on either Linux or Windows virtual machines on x86-64 (x64) hardware.
Database versions:
DB2: 11.5
Premium_SSD
Premium_SSDv2
Azure Files NFS: For shared files, not for database files
Encryption using Azure Disk Encryption with customer-managed keys is supported.
Distributed: Separate database server and application tier. The application tier can be further split by having SAP central services on one VM and one or more application servers on another.
Distributed (HA): Database and/or SAP Central Services are deployed highly available by using Pacemaker.
You can also deploy the automation framework to a standalone server by specifying a
configuration without an application tier.
Green-field deployments
In a green-field deployment, the automation framework creates all the required
resources.
In this scenario, you provide the relevant data (address spaces for networks and
subnets) when you configure the environment. For more examples, see Configure the
workload zone.
Brown-field deployments
In a brown-field deployment, you can use existing Azure resources as part of the
deployment.
In this scenario, you provide the Azure resource identifiers for the existing resources
when you configure the environment. For more examples, see Configure the workload
zone.
Next step
Get started with the automation framework
Plan your deployment of the SAP
automation framework
Article • 03/11/2024
There are multiple considerations for planning SAP deployments using the SAP Deployment Automation Framework. These include subscription planning, credentials management, and virtual network design.
For generic SAP on Azure design considerations, see Introduction to an SAP adoption
scenario.
Subscription planning
You should deploy the control plane and the workload zones in different subscriptions.
The control plane should reside in a hub subscription that is used to host the
management components of the SAP automation framework.
The SAP systems should be hosted in spoke subscriptions, which are dedicated to the
SAP systems. An example of partitioning the systems would be to host the development
systems in a separate subscription with a dedicated virtual network and the production
systems would be hosted in their own subscription with a dedicated virtual network.
This approach provides both a security boundary and a clear separation of duties and responsibilities. For example, the SAP Basis team can deploy systems into the workload zones, and the infrastructure team can manage the control plane.
Before you design your control plane, consider the following questions:
Control plane
The control plane provides the following services:
The control plane is defined by using two configuration files, one for the deployer and
one for the SAP Library.
The deployment configuration file defines the region, environment name, and virtual
network information. For example:
tfvars
management_network_address_space = "10.170.20.0/24"
management_subnet_address_prefix = "10.170.20.64/28"
firewall_deployment = true
management_firewall_subnet_address_prefix = "10.170.20.0/26"
bastion_deployment = true
management_bastion_subnet_address_prefix = "10.170.20.128/26"
use_webapp = true
webapp_subnet_address_prefix = "10.170.20.192/27"
deployer_assign_subscription_permissions = true
deployer_count = 2
use_service_endpoint = false
use_private_endpoint = false
public_network_access_enabled = true
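The subnet prefixes in a configuration like this must fall inside the virtual network's address space and must not overlap each other. A quick sanity check for the example values above (plain Python, not part of the framework) can be written with the standard `ipaddress` module:

```python
import ipaddress

# Address values taken from the example deployer configuration above.
vnet = ipaddress.ip_network("10.170.20.0/24")
subnets = {
    "management": ipaddress.ip_network("10.170.20.64/28"),
    "firewall": ipaddress.ip_network("10.170.20.0/26"),
    "bastion": ipaddress.ip_network("10.170.20.128/26"),
    "webapp": ipaddress.ip_network("10.170.20.192/27"),
}

# Every subnet must be contained in the virtual network's address space...
for name, net in subnets.items():
    assert net.subnet_of(vnet), f"{name} is outside the VNet"

# ...and no two subnets may overlap.
nets = list(subnets.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
```

Running the check before a deployment catches address-plan mistakes earlier than a failed Terraform apply would.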
DNS considerations
When you plan the DNS configuration for the automation framework, consider the
following questions:
Is there an existing private DNS that the solutions can integrate with or do you
need to use a custom private DNS zone for the deployment environment?
Are you going to use predefined IP addresses for the virtual machines or let Azure
assign them dynamically?
You can integrate with an existing private DNS zone by providing the following values in
your tfvars files:
tfvars
management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
#management_dns_resourcegroup_name = "RESOURCEGROUPNAME"
use_custom_dns_a_registration = false
Without these values, a private DNS zone is created in the SAP library resource group.
For more information, see the in-depth explanation of how to configure the deployer.
SAP library configuration
The SAP library resource group provides storage for SAP installation media, Bill of
Material files, Terraform state files, and, optionally, the private DNS zones. The
configuration file defines the region and environment name for the SAP library. For
parameter information and examples, see Configure the SAP library for automation.
The workload zone provides the following shared services for the SAP applications:
Azure Virtual Network, for virtual networks, subnets, and network security groups.
Azure Key Vault, for storing the virtual machine and SAP system credentials.
Azure Storage accounts for boot diagnostics and Cloud Witness.
Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.
Before you design your workload zone layout, consider the following questions:
For example, DEV-WEEU-SAP01-INFRASTRUCTURE is for a development environment hosted in the West Europe region by using the SAP01 virtual network. PRD-WEEU-SAP02-INFRASTRUCTURE is for a production environment hosted in the West Europe region by using the SAP02 virtual network.
The SAP01 and SAP02 designations define the logical names for the Azure virtual
networks. They can be used to further partition the environments. Suppose you need
two Azure virtual networks for the same workload zone. For example, you might have a
multi-subscription scenario where you host development environments in two
subscriptions. You can use the different logical names for each virtual network. For
example, you can use DEV-WEEU-SAP01-INFRASTRUCTURE and DEV-WEEU-SAP02-
INFRASTRUCTURE .
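A workload zone such as DEV-WEEU-SAP01-INFRASTRUCTURE is described by a small tfvars file. The following sketch is illustrative only; the parameter names are assumptions modeled on the deployer example shown earlier, not verified framework parameters:

```tfvars
# Hypothetical workload zone configuration for DEV-WEEU-SAP01-INFRASTRUCTURE.
environment           = "DEV"
location              = "westeurope"
network_logical_name  = "SAP01"
network_address_space = "10.110.0.0/16"
```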
For more information, see Configure a workload zone deployment for automation.
Windows-based deployments
When you perform Windows-based deployments, the virtual machines in the workload
zone's virtual network need to be able to communicate with Active Directory to join the
SAP virtual machines to the Active Directory domain. The provided DNS name needs to
be resolvable by Active Directory.
DNS settings
For high-availability scenarios, a DNS record is needed in the Active Directory for the
SAP central services cluster. The DNS record needs to be created in the Active Directory
DNS zone. The DNS record name is defined as [sid]scs[scs instance number]cl1 . For example, w01scs00cl1 is used for the cluster, with W01 for the SID and 00 for the instance number.
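The naming pattern can be sketched as a small helper (illustrative only; it lowercases the SID and zero-pads the instance number to two digits, per the example above):

```python
def scs_cluster_dns_name(sid: str, instance_number: int) -> str:
    # [sid]scs[scs instance number]cl1, lowercased, two-digit instance number.
    return f"{sid.lower()}scs{instance_number:02d}cl1"
```

For SID W01 and instance number 00, the helper produces w01scs00cl1, matching the example.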
Credentials management
The automation framework uses service principals for infrastructure deployment. We
recommend using different deployment credentials (service principals) for each
workload zone. The framework stores these credentials in the deployer's key vault. Then,
the framework retrieves these credentials dynamically during the deployment process.
Azure CLI
az ad sp create-for-rbac \
  --role="Contributor" \
  --scopes="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --name="DEV-Deployment-Account"
3. Note the output. You need the application identifier ( appId ), password ( password ),
and tenant identifier ( tenant ) for the next step. For example:
JSON
{
"appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"displayName": "DEV-Deployment-Account",
"name": "http://DEV-Deployment-Account",
"password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
4. Assign the User Access Administrator role to your service principal. For example:
Azure CLI
az role assignment create \
  --assignee <appId> \
  --role "User Access Administrator" \
  --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
For more information, see the Azure CLI documentation for creating a service principal.
Important
If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions by using the automation.
Permissions management
In a locked-down environment, you might need to assign another permission to the
service principals. For example, you might need to assign the User Access Administrator
role to the service principal.
Required permissions
The following table shows the required permissions for the service principals.
Workload zone SPN: Private DNS Zone Contributor role on the private DNS zone.
Azure CLI: Used when installing the Azure CLI and during deployments (setup of the deployer). The firewall requirements for the Azure CLI installation are defined in Installing Azure CLI.
The following example shows how to test the connectivity to the URLs by using an
interactive PowerShell script.
PowerShell
# Create a local SDAF working directory if one doesn't already exist.
$sdaf_path = Get-Location
if ( $PSVersionTable.Platform -eq "Unix") {
    if ( -Not (Test-Path "SDAF") ) {
        $sdaf_path = New-Item -Path "SDAF" -Type Directory
    }
}
else {
    $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
    if ( -not (Test-Path $sdaf_path)) {
        New-Item -Path $sdaf_path -Type Directory
    }
}

# Change to the folder that contains the deployment scripts. This assumes the
# sap-automation repository has already been cloned to the current location.
cd sap-automation/deploy/scripts
DevOps structure
The deployment framework uses three separate repositories for the deployment
artifacts. For your own parameter files, it's a best practice to keep these files in a source
control repository that you manage.
Main repository
This repository contains the Terraform parameter files and the files needed for the
Ansible playbooks for all the workload zone and system deployments.
You can create this repository by cloning the SAP Deployment Automation Framework
bootstrap repository into your source control repository.
Important
This repository must be the default repository for your Azure DevOps project.
Folder structure
The following sample folder hierarchy shows how to structure your configuration files
along with the automation framework files.
DEPLOYER: Configuration files for the deployer. A folder with deployer configuration files for all deployments that the environment manages. Name each subfolder by the naming convention of Environment - Region - Virtual Network. For example, PROD-WEEU-DEP00-INFRASTRUCTURE.
LIBRARY: Configuration files for the SAP library. A folder with SAP library configuration files for all deployments that the environment manages. Name each subfolder by the naming convention of Environment - Region - Virtual Network. For example, PROD-WEEU-SAP-LIBRARY.
LANDSCAPE: Configuration files for workload zones. A folder with configuration files for all workload zones that the environment manages. Name each subfolder by the naming convention Environment - Region - Virtual Network. For example, PROD-WEEU-SAP00-INFRASTRUCTURE.
SYSTEM: Configuration files for the SAP systems. A folder with configuration files for all SAP System Identification (SID) deployments that the environment manages. Name each subfolder by the naming convention Environment - Region - Virtual Network - SID. For example, PROD-WEEU-SAP00-ABC.
Your parameter file's name becomes the name of the Terraform state file. Make sure to
use a unique parameter file name for this reason.
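The folder naming convention above can be sketched as a small, hypothetical helper (not part of the framework; it covers the Environment - Region - Virtual Network [- SID] cases, with INFRASTRUCTURE as the default artifact):

```python
def config_folder(environment: str, region: str, vnet: str, sid: str = "") -> str:
    # Environment - Region - Virtual Network [- SID]. SYSTEM folders append the
    # SID; DEPLOYER and LANDSCAPE folders end in the artifact name INFRASTRUCTURE.
    artifact = sid if sid else "INFRASTRUCTURE"
    return f"{environment}-{region}-{vnet}-{artifact}"
```

For example, the LANDSCAPE folder PROD-WEEU-SAP00-INFRASTRUCTURE and the SYSTEM folder PROD-WEEU-SAP00-ABC both follow this pattern.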
Code repository
This repository contains the Terraform automation templates, the Ansible playbooks,
and the deployment pipelines and scripts. For most use cases, consider this repository
as read-only and don't modify it.
Sample repository
This repository contains the sample Bill of Materials files and the sample Terraform
configuration files.
To create this repository, clone the SAP Deployment Automation Framework samples
repository into your source control repository.
Azure regions
Before you deploy a solution, it's important to consider which Azure regions to use.
Different Azure regions might be in scope depending on your specific scenario.
The automation framework supports deployments into multiple Azure regions. Each
region hosts:
Deployment environments
If you're supporting multiple workload zones in a region, use a unique identifier for your
deployment environment and SAP library. Don't use the identifier for the workload zone.
For example, use MGMT for management purposes.
The automation framework also supports placing the deployment environment and SAP library in a separate subscription from the workload zones.
The deployment configuration file defines the region, environment name, and virtual
network information. For example:
Terraform
deployer_enable_public_ip = false
firewall_deployment = true
bastion_deployment = true
For more information, see the in-depth explanation of how to configure the deployer.
You create or grant access to the following services in each workload zone:
Azure Virtual Networks, for virtual networks, subnets, and network security groups.
Azure Key Vault, for system credentials and the deployment service principal.
Azure Storage accounts, for boot diagnostics and Cloud Witness.
Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.
Before you design your workload zone layout, consider the following questions:
For more information, see Configure a workload zone deployment for automation.
SAP system setup
The SAP system contains all Azure components required to host the SAP application.
Before you configure the SAP system, consider the following questions:
For more information, see Configure the SAP system for automation.
Deployment flow
When you plan a deployment, it's important to consider the overall flow. There are three
main steps of an SAP deployment on Azure with the automation framework.
1. Deploy the control plane. This step deploys components to support the SAP
automation framework in a specified Azure region.
a. Create the deployment environment.
b. Create shared storage for Terraform state files.
c. Create shared storage for SAP installation media.
2. Deploy the workload zone. This step deploys the workload zone components, such
as the virtual network and key vaults.
3. Deploy the system. This step includes the infrastructure for the SAP system
deployment and the SAP configuration and SAP installation.
Naming conventions
The automation framework uses a default naming convention. If you want to use a
custom naming convention, plan and define your custom names before deployment. For
more information, see Configure the naming convention.
Disk sizing
If you want to configure custom disk sizes, make sure to plan your custom setup before
deployment.
Next step
Manual deployment of the automation framework
Naming conventions for SAP
Deployment Automation Framework
Article • 12/12/2023
Deploy the SAP virtual network infrastructure into any supported Azure region.
Do multiple deployments with partitioned virtual networks.
Deploy the SAP system into any SAP workload zone.
Run regular and high availability instances.
Do disaster recovery and fall forward behavior.
Review the standard terms, area paths, and variable names before you begin your
deployment. If necessary, you can also configure custom naming.
Placeholder values
The naming convention's example formats use the following placeholder values.
{VM_NAME}: VM name
{SUBNET}: Subnet
{DIAG}: 5-character limit
{RND}: 3-character limit
{USER}: 12-character limit
{COMPUTER_NAME}: 14-character limit
Deployer names
For an explanation of the Format column, see the definitions for placeholder values.
User-defined route: {remote_vnet}_Hub-udr
Availability set (AV set): {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi-avset
Network interface component (80-character limit): {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##-nic
VM: {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##
OS disk: {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}_iscsi##-OsDisk
Computer name: {ENVIRONMENT}_{REGION_MAP}{SAP_VNET}{region_map}iscsi##
VM: {PREFIX}_{COMPUTER-NAME}
Note
Disk numbering starts at zero. The naming convention uses a two-character format;
for example, 00 .
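The two-character, zero-based numbering can be sketched as:

```python
def disk_number(index: int) -> str:
    # Disk numbering starts at zero and uses a two-character, zero-padded format.
    return f"{index:02d}"
```

So the first disk is numbered 00, the second 01, and so on.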
Azure region names
The automation framework uses short forms of Azure region names. The short Azure
region names are mapped to the normal region names.
You can set the mapping under the variable _region_mapping in the name generator's
configuration file, ../../../deploy/terraform/terraform-
units/modules/sap_namegenerator/variables_local.tf .
Then, you can use the _region_mapping variable elsewhere, such as an area path. The
format for an area path is {ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}-{ARTIFACT} where:
INFRASTRUCTURE .
"${upper(var.__environment)}-${upper(element(split(",",
lookup(var.__region_mapping, var.__region,
"-,unknown")),1))}-${upper(var.__SAP_VNET)}-INFRASTRUCTURE"
Next steps
Learn about configuring the custom naming module
Configure custom naming for the
automation framework
Article • 09/03/2023
SAP Deployment Automation Framework uses a standard naming convention for Azure
resource naming.
The Terraform module sap_namegenerator defines the names of all resources that the
automation framework deploys. The module is located at /deploy/terraform/terraform-
units/modules/sap_namegenerator/ in the repository. The framework also supports
providing your own names for some of the resources by using the parameter files.
If these capabilities aren't enough, you can also use custom naming logic by either
providing a custom JSON file that contains the resource names or by modifying the
naming module used by the automation.
The JSON file has sections for the different resource types.
JSON
"availabilityset_names" : {
"app": "app-avset",
"db" : "db-avset",
"scs": "scs-avset",
"web": "web-avset"
}
JSON
"keyvault_names": {
"DEPLOYER": {
"private_access": "DEVWEEUprvtABC",
"user_access": "DEVWEEUuserABC"
},
"SDU": {
"private_access": "DEVWEEUSAP01X00pABC",
"user_access": "DEVWEEUSAP01X00uABC"
},
"WORKLOAD_ZONE": {
"private_access": "DEVWEEUSAP01prvtABC",
"user_access": "DEVWEEUSAP01userABC"
}
}
The key vault names need to be unique across Azure. SAP Deployment Automation Framework appends three random characters (ABC in the example) at the end of the key vault name to reduce the likelihood of name conflicts.
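That pattern can be sketched as follows (the helper name and name composition are illustrative assumptions; the points of interest are the three-character random suffix and the 24-character Azure Key Vault name limit):

```python
import random
import string

def keyvault_name(environment: str, region_map: str, vnet: str, marker: str) -> str:
    # e.g. DEV + WEEU + SAP01 + prvt + three random characters -> DEVWEEUSAP01prvtXYZ
    suffix = "".join(random.choices(string.ascii_uppercase, k=3))
    name = f"{environment}{region_map}{vnet}{marker}{suffix}"
    if len(name) > 24:  # Azure Key Vault names are limited to 24 characters
        raise ValueError(f"key vault name too long: {name}")
    return name
```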
JSON
"storageaccount_names": {
"DEPLOYER": "devweeudiagabc",
"LIBRARY": {
"library_storageaccount_name": "devweeusaplibabc",
"terraformstate_storageaccount_name": "devweeutfstateabc"
},
"SDU": "devweeusap01diagabc",
"WORKLOAD_ZONE": {
"landscape_shared_transport_storage_account_name":
"devweeusap01sharedabc",
"landscape_storageaccount_name": "devweeusap01diagabc",
"witness_storageaccount_name": "devweeusap01witnessabc"
}
}
The storage account names need to be unique across Azure. SAP Deployment Automation Framework appends three random characters (abc in the example) at the end of the storage account name to reduce the likelihood of name conflicts.
The following example lists the virtual machine names for a deployment in the DEV
environment in West Europe. The deployment has a database server, two application
servers, a central services server, and a web dispatcher.
JSON
"virtualmachine_names": {
"ANCHOR_COMPUTERNAME": [],
"ANCHOR_SECONDARY_DNSNAME": [],
"ANCHOR_VMNAME": [],
"ANYDB_COMPUTERNAME": [
"x00db00l0abc"
],
"ANYDB_SECONDARY_DNSNAME": [
"x00dhdb00l0abc",
"x00dhdb00l1abc"
],
"ANYDB_VMNAME": [
"x00db00l0abc"
],
"APP_COMPUTERNAME": [
"x00app00labc",
"x00app01labc"
],
"APP_SECONDARY_DNSNAME": [
"x00app00labc",
"x00app01labc"
],
"APP_VMNAME": [
"x00app00labc",
"x00app01labc"
],
"DEPLOYER": [
"devweeudeploy00"
],
"HANA_COMPUTERNAME": [
"x00dhdb00l0af"
],
"HANA_SECONDARY_DNSNAME": [
"x00dhdb00l0abc"
],
"HANA_VMNAME": [
"x00dhdb00l0abc"
],
"ISCSI_COMPUTERNAME": [
"devsap01weeuiscsi00"
],
"OBSERVER_COMPUTERNAME": [
"x00observer00labc"
],
"OBSERVER_VMNAME": [
"x00observer00labc"
],
"SCS_COMPUTERNAME": [
"x00scs00labc"
],
"SCS_SECONDARY_DNSNAME": [
"x00scs00labc"
],
"SCS_VMNAME": [
"x00scs00labc"
],
"WEB_COMPUTERNAME": [
"x00web00labc"
],
"WEB_SECONDARY_DNSNAME": [
"x00web00labc"
],
"WEB_VMNAME": [
"x00web00labc"
]
}
The different resource names are identified by prefixes in the Terraform code:
SAP deployer deployments use resource names with the prefix deployer_ .
SAP library deployments use resource names with the prefix library .
SAP landscape deployments use resource names with the prefix vnet_ .
SAP system deployments use resource names with the prefix sdu_ .
The calculated names are returned in a data dictionary, which is used by all the
Terraform modules.
Prefix ( custom_prefix ): Used as the prefix for all the resources in the resource group.
Terraform
module "sap_namegenerator" {
source = "../../terraform-units/modules/sap_namegenerator"
environment = local.infrastructure.environment
location = local.infrastructure.region
codename = lower(try(local.infrastructure.codename, ""))
random_id = module.common_infrastructure.random_id
sap_vnet_name = local.vnet_logical_name
sap_sid = local.sap_sid
db_sid = local.db_sid
app_ostype = try(local.application.os.os_type, "LINUX")
anchor_ostype = upper(try(local.anchor_vms.os.os_type, "LINUX"))
db_ostype = try(local.databases[0].os.os_type, "LINUX")
db_server_count = var.database_server_count
app_server_count = try(local.application.application_server_count, 0)
web_server_count = try(local.application.webdispatcher_count, 0)
scs_server_count = local.application.scs_high_availability ? 2 *
local.application.scs_server_count : local.application.scs_server_count
app_zones = local.app_zones
scs_zones = local.scs_zones
web_zones = local.web_zones
db_zones = local.db_zones
resource_offset = try(var.options.resource_offset, 0)
custom_prefix = var.custom_prefix
}
Next, you need to point your other Terraform module files to your custom naming
module. These module files include:
deploy\terraform\run\sap_system\module.tf
deploy\terraform\bootstrap\sap_deployer\module.tf
deploy\terraform\bootstrap\sap_library\module.tf
deploy\terraform\run\sap_library\module.tf
deploy\terraform\run\sap_deployer\module.tf
For each file, change the source for the module sap_namegenerator to point to your new
naming module's location. For example:
Terraform
locals {
Note
Only change the map values. Don't change the map key, which the Terraform code
uses. For example, if you want to rename the administrator network interface
component, change "admin-nic" = "-admin-nic" to "admin-nic" = "yourNICname" .
Terraform
variable resource_suffixes {
type = map(string)
description = "Extension of resource name"
default = {
"admin_nic" = "-admin-nic"
"admin_subnet" = "admin-subnet"
"admin_subnet_nsg" = "adminSubnet-nsg"
"app_alb" = "app-alb"
"app_avset" = "app-avset"
"app_subnet" = "app-subnet"
"app_subnet_nsg" = "appSubnet-nsg"
"db_alb" = "db-alb"
"db_alb_bepool" = "dbAlb-bePool"
"db_alb_feip" = "dbAlb-feip"
"db_alb_hp" = "dbAlb-hp"
"db_alb_rule" = "dbAlb-rule_"
"db_avset" = "db-avset"
"db_nic" = "-db-nic"
"db_subnet" = "db-subnet"
"db_subnet_nsg" = "dbSubnet-nsg"
"deployer_rg" = "-INFRASTRUCTURE"
"deployer_state" = "_DEPLOYER.terraform.tfstate"
"deployer_subnet" = "_deployment-subnet"
"deployer_subnet_nsg" = "_deployment-nsg"
"iscsi_subnet" = "iscsi-subnet"
"iscsi_subnet_nsg" = "iscsiSubnet-nsg"
"library_rg" = "-SAP_LIBRARY"
"library_state" = "_SAP-LIBRARY.terraform.tfstate"
"kv" = ""
"msi" = "-msi"
"nic" = "-nic"
"osdisk" = "-OsDisk"
"pip" = "-pip"
"ppg" = "-ppg"
"sapbits" = "sapbits"
"storage_nic" = "-storage-nic"
"storage_subnet" = "_storage-subnet"
"storage_subnet_nsg" = "_storageSubnet-nsg"
"scs_alb" = "scs-alb"
"scs_alb_bepool" = "scsAlb-bePool"
"scs_alb_feip" = "scsAlb-feip"
"scs_alb_hp" = "scsAlb-hp"
"scs_alb_rule" = "scsAlb-rule_"
"scs_avset" = "scs-avset"
"scs_ers_feip" = "scsErs-feip"
"scs_ers_hp" = "scsErs-hp"
"scs_ers_rule" = "scsErs-rule_"
"scs_scs_rule" = "scsScs-rule_"
"sdu_rg" = ""
"tfstate" = "tfstate"
"vm" = ""
"vnet" = "-vnet"
"vnet_rg" = "-INFRASTRUCTURE"
"web_alb" = "web-alb"
"web_alb_bepool" = "webAlb-bePool"
"web_alb_feip" = "webAlb-feip"
"web_alb_hp" = "webAlb-hp"
"web_alb_inrule" = "webAlb-inRule"
"web_avset" = "web-avset"
"web_subnet" = "web-subnet"
"web_subnet_nsg" = "webSubnet-nsg"
}
}
Next step
Learn about naming conventions
Use SAP Deployment Automation
Framework from Azure DevOps Services
Article • 11/29/2023
Azure DevOps streamlines the deployment process by providing pipelines that you can
run to perform the infrastructure deployment and the configuration and SAP installation
activities.
You can use Azure Repos to store your configuration files and use Azure Pipelines to
deploy and configure the infrastructure and the SAP application.
Open PowerShell ISE and copy the following script and update the parameters to match
your environment.
PowerShell
$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_CONTROL_PLANE_CODE = "MGMT"
$Env:SDAF_WORKLOAD_ZONE_CODE = "DEV"
$Env:SDAF_ControlPlaneSubscriptionID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
else {
$Env:SDAF_APP_NAME = Read-Host "Please provide the Application
registration name"
}
if ( Test-Path "New-SDAFDevopsProject.ps1") {
remove-item .\New-SDAFDevopsProject.ps1
}
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsProject.ps1 -OutFile .\New-SDAFDevopsProject.ps1 ; .\New-SDAFDevopsProject.ps1
Run the script and follow the instructions. The script opens browser windows for
authentication and for performing tasks in the Azure DevOps project.
You can choose to either run the code directly from GitHub or you can import a copy of
the code into your Azure DevOps project.
To confirm that the project was created, go to the Azure DevOps portal and select the
project. Ensure that the repo was populated and that the pipelines were created.
Important
Run the following steps on your local workstation. Also ensure that you have the
latest Azure CLI installed by running the az upgrade command.
Open PowerShell ISE and copy the following script and update the parameters to match
your environment.
PowerShell
$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
if ( Test-Path "New-SDAFDevopsWorkloadZone.ps1") {
remove-item .\New-SDAFDevopsWorkloadZone.ps1
}
Open Azure DevOps and create a new project by selecting New Project and entering
the project details. The project contains the Azure Repos source control repository and
Azure Pipelines for performing deployment activities.
If you don't see New Project, ensure that you have permissions to create new projects in
the organization.
If you're unable to import a repository, you can create the repository manually. Then
you can import the content from the SAP Deployment Automation Framework GitHub
Bootstrap repository to it.
To create the workspaces repository, in the Repos section, under Project settings, select
Create.
Choose the repository, enter Git, and provide a name for the repository. For example,
use SAP Configuration Repository.
To clone the repository to a local folder, on the Repos section of the portal, under Files,
select Clone. For more information, see Clone a repository.
Manually import the repository content by using a local
clone
You can also manually download the content from the SAP Deployment Automation
Framework repository and add it to your local clone of the Azure DevOps repository.
Copy the content from the .zip file to the root folder of your local clone.
Open the local folder in Visual Studio Code. You should see that changes need to be
synchronized by the indicator by the source control icon shown here.
Select the source control icon and provide a message about the change. For example,
enter Import from GitHub and select Ctrl+Enter to commit the changes. Next, select
Sync Changes to synchronize the changes back to the repository.
If you want to run the SAP Deployment Automation Framework code from the local
Azure DevOps project, you need to create a separate code repository and a
configuration repository in the Azure DevOps project:
Name of the configuration repository: Same as the Azure DevOps project name. The source is https://github.com/Azure/sap-automation-bootstrap.git.
To pull the code from GitHub, you need a GitHub service connection. For more
information, see Manage service connections.
To create the service connection, go to Project Settings and under the Pipelines section,
go to Service connections.
Select GitHub as the service connection type. Select Azure Pipelines in the OAuth
Configuration dropdown.
Enter a service connection name, for instance, SDAF Connection to GitHub. Ensure that
the Grant access permission to all pipelines checkbox is selected. Select Save to save
the service connection.
Windows
PowerShell
echo $TF_VAR_app_registration_app_id
del manifest.json
Save the app registration ID and password values for later use.
Branch: main
Path: pipelines/01-deploy-control-plane.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Control plane deployment.
Branch: main
Path: pipelines/02-sap-workload-zone.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP workload zone deployment.
SAP system deployment pipeline
Create the SAP system deployment pipeline. Under the Pipelines section, select New
Pipeline. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings:
Branch: main
Path: pipelines/03-sap-system-deployment.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP system deployment (infrastructure).
Branch: main
Path: deploy/pipelines/04-sap-software-download.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP software acquisition.
SAP configuration and software installation
pipeline
Create the SAP configuration and software installation pipeline. Under the Pipelines
section, select New Pipeline. Select Azure Repos Git as the source for your code.
Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline
with the following settings:
Branch: main
Path: pipelines/05-DB-and-SAP-installation.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as SAP configuration and software installation.
Branch: main
Path: pipelines/10-remover-terraform.yml
Branch: main
Path: pipelines/12-remove-control-plane.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Control plane removal.
Branch: main
Path: pipelines/11-remover-arm-fallback.yml
Name: Deployment removal using Azure Resource Manager
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Deployment removal using ARM processor.
7 Note
Only use this pipeline as a last resort. Removing just the resource groups leaves
remnants that might complicate redeployments.
Branch: main
Path: pipelines/20-update-ado-repository.yml
Save the pipeline. To see Save, select the chevron next to Run. Go to the Pipelines
section and select the pipeline. Choose Rename/Move from the ellipsis menu on the
right and rename the pipeline as Repository updater.
This pipeline should be used when there's an update in the sap-automation repository
that you want to use.
2. Sign in with the user account you plan to use in your Azure DevOps
organization.
3. From your home page, open your user settings and select Personal access tokens.
Common variables
Common variables are used by all the deployment pipelines. They're stored in a variable
group called SDAF-General .
Create a new variable group named SDAF-General by using the Library page in the
Pipelines section. Add the following variables:
Branch: main
Alternatively, you can use the Azure DevOps CLI to set up the groups.
Bash
az devops login
Environment-specific variables
Because each environment might have different deployment credentials, you need to
create a variable group per environment. For example, use SDAF-MGMT , SDAF-DEV , and
SDAF-QA .
Create a new variable group named SDAF-MGMT for the control plane environment by
using the Library page in the Pipelines section. Add the following variables:
sap_fqdn: The SAP fully qualified domain name, for example, sap.contoso.net. Only
needed if Private DNS isn't used.
POOL: <Agent Pool name>. The agent pool to use for this environment.
SDAF_GENERAL_GROUP_ID: The group ID for the SDAF-General group. The ID can be
retrieved from the URL parameter variableGroupId when accessing the variable group by
using a browser. For example: variableGroupId=8.
WORKLOADZONE_PIPELINE_ID: The ID for the SAP workload zone deployment pipeline.
The ID can be retrieved from the URL parameter definitionId from the pipeline page in
Azure DevOps. For example: definitionId=31.
SYSTEM_PIPELINE_ID: The ID for the SAP system deployment (infrastructure) pipeline.
The ID can be retrieved from the URL parameter definitionId from the pipeline page in
Azure DevOps. For example: definitionId=32.
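The group and pipeline IDs above are read off Azure DevOps URLs. As an illustration only, a small helper (the organization and project names in the URLs are hypothetical) can extract a query parameter such as variableGroupId or definitionId:

```python
from urllib.parse import urlparse, parse_qs

def extract_id(url: str, param: str) -> str:
    """Return the value of a query parameter, e.g. variableGroupId or definitionId."""
    values = parse_qs(urlparse(url).query).get(param, [])
    if not values:
        raise KeyError(f"{param} not found in URL")
    return values[0]

# Hypothetical Azure DevOps URLs:
group_url = "https://dev.azure.com/contoso/SDAF/_library?itemType=VariableGroups&variableGroupId=8"
pipeline_url = "https://dev.azure.com/contoso/SDAF/_build?definitionId=31"

print(extract_id(group_url, "variableGroupId"))  # 8
print(extract_id(pipeline_url, "definitionId"))  # 31
```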
When you use the web app, ensure that the Build Service has at least Contribute
permissions.
You can use the clone functionality to create the next environment variable group.
APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID,
WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-
MGMT group.
Create a service connection
To remove the Azure resources, you need an Azure Resource Manager service
connection. For more information, see Manage service connections.
To create the service connection, go to Project Settings. Under the Pipelines section,
select Service connections.
Select Azure Resource Manager as the service connection type and Service principal
(manual) as the authentication method. Enter the target subscription, which is typically
the control plane subscription. Enter the service principal details. Select Verify to
validate the credentials. For more information on how to create a service principal, see
Create a service principal.
Enter a Service connection name, for instance, use Connection to MGMT subscription .
Ensure that the Grant access permission to all pipelines checkbox is selected. Select
Verify and save to save the service connection.
Permissions
Most of the pipelines add files to the Azure Repos and therefore require pull
permissions. On Project Settings, under the Repositories section, select the Security tab
of the source code repository and assign Contribute permissions to the Build Service .
Select the Control plane deployment pipeline and enter the configuration names for
the deployer and the SAP library. Select Run to deploy the control plane. Make sure to
select the Deploy the configuration web application checkbox if you want to set up the
configuration web app.
8. From the list of secrets, select the secret that ends with -sshkey.
Bash
mkdir -p ~/Azure_SAP_Automated_Deployment
cd ~/Azure_SAP_Automated_Deployment
# Assumes the sap-automation repository has already been cloned into this folder
cd sap-automation/deploy/scripts
./configure_deployer.sh
Reboot the deployer, reconnect, and run the following script to set up the Azure DevOps
agent:
Bash
cd ~/Azure_SAP_Automated_Deployment/
$DEPLOYMENT_REPO_PATH/deploy/scripts/setup_ado.sh
Accept the license and, when you're prompted for the server URL, enter the URL you
captured when you created the Azure DevOps project. For authentication, select PAT
and enter the token value from the previous step.
When prompted, enter the agent pool name that you created in the previous step.
Accept the default agent name and the default work folder name. The agent is now
configured and starts.
Deploy the control plane web application
Selecting the deploy the web app infrastructure parameter when you run the control
plane deployment pipeline provisions the infrastructure necessary for hosting the web
app. The Deploy web app pipeline publishes the application's software to that
infrastructure.
Wait for the deployment to finish. Select the Extensions tab and follow the instructions
to finalize the configuration. Update the reply-url values for the app registration.
Running the control plane pipeline stores part of the web app URL in a variable named
WEBAPP_URL_BASE in your environment-specific variable group. At any time, you can
update the URLs of the registered application web app by using the following command.
PowerShell
$webapp_url_base="<WEBAPP_URL_BASE>"
az ad app update --id $TF_VAR_app_registration_app_id `
  --web-home-page-url https://${webapp_url_base}.azurewebsites.net `
  --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback
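The command above builds the home-page and redirect URIs from WEBAPP_URL_BASE. A minimal sketch of that construction (the base name used here is a hypothetical example):

```python
def webapp_urls(webapp_url_base: str):
    """Build the home-page URL and redirect URIs for the web app registration."""
    home = f"https://{webapp_url_base}.azurewebsites.net"
    # The second redirect URI is the Microsoft Entra ID sign-in callback.
    return home, [home + "/", home + "/.auth/login/aad/callback"]

home, redirects = webapp_urls("mgmt-weeu-sapdeployment")
print(home)  # https://mgmt-weeu-sapdeployment.azurewebsites.net
print(redirects)
```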
You also need to grant reader permissions to the app service system-assigned managed
identity. Go to the app service resource. On the left side, select Identity. On the System
assigned tab, select Azure role assignments > Add role assignment. Select
Subscription as the scope and Reader as the role. Then select Save. Without this step,
the web app dropdown functionality will not work.
You should now be able to visit the web app and use it to deploy SAP workload zones
and SAP system infrastructure.
Next step
Azure DevOps hands-on lab
Configure the control plane
Article • 12/12/2023
The control plane for SAP Deployment Automation Framework consists of the following
components:
Deployer
SAP Library
Deployer
The deployer is the execution engine of SAP Deployment Automation Framework. It's a
preconfigured virtual machine (VM) that's used for running Terraform and Ansible
commands. When you use Azure DevOps, the deployer is a self-hosted agent.
If you want to use an existing resource group for the deployer, provide the Azure
resource ID for the resource group by using the resource_group_arm_id parameter in the
deployer's tfvars file. If the parameter isn't defined, the resource group is created with
the default naming. You can change the default name by using the resource_group_name
parameter.
Terraform parameters
This table shows the Terraform parameters. These parameters need to be entered
manually if you aren't using the deployment scripts.
tfstate_resource_id: The Azure resource identifier for the storage account in the SAP
library that contains the Terraform state files. Required.
Environment parameters
This table shows the parameters that define the resource naming.
Resource group
This table shows the parameters that define the resource group.
Network parameters
The automation framework can either create the virtual network and the subnets (green
field), use an existing virtual network and existing subnets (brown field), or use a
combination of green field and brown field:
Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.
The recommended CIDR of the virtual network address space is /27, which allows space
for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to
include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range
of /26.
The recommended CIDR value for the management subnet is /28, which allows 16 IP
addresses. The recommended CIDR value for the firewall subnet is /26, which allows 64
IP addresses.
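The address counts above follow directly from the prefix length. A quick sanity check of those numbers:

```python
def addresses_in(prefix_length: int) -> int:
    """Number of IPv4 addresses in a CIDR block with the given prefix length."""
    return 2 ** (32 - prefix_length)

print(addresses_in(27))  # 32 - recommended virtual network address space
print(addresses_in(28))  # 16 - recommended management subnet
print(addresses_in(26))  # 64 - minimum range required by Azure Firewall
```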
Note
When you use an existing subnet for the web app, the subnet must be empty, in
the same region as the resource group being deployed, and delegated to
Microsoft.Web/serverFarms.
Terraform
xxx_vm_image = {
os_type = ""
source_image_id = ""
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
type = "marketplace"
}
Authentication parameters
This section defines the parameters used for defining the VM authentication.
DNS support
use_custom_dns_a_registration: Uses an external system for DNS; set to false for
Azure native. Optional.
Other parameters
webapp_client_secret: The client secret for the web app registration. Optional. Persisted
in Key Vault.
deployer_enable_public_ip=false
firewall_deployment=true
bastion_deployment=true
SAP library
The SAP library provides the persistent storage of the Terraform state files and the
downloaded SAP installation media for the control plane.
The configuration of the SAP library is performed in a Terraform tfvars variable file.
If you want to use an existing resource group for the SAP library, provide the Azure
resource ID for the resource group by using the resource_group_arm_id parameter in the
library's tfvars file. If the parameter isn't defined, the resource group is created with
the default naming. You can change the default name by using the resource_group_name
parameter.
Terraform parameters
This table shows the Terraform parameters. These parameters need to be entered
manually if you aren't using the deployment scripts or Azure Pipelines.
Environment parameters
This table shows the parameters that define the resource naming.
environment: Identifier for the control plane (maximum of five characters). Mandatory.
For example, PROD for a production environment and NP for a nonproduction
environment.
Resource group
This table shows the parameters that define the resource group.
DNS support
Next step
Configure SAP system
Workload zone configuration in the SAP
automation framework
Article • 03/05/2024
An SAP application typically has multiple development tiers. For example, you might
have development, quality assurance, and production tiers. SAP Deployment
Automation Framework calls these tiers workload zones. See the following diagram for
an example of a workload zone with two SAP systems.
The workload zone provides shared services to all of the SAP Systems in the workload
zone. These shared services include:
The workload zone is typically deployed in a spoke subscription, and the deployment of
all the artifacts in the workload zone is done by using a unique service principal.
Workload zone deployment configuration
The configuration of the SAP workload zone is done via a Terraform tfvars variable file.
You can find examples of the variable file in the samples/WORKSPACES/LANDSCAPE folder.
The following sections show the different sections of the variable file.
Environment parameters
This table contains the parameters that define the environment settings.
environment: Identifier for the workload zone (maximum of five characters). Mandatory.
For example, PROD for a production environment and QA for a quality assurance
environment.
Network parameters
The automation framework can either create the virtual network and the subnets (green
field), use an existing virtual network and existing subnets (brown field), or use a
combination of green field and brown field:
Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.
Ensure that the virtual network address space is large enough to host all the resources.
app_subnet_address_prefix: The address range for the app subnet. Mandatory for
green-field deployments.
web_subnet_address_prefix: The address range for the web subnet. Mandatory for
green-field deployments.
This table contains the networking parameters if Azure NetApp Files is used.
anf_subnet_address_prefix: The address range for the ANF subnet. Required when using
ANF for deployments.
This table contains the networking parameters if iSCSI devices are hosted from this
workload zone.
This table contains the networking parameters if Azure Monitor for SAP is hosted from
this workload zone.
ams_subnet_address_prefix: The address range for the AMS subnet. Mandatory for
green-field deployments.
Terraform
network_logical_name = "SAP01"
network_address_space = "10.110.0.0/16"
db_subnet_address_prefix = "10.110.96.0/19"
app_subnet_address_prefix = "10.110.32.0/19"
Authentication parameters
This table defines the credentials used for defining the virtual machine authentication.
Terraform
automation_username = "azureadm"
Private DNS
dns_resource_group_name: The name of the resource group that contains the private
DNS zone. Optional.
NFS support
Terraform
NFS_provider = "AFS"
use_private_endpoint = true
Terraform
NFS_provider = "ANF"
anf_subnet_address_prefix = "10.110.64.0/27"
ANF_service_level = "Ultra"
DNS support
Other parameters
iSCSI parameters
Utility VM parameters
ams_laws_arm_id: Defines the ARM resource ID for the Log Analytics workspace.
Optional.
Terraform parameters
This table contains the Terraform parameters. These parameters need to be entered
manually if you're not using the deployment scripts.
tfstate_resource_id: The Azure resource identifier for the storage account in the SAP
library that contains the Terraform state files. Required.
deployer_tfstate_key: The name of the state file for the deployer. Required.
Next step
About SAP system deployment in automation framework
Configure SAP system parameters
Article • 03/10/2024
Green-field scenario: The automation defines default names for resources, but
some resource names might be defined in the tfvars file.
Brown-field scenario: The Azure resource identifiers for the resources must be
specified.
Deployment topologies
You can use the automation framework to deploy the following SAP architectures:
Standalone
Distributed
Distributed (highly available)
Standalone
In the standalone architecture, all the SAP roles are installed on a single server.
To configure this topology, define the database tier values and set
enable_app_tier_deployment to false.
Distributed
The distributed architecture has a separate database server and application tier. The
application tier can further be separated by having SAP central services on a virtual
machine and one or more application servers.
To configure this topology, define the database tier values and define scs_server_count
= 1, application_server_count >= 1.
High availability
The distributed (highly available) deployment is similar to the distributed architecture. In
this deployment, the database and/or SAP central services can be configured in a highly
available configuration that uses two virtual machines, in either a Pacemaker cluster
(Linux) or a Windows failover cluster.
To configure this topology, define the database tier values and set
database_high_availability to true. Set scs_server_count = 1 and
scs_high_availability = true and application_server_count >= 1.
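The three topologies are selected by a handful of tfvars values. As a sketch (this helper is illustrative, not part of the framework; the parameter names are the ones described above), the decision logic can be summarized as:

```python
def classify_topology(enable_app_tier_deployment=True,
                      scs_server_count=0,
                      application_server_count=0,
                      database_high_availability=False,
                      scs_high_availability=False):
    """Classify an SDAF deployment by the tfvars flags described above."""
    if not enable_app_tier_deployment:
        return "standalone"
    if database_high_availability or scs_high_availability:
        return "distributed (highly available)"
    if scs_server_count >= 1 and application_server_count >= 1:
        return "distributed"
    return "unknown"

print(classify_topology(enable_app_tier_deployment=False))          # standalone
print(classify_topology(scs_server_count=1, application_server_count=2))  # distributed
```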
Environment parameters
This section contains the parameters that define the environment settings.
Infrastructure parameters
This section contains the parameters related to the Azure infrastructure.
The resource_offset parameter controls the naming of resources. For example, if you
set the resource_offset to 1, the first disk will be named disk1 . The default value is 0.
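A sketch of how resource_offset shifts the numbering (the disk names here are illustrative, not the framework's full naming scheme):

```python
def disk_names(count: int, resource_offset: int = 0, prefix: str = "disk"):
    """With resource_offset = 1 the first disk is disk1; the default offset 0 yields disk0."""
    return [f"{prefix}{i + resource_offset}" for i in range(count)]

print(disk_names(3))                     # ['disk0', 'disk1', 'disk2']
print(disk_names(3, resource_offset=1))  # ['disk1', 'disk2', 'disk3']
```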
use_secondary_ips: Boolean flag that indicates if SAP should be installed by using virtual
hostnames. Optional.
HANA
DB2
ORACLE
ORACLE-ASM
ASE
SQLSERVER
The virtual machine and the operating system image are defined by using the following
structure:
Terraform
{
  os_type         = "linux"
  type            = "marketplace"
  source_image_id = ""
  publisher       = "SUSE"
  offer           = "sles-sap-15-sp3"
  sku             = "gen2"
  version         = "latest"
}
app_disk_sizes_filename: Defines the custom disk size file for the application tier
servers. Optional. See Custom sizing.
Network parameters
If the subnets aren't deployed using the workload zone deployment, they can be added
in the system's tfvars file.
The automation framework can either deploy the virtual network and the subnets
(green-field deployment) or use an existing virtual network and existing subnets (brown-
field deployments):
Green-field scenario: The virtual network address space and the subnet address
prefixes must be specified.
Brown-field scenario: The Azure resource identifier for the virtual network and the
subnets must be specified.
Ensure that the virtual network address space is large enough to host all the resources.
This section defines the parameters used for defining the key vault information.
The virtual machine and the operating system image are defined by using the following
structure:
Terraform
{
  os_type         = "linux"
  type            = "marketplace"
  source_image_id = ""
  publisher       = "SUSE"
  offer           = "sles-sap-15-sp5"
  sku             = "gen2"
  version         = "latest"
}
Authentication parameters
By default, the SAP system deployment uses the credentials from the SAP workload
zone. If the SAP system needs unique credentials, you can provide them by using these
parameters.
Miscellaneous parameters
license_type: Specifies the license type for the virtual machines. Possible values are
RHEL_BYOS and SLES_BYOS . For Windows, the possible values are None , Windows_Client ,
and Windows_Server .
NFS support
NFS_provider: Defines which NFS back end to use. The options are AFS for Azure Files
NFS or ANF for Azure NetApp Files. Optional.
sapmnt_volume_size: Defines the size (in GB) for the sapmnt volume. Optional.
Oracle parameters
These parameters need to be updated in the sap-parameters.yaml file when you deploy
Oracle-based systems.
oracle_sbp_patch: The Oracle SBP patch file name, for example,
SAP19P_2202-70004508.ZIP. Mandatory; must be part of the Bill of Materials.
You can use the configuration_settings variable to let Terraform add them to the sap-
parameters.yaml file.
Terraform
configuration_settings = {
  ora_release          = "19",
  ora_version          = "19.0.0",
  oracle_sbp_patch     = "SAP19P_2202-70004508.ZIP",
  oraclegrid_sbp_patch = "GIRU19P_2202-70004508.ZIP",
}
DNS support
ams_resource_id: Defines the ARM resource ID for Azure Monitor for SAP. Optional.
Other parameters
add_Agent_IP: Controls if the agent IP is added to the key vault and storage account
firewalls. Optional.
Terraform parameters
This section contains the Terraform parameters. These parameters need to be entered
manually if you're not using the deployment scripts.
tfstate_resource_id: The Azure resource identifier for the storage account in the SAP
library that will contain the Terraform state files. Required.
deployer_tfstate_key: The name of the state file for the deployer. Required.
landscaper_tfstate_key: The name of the state file for the workload zone. Required.
High-availability configuration
The high-availability configuration for the database tier and the SCS tier is configured by
using the database_high_availability and scs_high_availability flags. For Red Hat and
SUSE deployments, use the appropriate HA versions of the virtual machine images
(RHEL-SAP-HA, sles-sap-15-sp?).
Cluster parameters
This section contains the parameters related to the cluster configuration.
database_cluster_disk_lun: Specifies the LUN of the shared disk for the database
cluster. Optional.
database_cluster_disk_size: The size of the shared disk for the database cluster. Optional.
database_cluster_type: Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure
Shared Disk), or ISCSI. Optional.
idle_timeout_scs_ers: Sets the idle timeout setting for the SCS and ERS load balancer.
Optional.
scs_cluster_disk_lun: Specifies the LUN of the shared disk for the central services
cluster. Optional.
scs_cluster_disk_size: The size of the shared disk for the central services cluster.
Optional.
scs_cluster_type: Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure Shared
Disk), or ISCSI. Optional.
Note
The highly available central services deployment requires using a shared file system
for sap_mnt . You can use Azure Files or Azure NetApp Files by using the
NFS_provider attribute. The default is Azure Files. To use Azure NetApp Files, set
NFS_provider to ANF.
If you set the variable use_msi_for_clusters to true , the fencing agent uses managed
identities.
If you want to use a service principal for the fencing agent, set that variable to false.
The fencing agents should be configured to use a unique service principal with
permissions to stop and start virtual machines. For more information, see Create a
fencing agent.
Azure CLI
Replace <prefix> with the name prefix of your environment, such as DEV-WEEU-SAP01 .
Replace <subscriptionID> with the workload zone subscription ID.
Important
The name of the fencing agent service principal must be unique in the tenant. The
script assumes that a role Linux Fence Agent Role was already created.
Record the appId, password, and tenant values from the output.
The fencing agent details must be stored in the workload zone key vault by using a
predefined naming convention. Replace <prefix> with the name prefix of your
environment, such as DEV-WEEU-SAP01 . Replace <workload_kv_name> with the name of the
key vault from the workload zone resource group. For the other values, use the values
recorded from the previous step and run the script.
Azure CLI
Next steps
Deploy SAP system
Configure SAP installation parameters
Article • 08/29/2023
The Ansible playbooks use a combination of default parameters and parameters defined
by the Terraform deployment for the SAP installation.
Default parameters
The following tables contain the default parameters defined by the framework.
User IDs
This table contains the IDs for the SAP users and groups for the different platforms.
HANA
DB2
ORACLE
Windows parameters
This table contains the information pertinent to Windows deployments.
Parameters
The following tables contain the parameters stored in the sap-parameters.yaml file. Most
of the values are prepopulated via the Terraform deployment.
Infrastructure
sap_fqdn: The FQDN suffix for the virtual machines to be added to the local hosts file.
Required.
Application tier
web_sid: The SID for the web dispatcher. Required if web dispatchers are deployed.
Database tier
platform: The database platform. Valid values are ASE, DB2, HANA, ORACLE, and
SQLSERVER. Required.
NFS
NFS_provider: Defines which NFS back end to use. The options are AFS for Azure Files
NFS, ANF for Azure NetApp Files, NONE for NFS from the SCS server, or NFS for an
external NFS solution. Optional.
Windows support
domain: Defines the Windows domain NetBIOS name, for example, sap. Optional.
SQL
use_sql_for_SAP: Uses the SAP-defined SQL Server media; defaults to true. Optional.
Miscellaneous
kv_name: The name of the Azure key vault that contains the system credentials.
Required.
secret_prefix: The prefix for the name of the secrets for the SID stored in the key vault.
Required.
Disks
The disks parameter defines a dictionary with information about the disks of all the
virtual machines in the SAP application.
LUN: Defines the LUN number that the disk is attached to. Required.
type: This attribute is used to group the disks. Each disk of the same type is added to
the LVM on the virtual machine. Required.
YAML
disks:
- { host: 'rh8dxdb00l084', LUN: 0, type: 'sap' }
- { host: 'rh8dxdb00l084', LUN: 10, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 11, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 12, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 13, type: 'data' }
- { host: 'rh8dxdb00l084', LUN: 20, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 21, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 22, type: 'log' }
- { host: 'rh8dxdb00l084', LUN: 2, type: 'backup' }
- { host: 'rh8dxdb00l184', LUN: 0, type: 'sap' }
- { host: 'rh8dxdb00l184', LUN: 10, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 11, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 12, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 13, type: 'data' }
- { host: 'rh8dxdb00l184', LUN: 20, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 21, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 22, type: 'log' }
- { host: 'rh8dxdb00l184', LUN: 2, type: 'backup' }
- { host: 'rh8app00l84f', LUN: 0, type: 'sap' }
- { host: 'rh8app01l84f', LUN: 0, type: 'sap' }
- { host: 'rh8scs00l84f', LUN: 0, type: 'sap' }
- { host: 'rh8scs01l84f', LUN: 0, type: 'sap' }
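As noted above, disks of the same type on a host are grouped into one LVM volume. A sketch of that grouping over a few of the sample entries (illustrative only; the framework performs this in its Ansible roles):

```python
from collections import defaultdict

def group_for_lvm(disks):
    """Group LUNs per (host, type); each group becomes one LVM volume on that host."""
    groups = defaultdict(list)
    for d in disks:
        groups[(d["host"], d["type"])].append(d["LUN"])
    return dict(groups)

disks = [
    {"host": "rh8dxdb00l084", "LUN": 10, "type": "data"},
    {"host": "rh8dxdb00l084", "LUN": 11, "type": "data"},
    {"host": "rh8dxdb00l084", "LUN": 20, "type": "log"},
    {"host": "rh8app00l84f", "LUN": 0, "type": "sap"},
]
print(group_for_lvm(disks))
```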
Oracle support
From the v3.4 release, it's possible to deploy SAP on Azure systems in a shared home
configuration by using an Oracle database back end. For more information on running
SAP on Oracle in Azure, see Azure Virtual Machines Oracle DBMS deployment for SAP
workload.
To install the Oracle back end by using SAP Deployment Automation Framework, you
need to provide the following parameters:
YAML
MULTI_SIDS:
- {sid: 'DE1', dbsid_uid: '3005', sidadm_uid: '2001', ascs_inst_no: '00',
pas_inst_no: '00', app_inst_no: '00'}
- {sid: 'QE1', dbsid_uid: '3006', sidadm_uid: '2002', ascs_inst_no: '01',
pas_inst_no: '01', app_inst_no: '01'}
dbsid_uid: The UID for the DB admin user for the instance. Required.
sidadm_uid: The UID for the SID admin user for the instance. Required.
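Because the instances share the same servers, the SIDs and UIDs in MULTI_SIDS must not collide. A sketch of a uniqueness check (field names taken from the sample above; this helper is illustrative, not part of the framework):

```python
def check_multi_sids(entries):
    """Return True when every SID and every UID in the MULTI_SIDS list is unique."""
    sids = [e["sid"] for e in entries]
    uids = [e["dbsid_uid"] for e in entries] + [e["sidadm_uid"] for e in entries]
    return len(set(sids)) == len(sids) and len(set(uids)) == len(uids)

entries = [
    {"sid": "DE1", "dbsid_uid": "3005", "sidadm_uid": "2001"},
    {"sid": "QE1", "dbsid_uid": "3006", "sidadm_uid": "2002"},
]
print(check_multi_sids(entries))  # True
```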
For example, if you want to override the default value of the group ID for the sapinst
group ( sapinst_gid ) parameter, add the following line to the sap-parameters.yaml file:
YAML
sapinst_gid: 1000
If you want to provide them as parameters for the Ansible playbooks, add the following
parameter to the command line:
Bash
You can also override the default parameters by specifying them in the
configuration_settings variable in your tfvars file. For example, if you want to
override sapinst_gid , your tfvars file should contain the following line:
Terraform
configuration_settings = {
sapinst_gid = "1000"
}
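A sketch of the override precedence: entries in configuration_settings replace the framework defaults when sap-parameters.yaml is generated. The default values shown here are hypothetical; the real defaults live in the framework's Ansible roles.

```python
# Hypothetical framework defaults (illustrative only):
DEFAULTS = {"sapinst_gid": "2000", "sapsys_gid": "2001"}

def effective_parameters(defaults, overrides):
    """configuration_settings entries override the framework defaults."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

print(effective_parameters(DEFAULTS, {"sapinst_gid": "1000"}))
```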
Next step
Deploy the SAP system
Change the disk configuration for SAP
Deployment Automation Framework
Article • 08/29/2023
By default, SAP Deployment Automation Framework defines the disk configuration for
SAP systems. As needed, you can change the default configuration by providing a
custom disk configuration JSON file.
Tip
When possible, it's a best practice to increase the disk size instead of adding more
disks.
HANA databases
The table shows the default disk configuration for HANA systems.
AnyDB databases
The table shows the default disk configuration for AnyDB systems.
Default Standard_E4s_v3 P6 (64 GB) P15 (256 GB) P10 (128 GB)
200 GB Standard_E4s_v3 P6 (64 GB) P15 (256 GB) P10 (128 GB)
500 GB Standard_E8s_v3 P6 (64 GB) P20 (512 GB) P15 (256 GB)
These sections contain the information for the default virtual machine size and the list of
disks to be deployed for each tier.
Create a file by using the structure shown in the following code sample. Save the file in
the same folder as the parameter file for the system. For instance, use XO1_sizes.json .
Then define the parameter custom_disk_sizes_filename in the parameter file. For
example, use custom_disk_sizes_filename = "XO1_sizes.json" .
Tip
The path to the disk configuration needs to be relative to the folder that contains
the tfvars file.
The following sample code is an example configuration file. It defines three data disks
(LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU), and a backup disk (LUN 13).
The application tier servers (application, central services, and web dispatchers) are
deployed with just a single sap data disk.
The three data disks are striped by using LVM. The log disk and the backup disk are
each mounted as a single disk.
JSON
{
"db" : {
"Default": {
"compute": {
"vm_size" : "Standard_E20ds_v4",
"swap_size_gb" : 2
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "data",
"count" : 3,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
},
{
"name" : "log",
"count" : 1,
"disk_type" : "UltraSSD_LRS",
"size_gb": 512,
"disk-iops-read-write" : 2048,
"disk-mbps-read-write" : 8,
"caching" : "None",
"write_accelerator" : false,
"lun_start" : 9
},
{
"name" : "backup",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 13
}
]
}
},
"app" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}
]
}
},
"scs" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}
]
}
},
"web" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3"
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "sap",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
}
]
}
}
}
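Before you point custom_disk_sizes_filename at a file like the one above, a quick shell check can catch obvious mistakes. This sketch writes an abbreviated configuration (a small subset of the sample, with most attributes omitted) and inspects it with standard tools:

```shell
# Abbreviated disk configuration; only a subset of the attributes shown above.
cat > XO1_db_sizes.json <<'EOF'
{
  "db": {
    "Default": {
      "storage": [
        { "name": "data",   "count": 3, "disk_type": "Premium_LRS",  "size_gb": 256 },
        { "name": "log",    "count": 1, "disk_type": "UltraSSD_LRS", "size_gb": 512 },
        { "name": "backup", "count": 1, "disk_type": "Premium_LRS",  "size_gb": 256 }
      ]
    }
  }
}
EOF
# Count the disk definitions and list the SKUs in use.
disk_defs=$(grep -c '"name"' XO1_db_sizes.json)
echo "disk definitions: ${disk_defs}"
grep -o '"disk_type": "[^"]*"' XO1_db_sizes.json | sort -u
```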
You can add disks to an existing system by including an extra disk block with the
attribute "append" : true. In the following example, the last block adds a new data
disk to the database tier, which is already deployed.
JSON
{
"db" : {
"Default": {
"compute": {
"vm_size" : "Standard_D4s_v3",
"swap_size_gb" : 2
},
"storage": [
{
"name" : "os",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 128,
"caching" : "ReadWrite"
},
{
"name" : "data",
"count" : 3,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 0
},
{
"name" : "log",
"count" : 1,
"disk_type" : "UltraSSD_LRS",
"size_gb": 512,
"disk-iops-read-write" : 2048,
"disk-mbps-read-write" : 8,
"caching" : "None",
"write_accelerator" : false,
"lun_start" : 9
},
{
"name" : "backup",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"lun_start" : 13
},
{
"name" : "data",
"count" : 1,
"disk_type" : "Premium_LRS",
"size_gb" : 256,
"caching" : "ReadWrite",
"write_accelerator" : false,
"append" : true,
"lun_start" : 4
}
]
}
}
}
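Because a malformed configuration file fails the deployment late, it's worth validating the edited file before you deploy. A minimal sketch using Python's standard json.tool module (any JSON validator works); the file content here is a stub, not the full example:

```shell
# Stub configuration standing in for your edited file.
cat > XO1_db_sizes.json <<'EOF'
{ "db": { "Default": { "storage": [ { "name": "data", "count": 1, "append": true } ] } } }
EOF
# json.tool exits nonzero on malformed JSON, so this catches syntax errors early.
if python3 -m json.tool XO1_db_sizes.json > /dev/null; then
  echo "XO1_db_sizes.json is valid JSON"
else
  echo "XO1_db_sizes.json is NOT valid JSON"
fi
```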
Next step
Configure custom naming
Extending the SAP Deployment Automation Framework
Article • 02/26/2024
Within the SAP Deployment Automation Framework (SDAF), we recognize the importance of adaptability and customization to meet the
unique needs of various deployments. This document describes the ways to extend the framework's capabilities, ensuring that it aligns with
your specific requirements.
Forking the Source Code Repository: One method of extending SDAF is by forking the source code repository. This approach grants
you the flexibility to make tailored modifications within your own forked version of the code. By doing so, you gain control over the
framework's core functionality, enabling you to tailor it precisely to your deployment objectives.
Adding Stages to the SAP Configuration Pipeline: Another way to customize the framework is to add stages to the SAP configuration pipeline.
This approach allows you to integrate specific processes or steps that are integral to your deployment workflows into the automation
pipeline.
Streamlined Extensibility: This capability allows you to effortlessly incorporate your existing Ansible playbooks directly into the SDAF.
By using this feature, you can seamlessly integrate your Ansible automation scripts with the framework, further enhancing its
versatility.
Configuration extensibility: This feature allows you to extend the framework's configuration capabilities by adding custom
repositories, packages, kernel parameters, logical volumes, mounts, and exports without the need to write any code.
Throughout this documentation, we provide comprehensive guidance on each of these extensibility options, ensuring that you have the
knowledge and tools needed to tailor the SAP Deployment Automation Framework to your specific deployment needs.
Note
If you fork the source code repository, you must maintain your fork of the code. You must also merge the changes from the source
code repository into your fork of the code whenever there is a new release of the SDAF codebase.
The Ansible playbooks must be located in a folder called 'Ansible' located in the root folder in your configuration repository. They're called
with the same parameter files as the SDAF playbooks so you have access to all the configuration.
The Ansible playbooks must be named according to the following naming convention:
'Playbook name_pre' for playbooks to be run before the SDAF playbook and 'Playbook name_post' for playbooks to be run after the SDAF
playbook.
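For example, hooking the OS configuration step would look like the following sketch. The base playbook name playbook_01_os_base_config is illustrative; use the actual names from the table.

```shell
# The 'Ansible' folder must sit in the root of the configuration repository.
mkdir -p Ansible
# Hypothetical hook playbooks: "_pre" runs before and "_post" runs after
# the corresponding SDAF playbook.
touch Ansible/playbook_01_os_base_config_pre.yaml
touch Ansible/playbook_01_os_base_config_post.yaml
ls Ansible
```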
Playbook name Playbook name for 'pre' tasks Playbook name for 'post' tasks
Note
The playbook_08_00_00_post_configuration_actions.yaml step has no SDAF-provided roles or tasks; it exists only to facilitate the _pre and
_post hooks after SDAF completes the installation and configuration.
YAML
---
# /*---------------------------------------------------------------------------8
# |                                                                            |
# |                       Run commands on all remote hosts                     |
# |                                                                            |
# +------------------------------------4--------------------------------------*/
- name: "Show how to run a command on just the 'SCS' and 'ERS' hosts"
  ansible.builtin.command: "whoami"
  register: whoami_results
  when:
    - "'scs' in supported_tiers or 'ers' in supported_tiers"
...
You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.
Terraform
configuration_settings = {
  sapadm_uid  = "3000",
  sidadm_uid  = "3100",
  sapinst_gid = "300",
  sapsys_gid  = "400"
}
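Each key/value pair in configuration_settings surfaces as a plain line in sap-parameters.yaml. The following sketch only illustrates the rendered result; in a real deployment Terraform writes the file for you.

```shell
# Illustration of the rendered file content; not how Terraform actually writes it.
cat > sap-parameters.yaml <<'EOF'
sapadm_uid: 3000
sidadm_uid: 3100
sapinst_gid: 300
sapsys_gid: 400
EOF
grep 'gid' sap-parameters.yaml
```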
YAML
custom_scs_virtual_hostname: "myscshostname"
custom_ers_virtual_hostname: "myershostname"
custom_db_virtual_hostname: "mydbhostname"
custom_pas_virtual_hostname: "mypashostname"
custom_app_virtual_hostname: "myapphostname"
You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.
Terraform
configuration_settings = {
  custom_scs_virtual_hostname = "myscshostname",
  custom_ers_virtual_hostname = "myershostname",
  custom_db_virtual_hostname  = "mydbhostname",
  custom_pas_virtual_hostname = "mypashostname",
  custom_app_virtual_hostname = "myapphostname"
}
In this example, the repository 'epel' is registered on all the hosts in your SAP deployment that are running RedHat 8.2.
YAML
custom_repos:
  redhat8.2:
    - { tier: 'ha', repo: 'epel', url: 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm', state: 'present' }
In this example, the package 'openssl' is installed on all the hosts in your SAP deployment that are running SUSE Linux Enterprise
Server for SAP applications version 15.3.
YAML
custom_packages:
  sles_sap15.3:
    - { tier: 'os', package: 'openssl', node_tier: 'all', state: 'present' }
If you want to install a package on a specific server type (app, ers, pas, scs, hana), you can add the following section to the
sap-parameters.yaml file.
YAML
custom_packages:
  sles_sap15.3:
    - { tier: 'ha', package: 'pacemaker', node_tier: 'hana', state: 'present' }
When you add the following section to the sap-parameters.yaml file, the parameter 'fs.suid_dumpable' is set to 0 on all the hosts in your
SAP deployment.
YAML
custom_parameters:
  common:
    - { tier: 'os', node_tier: 'all', name: 'fs.suid_dumpable', value: '0', state: 'present' }
In this example, the 'firewalld' service is stopped and disabled on all the hosts in your SAP deployment that are running RedHat 7.x.
YAML
custom_services:
  redhat7:
    - { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'stopped' }
    - { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'disabled' }
When you add the following section to the sap-parameters.yaml file, a logical volume 'lv_custom' is created on all virtual machines that
have a disk named 'custom' in your SAP deployment. A filesystem is mounted on the logical volume and made available at '/custompath'.
YAML
custom_logical_volumes:
  - tier: 'sapos'
    node_tier: 'all'
    vg: 'vg_custom'
    lv: 'lv_custom'
    size: '100%FREE'
    fstype: 'xfs'
    path: '/custompath'
Note
To use this functionality, you need to add an extra disk named 'custom' to one or more of your virtual machines. For more
information, see Custom disk sizing.
You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.
Terraform
configuration_settings = {
  custom_logical_volumes = [
    {
      tier      = "sapos",
      node_tier = "all",
      vg        = "vg_custom",
      lv        = "lv_custom",
      size      = "100%FREE",
      fstype    = "xfs",
      path      = "/custompath"
    }
  ]
}
When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is mounted from an NFS share on
'xxxxxxxxx.file.core.windows.net:/xxxxxxxxx/custom.'
YAML
custom_mounts:
  - path: "/usr/custom"
    opts: "vers=4,minorversion=1,sec=sys"
    mount: "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom"
    target_nodes: "scs,pas,app"
The target_nodes attribute defines which nodes have the mount defined. Use 'all' if you want all nodes to have the mount defined.
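Conceptually, the comma-separated target_nodes value acts as a filter over the node tiers. A small sketch of that matching logic (the tier list here is illustrative):

```shell
target_nodes="scs,pas,app"
selected=""
for node_tier in scs ers pas app web hana; do
  # Wrap both sides in commas so one tier name can't match inside another value.
  case ",${target_nodes}," in
    *",${node_tier},"*) selected="${selected} ${node_tier}" ;;
  esac
done
echo "mount defined on:${selected}"
```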
You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.
Terraform
configuration_settings = {
  custom_mounts = [
    {
      path         = "/usr/custom",
      opts         = "vers=4,minorversion=1,sec=sys",
      mount        = "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom",
      target_nodes = "scs,pas,app"
    }
  ]
}
When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is exported from the Central Services virtual
machine and available via NFS.
YAML
custom_exports:
  - path: "/usr/custom"
You can use the configuration_settings variable to let Terraform add them to the sap-parameters.yaml file.
Terraform
configuration_settings = {
  custom_exports = [
    {
      path = "/usr/custom"
    }
  ]
}
Note
This applies only to deployments where NFS_Provider is set to 'NONE', because that configuration makes the Central Services server an NFS server.
You can override the default stripe sizes that are used for the database logical volumes by adding the following parameters to the
sap-parameters.yaml file.
YAML
db2_log_stripe_size: 64
db2_data_stripe_size: 256
db2_temp_stripe_size: 128
sybase_data_stripe_size: 256
sybase_log_stripe_size: 64
sybase_temp_stripe_size: 128
oracle_data_stripe_size: 256
oracle_log_stripe_size: 128
You can override the default sizes of the sapmnt, usr/sap, and hanashared volumes in the same way.
YAML
sapmnt_volume_size: 32g
usrsap_volume_size: 32g
hanashared_volume_size: 32g
Next step
Configure custom naming
Configure the Control Plane Web
Application credentials
Article • 12/07/2023
As a part of the SAP automation framework control plane, you can optionally create an
interactive web application that assists you in creating the required configuration files
and deploying SAP workload zones and systems using Azure Pipelines.
Windows
PowerShell
rm ./manifest.json
Persist the values in the control plane variable group for later use.
You'll also need to grant Reader permissions to the app service's system-assigned
managed identity. Go to the app service resource. On the left-hand side, select
Identity. On the System assigned tab, select Azure role assignments > Add role
assignment. Select Subscription as the scope and Reader as the role, and then select
Save. Without this step, the web app dropdown functionality won't work.
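If you prefer the command line over the portal steps above, the same role assignment can presumably be made with the Azure CLI; the placeholder must be replaced with the object ID of the app service's system-assigned managed identity, and the scope with your subscription.

```azurecli
az role assignment create \
  --assignee "<principal-id-of-the-system-assigned-identity>" \
  --role "Reader" \
  --scope "/subscriptions/<subscriptionId>"
```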
You can sign in and visit the web app by following the URL from earlier or selecting
browse inside the app service resource. With the web app, you're able to configure SAP
workload zones and system infrastructure. Select download to obtain a parameter file of
the workload zone or system you specified, for use in the later deployment steps.
If no results are shown for a dropdown, you probably need to specify another
dropdown before you can see any options. Or, see step 2 above regarding
the system assigned managed identity.
The subscription parameter must be specified before any other dropdown
functionality is enabled
The network_arm_id parameter must be specified before any subnet
dropdown functionality is enabled
5. Select submit in the bottom left hand corner
If you would like to deploy a file, first convert it to a workload zone or system
object.
3. Specify the necessary parameters, and confirm it's the correct object.
4. Select deploy.
5. The web app generates a 'tfvars' file from the object, updates your Azure DevOps
repository, and kicks off the workload zone or system (infrastructure) pipeline. You
can monitor the deployment in the Azure DevOps Portal.
Configure external tools to use with SAP
Deployment Automation Framework
Article • 09/03/2023
This article describes how to configure external tools to use SAP Deployment
Automation Framework.
3. On the Key vault page, find the deployer key vault. The name starts with
MGMT[REGION]DEP00user . Filter by Resource group or Location, if necessary.
5. Find and select the secret that contains sshkey. It might look like MGMT-[REGION]-
DEP00-sshkey .
6. On the secret's page, select the current version. Copy the Secret value.
7. Create a new file in Visual Studio Code and copy in the secret value.
8. Save the file where you keep SSH keys. For example, use C:\Users\<your-
username>\.ssh\weeu_deployer.ssh. Make sure that you save the file without an
extension.
After you've downloaded the SSH key for the deployer, you can use it to connect to the
deployer virtual machine.
naming convention. The contents of the deployer resource group should look like
the following image.
3. Find the public IP for the deployer. The name should end with -pip . Filter by type,
if necessary.
Bash
ssh -i C:\Users\<your-username>\.ssh\weeu_deployer.ssh azureadm@<IP_Address>
Note
Change <IP_Address> to reflect the deployer IP.
3. Select Connect. Select Linux when you're prompted for the target operating
system, and accept the remaining dialogs (such as key and trust).
Next step
Configure the SAP workload zone
Get started with SAP Deployment
Automation Framework
Article • 03/12/2024
Prerequisites
To get started with SAP Deployment Automation Framework, you need:
An Azure subscription. If you don't have an Azure subscription, you can create a
free account .
An SAP User account with permissions to download the SAP software in your
Azure environment. For more information on S-User, see SAP S-User .
An Azure CLI installation.
A user-assigned managed identity (MSI) or a service principal to use for the control
plane deployment.
A user-assigned managed identity (MSI) or a service principal to use for the workload
zone deployment.
An ability to create an Azure DevOps project if you want to use Azure DevOps for
deployment.
When you choose a name for your service principal, make sure that the name is unique
within your Azure tenant. Make sure to use an account with permissions to create
service principals when running the script.
cloudshell
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"
az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/$ARM_SUBSCRIPTION_ID" \
  --name="$control_plane_env_code-Deployment-Account"
JSON
{
"appId": "<AppId>",
"displayName": "<environment>-Deployment-Account",
"name": "<AppId>",
"password": "<AppSecret>",
"tenant": "<TenantId>"
}
2. Copy the output details. Make sure to save the values for appId, password, and
tenant.
The output maps to the following parameters. You use these parameters in later
steps, with automation commands.
Parameter   Output value
spn_id      appId
spn_secret  password
tenant_id   tenant
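A small sketch of capturing those values into the parameter names, assuming you saved the JSON output to a file named sp.json (the file name is illustrative; the values here are the placeholders from the sample output):

```shell
# Stand-in for the saved az ad sp create-for-rbac output.
cat > sp.json <<'EOF'
{ "appId": "<AppId>", "password": "<AppSecret>", "tenant": "<TenantId>" }
EOF
# Map the JSON fields to the automation parameter names.
spn_id=$(python3 -c "import json; print(json.load(open('sp.json'))['appId'])")
spn_secret=$(python3 -c "import json; print(json.load(open('sp.json'))['password'])")
tenant_id=$(python3 -c "import json; print(json.load(open('sp.json'))['tenant'])")
echo "spn_id=${spn_id}"
```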
3. Optionally, assign the User Access Administrator role to the service principal.
cloudshell
export appId="<appId>"
Important
If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions using the automation framework.
Create a user-assigned managed identity
The SAP Deployment Automation Framework can also use a user-assigned managed identity
(MSI) for the deployment. Make sure to use an account with permissions to create managed
identities when running the script that creates the identity.
cloudshell
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"
JSON
{
"clientId": "<appId>",
"id": "<armId>",
"location": "<location>",
"name": "${control_plane_env_code}-Deployment-Identity",
"principalId": "<objectId>",
"resourceGroup": "<ExistingResourceGroup>",
"systemData": null,
"tags": {},
"tenantId": "<TenantId>",
"type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
The output maps to the following parameters. You use these parameters in later
steps, with automation commands.
Parameter  Output value
app_id     appId
msi_id     armId
cloudshell
export appId="<appId>"
Important
If you don't assign the User Access Administrator role to the managed identity, you
can't assign permissions using the automation framework.
Pre-flight checks
You can use the following script to perform pre-flight checks. The script performs the
following checks and tests:
Checks whether the service principal has the correct permissions to create resources in
the subscription.
Checks whether the service principal has User Access Administrator permissions.
Creates an Azure virtual network.
Creates an Azure key vault with a private endpoint.
Creates an Azure Files NFS share.
Creates an Azure virtual machine with a data disk that uses Premium SSD v2 storage.
Checks access to the required URLs by using the deployed virtual machine.
PowerShell
$sdaf_path = Get-Location
if ( $PSVersionTable.Platform -eq "Unix") {
if ( -Not (Test-Path "SDAF") ) {
$sdaf_path = New-Item -Path "SDAF" -Type Directory
}
}
else {
$sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
if ( -not (Test-Path $sdaf_path)) {
New-Item -Path $sdaf_path -Type Directory
}
}
cd sap-automation
cd deploy
cd scripts
You can use Azure Repos to store your configuration files. Azure Pipelines provides
pipelines, which can be used to deploy and configure the infrastructure and the SAP
application.
Open PowerShell ISE and copy the following script and update the parameters to match
your environment.
PowerShell
$Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
$Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
$Env:SDAF_CONTROL_PLANE_CODE = "MGMT"
$Env:SDAF_WORKLOAD_ZONE_CODE = "DEV"
$Env:SDAF_ControlPlaneSubscriptionID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
$Env:ARM_TENANT_ID = "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
else {
  $Env:SDAF_APP_NAME = Read-Host "Please provide the Application registration name"
}
if ( Test-Path "New-SDAFDevopsProject.ps1") {
remove-item .\New-SDAFDevopsProject.ps1
}
Run the script and follow the instructions. The script opens browser windows for
authentication and for performing tasks in the Azure DevOps project.
You can choose to either run the code directly from GitHub or you can import a copy of
the code into your Azure DevOps project.
To confirm that the project was created, go to the Azure DevOps portal and select the
project. Ensure that the repo was populated and that the pipelines were created.
Important
Run the following steps on your local workstation. Also ensure that you have the
latest Azure CLI installed by running the az upgrade command.
For more information on how to configure Azure DevOps for SAP Deployment
Automation Framework, see Configure Azure DevOps for SAP Deployment Automation
Framework.
Create the SAP Deployment Automation
Framework environment without Azure DevOps
You can run SAP Deployment Automation Framework from a virtual machine in Azure.
The following steps describe how to create the environment.
Important
git
jq
unzip
virtualenv (if running on Ubuntu)
You can install the prerequisites on an Ubuntu virtual machine by using the following
command:
Bash
You can then install the deployer components by using the following commands:
Bash
wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh
. /etc/profile.d/deploy_server.sh
Samples
The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample
configuration files to start testing the deployment automation framework. You can copy
them by using the following commands:
Bash
cd ~/Azure_SAP_Automated_Deployment
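The copy itself might look like the following sketch; the samples folder layout (samples/Terraform/WORKSPACES) is an assumption, so adjust the source path to what you find in the samples folder.

```shell
src="${HOME}/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES"
dst="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
mkdir -p "${src}" "${dst}"   # the mkdir is only here so this sketch runs standalone
# Copy the sample configuration folders, including hidden files, into WORKSPACES.
cp -Rp "${src}/." "${dst}/"
ls -d "${dst}"
```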
Next step
Plan the deployment
Upgrade SAP Deployment Automation
Framework
Article • 12/21/2023
SAP Deployment Automation Framework is updated regularly. This article describes how
to update the framework.
Prerequisites
Before you upgrade the framework, make sure that you back up the remote state files
from the tfstate storage account in the SAP library.
Go to the pipelines folder in your repository and create the pipeline definition by
selecting the file from the New menu. Name the file 21-update-pipelines.yml and paste
the following content into the file.
YAML
---
# /*---------------------------------------------------------------------------8
# |                                                                            |
# |                 This pipeline updates the ADO repository                   |
# |                                                                            |
# +------------------------------------4--------------------------------------*/

parameters:
  - name: repository
    displayName: Source repository
    type: string
    default: https://github.com/Azure/sap-automation-bootstrap.git

  - name: branch
    displayName: Source branch to update from
    type: string
    default: main

  - name: force
    displayName: Force the update
    type: boolean
    default: false

trigger: none

pool:
  vmImage: ubuntu-latest

variables:
  - name: repository
    value: ${{ parameters.repository }}
  - name: branch
    value: ${{ parameters.branch }}
  - name: force
    value: ${{ parameters.force }}
  - name: log
    value: logfile_$(Build.BuildId)

stages:
  - stage: Update_DEVOPS_repository
    displayName: Update DevOps pipelines
    jobs:
      - job: Update_DEVOPS_repository
        displayName: Update DevOps pipelines
        steps:
          - checkout: self
            persistCredentials: true
          - bash: |
              #!/bin/bash
              green="\e[1;32m" ; reset="\e[0m" ; boldred="\e[1;31m"
              fi
              # If Pull already failed then keep that error code
              if [ 0 != $return_code ]; then
                return_code=$?
              fi
              exit $return_code
Create the Upgrade Pipelines pipeline by selecting New Pipeline from the Pipelines
section. Select Azure Repos Git as the source for your code. Configure your pipeline to
use an existing Azure Pipelines YAML file. Specify the pipeline with the following
settings.
Setting  Value
Branch   main
Path     deploy/pipelines/21-update-pipelines.yml
Save the pipeline. To see the Save option, select the chevron next to Run. Go to the
Pipelines section and select the pipeline. Rename the pipeline to Upgrade pipelines by
selecting Rename/Move from the ellipsis menu on the right.
Azure CLI
az login
The script removes the old deployer configuration and allows the new configuration to
be applied.
Azure CLI
Agent sign-in
You can also configure the Azure DevOps agent to perform the sign-in to Azure by
using the service principal. Add the following variable to the variable group that's used
by the control plane pipeline, which is typically SDAF-MGMT .
Name             Value
Logon_Using_SPN  true
Bash
tfversion="1.6.5"
# The following paths and archive name are assumptions for this fragment;
# they aren't defined in the original excerpt.
tf_dir="/opt/terraform_${tfversion}"
tf_bin="/opt/bin"
tf_zip="terraform_${tfversion}_linux_amd64.zip"
#
# Install terraform for all users
#
sudo mkdir -p \
  "${tf_dir}" \
  "${tf_bin}"
wget -nv -O /tmp/"${tf_zip}" "https://releases.hashicorp.com/terraform/${tfversion}/${tf_zip}"
sudo unzip -o /tmp/"${tf_zip}" -d "${tf_dir}"
sudo ln -vfs "../$(basename "${tf_dir}")/terraform" "${tf_bin}/terraform"
Bash
cd ~/Azure_SAP_Automated_Deployment/sap-automation
git pull
cd ~/Azure_SAP_Automated_Deployment/sap-automation-samples
git pull
If you're using private endpoints, run the following command before you perform the
upgrade to update the DNS settings for the private endpoint. Replace the
privateDNSzoneResourceId and keyvaultEndpointName placeholders with the values for
your environment.
Azure CLI
Next step
Configure the control plane
Troubleshooting the SAP Deployment
Automation Framework
Article • 01/08/2024
Within the SAP Deployment Automation Framework (SDAF), we recognize that there are
many moving parts. This article is intended to help you troubleshoot issues that you can
encounter.
To track the progress of the deployment, the state is persisted in a file in the
.sap_deployment_automation folder in the WORKSPACES directory.
Deployment
This section describes how to troubleshoot issues that you can encounter when
performing deployments using the SAP Deployment Automation Framework.
Unable to access keyvault: XXXXX error
If you see an error similar to the following when running the deployment:
text
This error indicates that the specified key vault doesn't exist or that the deployment
environment is unable to access it.
Depending on the deployment stage, you can resolve this issue in the following ways:
You can either add the IP of the environment from which you're executing the
deployment (recommended) or you can allow public access to the key vault. For more
information about controlling access to the key vault, see Allow public access to a key
vault.
The following variables are used to configure the key vault access:
tfvars
Agent_IP = "10.0.0.5"
public_network_access_enabled = true
text
You can verify if the deployment is being performed using a service principal or a
managed identity by checking the output of the deployment. If the deployment is using
a service principal, the output contains the following section:
text
If the deployment is using a managed identity, the output contains the following
section:
text
You can assign the 'Storage Account Contributor' role to the deployment credential on
the Terraform state storage account, the resource group, or the subscription (if feasible).
Use the ARM_CLIENT_ID value from the deployment output.
cloudshell
export appId="<ARM_CLIENT_ID>"
You may also need to assign the reader role to the deployment credential on the
subscription containing the resource group with the Terraform state file. You can do that
with the following command:
cloudshell
export appId="<ARM_CLIENT_ID>"
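The role assignment itself presumably resembles the following Azure CLI call; the Reader role and subscription scope come from the surrounding text, while the placeholder values are yours to fill in.

```azurecli
az role assignment create \
  --assignee "${appId}" \
  --role "Reader" \
  --scope "/subscriptions/<subscriptionId>"
```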
text
or
or
Private DNS Zone Name: "privatelink.vaultcore.azure.net" was not found
This error indicates that the private DNS zone listed in the error isn't available. You can
resolve this issue either by creating the private DNS zone or by providing the configuration
for an existing private DNS zone. For more information on how to create a private DNS
zone, see Create a private DNS zone.
You can specify the details for an existing private DNS zone by using the following
variables:
Terraform
# Resource group name for the resource group that contains the private DNS zone
management_dns_resourcegroup_name = "<resource group name for the Private DNS Zone>"

# Subscription ID for the resource group that contains the private DNS zone
management_dns_subscription_id = "<subscription id for the resource group that contains the Private DNS Zone>"

use_custom_dns_a_registration = false
OverconstrainedAllocationRequest error
If you see an error similar to the following when running the deployment:
text
This error indicates that the selected VM size isn't available using the provided
constraints. To resolve this issue, select a different VM size or a different availability
zone.
The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with
object id error
If you see an error similar to the following message when running the deployment:
text
This error indicates that the deployment credential doesn't have the 'User Access
Administrator' role on the resource group. To resolve this issue, assign the 'User Access
Administrator' role to the deployment credential on the resource group or the
subscription (if feasible).
Configuration
This section describes how to troubleshoot issues that you can encounter when
performing configuration using the SAP Deployment Automation Framework.
text
This error indicates that the version of Ansible installed on the agent doesn't support
this task. To resolve this issue, upgrade to the latest version of Ansible on the agent
virtual machine.
Software download
This section describes how to troubleshoot issues that you can encounter when
downloading the SAP software using the SAP Deployment Automation Framework.
Azure DevOps
This section describes how to troubleshoot issues that you can encounter when using
Azure DevOps with the SAP Deployment Automation Framework.
text
This error indicates that the configured personal access token doesn't have permissions
to access the variable group. Ensure that the personal access token has the Read &
manage permission for the variable group and that it's still valid. The personal access
token is configured in the Azure DevOps pipeline variable groups either as 'PAT' in the
control plane variable group or as 'WZ_PAT' in the workload zone variable group.
Next step
Configure custom naming
Deploy the control plane
Article • 12/15/2023
The control plane deployment for SAP Deployment Automation Framework consists of
the following:
Deployer
SAP library
Azure CLI
az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/<subscriptionID>" \
  --name="<environment>-Deployment-Account"
Important
appId
password
tenant
Azure CLI
If you want to provide the User Access Administrator role scoped to the resource group
only, use the following command:
Azure CLI
Prepare for the control plane deployment by cloning the repositories using the
following commands:
Bash
mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
You can copy the sample configuration files to start testing the deployment automation
framework.
A minimal Terraform file for the DEPLOYER might look like this example:
Terraform
Note the Terraform variable file locations for future edits during deployment.
A minimal Terraform file for the LIBRARY might look like this example:
Terraform
# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = false
Note the Terraform variable file locations for future edits during deployment.
Run the following command to create the deployer and the SAP library. The command
adds the service principal details to the deployment key vault.
Windows
2. Go to the resource group that contains the deployer virtual machine (VM).
8. From the list of secrets, choose the secret that ends with -sshkey.
Bash
mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh
. /etc/profile.d/deploy_server.sh
The script installs Terraform and Ansible and configures the deployer.
5. Find and select the secret that contains sshkey. It might look like MGMT-[REGION]-
DEP00-sshkey .
6. On the secret's page, select the current version. Then copy the Secret value.
8. Save the file where you keep SSH keys. An example is C:\Users\<your-
username>\.ssh .
9. Save the file. If you're prompted to Save as type, select All files if SSH isn't an
option. For example, use deployer.ssh .
10. Connect to the deployer VM through any SSH client, such as Visual Studio Code.
Use the private IP address of the deployer and the SSH key you downloaded. For
instructions on how to connect to the deployer by using Visual Studio Code, see
Connect to the deployer by using Visual Studio Code. If you're using PuTTY,
convert the SSH key file first by using PuTTYGen.
Note
Bash
mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
chmod +x ./configure_deployer.sh
./configure_deployer.sh
. /etc/profile.d/deploy_server.sh
The script installs Terraform and Ansible and configures the deployer.
Securing the control plane
The control plane is the most critical part of the SAP automation framework. It's
important to secure the control plane. The following steps help you secure the control
plane. If you have created your control plane using an external virtual machine or by
using the cloud shell, you should secure the control plane by implementing private
endpoints for the storage accounts and key vaults.
You can use the sync_deployer.sh script to copy the control plane configuration files to
the deployer VM. Sign in to the deployer VM and run the following commands:
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname mgtneweeutfstate### --state_subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Ensure that the use_private_endpoint variable is set to true in the DEPLOYER and
LIBRARY configuration files. Also ensure that public_network_access_enabled is set to
false in the DEPLOYER configuration files.
Terraform
# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
use_private_endpoint = true
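Taken together with the DEPLOYER setting mentioned above, the relevant lines might look like this sketch. The variable names are the ones given in this article; check the values against your own configuration files:

```terraform
# Sketch: enable private endpoints and block public access in the DEPLOYER tfvars
use_private_endpoint          = true
public_network_access_enabled = false
```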
Rerun the control plane deployment to enable private endpoints for the storage
accounts and key vaults.
Bash
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP00"
export storageaccountname=<storageaccountname>

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
  --deployer_parameter_file "${deployer_parameter_file}" \
  --library_parameter_file "${library_parameter_file}" \
  --subscription "${ARM_SUBSCRIPTION_ID}" \
  --spn_id "${ARM_CLIENT_ID}" \
  --spn_secret "${ARM_CLIENT_SECRET}" \
  --tenant_id "${ARM_TENANT_ID}" \
  --storageaccountname "${storageaccountname}" \
  --recover
Windows
PowerShell
$region_code="WEEU"
$env:TF_VAR_use_webapp="true"
del manifest.json
Next step
Configure SAP workload zone
Workload zone deployment in the SAP
automation framework
Article • 02/27/2024
An SAP application typically has multiple deployment tiers. For example, you might have development, quality assurance, and production tiers. SAP Deployment Automation Framework calls these tiers workload zones.
You can use workload zones in multiple Azure regions. Each workload zone then has its
own instance of Azure Virtual Network.
The private DNS is supported from the control plane or from a configurable source.
Core configuration
The following example parameter file shows only required parameters.
Bash
# The environment value is a mandatory field. It is used for partitioning the environments, for example (PROD and NP)
environment="DEV"
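Beyond environment, a workload zone parameter file also identifies the region and network. The following sketch is illustrative only; the parameter names other than environment are assumptions that you should verify against the framework's parameter reference:

```terraform
# The environment value partitions the environments, for example DEV, QA, PROD
environment = "DEV"

# Assumed parameter names (verify against the framework reference):
location              = "swedencentral"
network_logical_name  = "SAP01"
network_address_space = "10.110.0.0/16"
```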
Azure CLI
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"
Important
The name of the service principal must be unique.
appId
password
tenant
Azure CLI
SAP01-INFRASTRUCTURE folder.
Windows
If the scripts fail to run, it can sometimes help to clear the local cache files by
removing the ~/.sap_deployment_automation/ and ~/.terraform.d/ directories
before you run the scripts again.
Next step
SAP system deployment with the automation framework
SAP system deployment for the
automation framework
Article • 08/24/2023
The creation of the SAP system is part of the SAP Deployment Automation Framework
process. The SAP system deployment creates your virtual machines (VMs) and
supporting components for your SAP application.
The database tier, which deploys database VMs, their disks, and a Standard
instance of Azure Load Balancer. You can run HANA databases or AnyDB databases
in this tier.
The SAP central services tier, which deploys a customer-defined number of VMs
and a Standard instance of Load Balancer.
The application tier, which deploys the VMs and their disks.
The web dispatcher tier.
Application tier
The application tier deploys a customer-defined number of VMs. These VMs are size
Standard_D4s_v3 with a 30-GB operating system (OS) disk and a 512-GB data disk.
To set the application server count, define the parameter application_server_count for this tier in your parameter file. For example, use application_server_count = 3 .
To set the SCS server count, define the parameter scs_server_count for this tier in your
parameter file. For example, use scs_server_count=1 .
Database tier
The database tier deploys the VMs and their disks and also deploys a Standard instance
of Load Balancer. You can use either HANA databases or AnyDB databases as your
database VMs.
You can set the size of database VMs with the parameter size for this tier. For example,
use "size": "S4Demo" for HANA databases or "size": "1 TB" for AnyDB databases. For
possible values, see the Size parameter in the tables of HANA database VM options and
AnyDB database VM options.
By default, the automation framework deploys the correct disk configuration for HANA
database deployments. For HANA database deployments, the framework calculates
default disk configuration based on VM size. However, for AnyDB database
deployments, the framework calculates default disk configuration based on database
size. You can set a disk size as needed by creating a custom JSON file in your
deployment. For an example, see the following JSON code sample and replace values as
necessary for your configuration. Then, define the parameter db_disk_sizes_filename in
the parameter file for the database tier. An example is db_disk_sizes_filename =
"path/to/JSON/file" .
You can also add extra disks to a new system or add extra disks to an existing system.
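As a hypothetical illustration of such a custom disk-sizing file, the JSON might take a shape like the following. The exact schema, keys, and tier names are assumptions here and should be checked against the framework's disk-sizing documentation before use:

```json
{
  "db": {
    "Custom": {
      "compute": { "vm_size": "Standard_E16s_v3" },
      "storage": [
        { "name": "data", "count": 3, "disk_type": "Premium_LRS", "size_gb": 256, "caching": "None" },
        { "name": "log", "count": 2, "disk_type": "Premium_LRS", "size_gb": 128, "caching": "None" }
      ]
    }
  }
}
```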
Core configuration
The following example parameter file shows only required parameters.
Bash
app_tier_vm_sizing="Production"
app_tier_use_DHCP=true
database_platform="HANA"
database_size="S4Demo"
database_sid="XDB"
database_vm_use_DHCP=true
database_vm_image={
os_type="linux"
source_image_id=""
publisher="SUSE"
offer="sles-sap-15-sp2"
sku="gen2"
version="latest"
}
application_server_image= {
os_type=""
source_image_id=""
publisher="SUSE"
offer="sles-sap-15-sp2"
sku="gen2"
version="latest"
}
scs_server_count=1
# scs_instance_number
scs_instance_number="00"
# ers_instance_number
ers_instance_number="02"
Windows
You can copy the sample configuration files to start testing the deployment
automation framework.
PowerShell
cd C:\Azure_SAP_Automated_Deployment
PowerShell
cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X01
Output files
The deployment creates an Ansible hosts file ( SID_hosts.yaml ) and an Ansible
parameter file ( sap-parameters.yaml ). These files are required input for the Ansible
playbooks.
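Because both output files must be present before any playbook run, a small sanity check can save a failed run. The helper below is a hypothetical sketch, not part of the framework:

```shell
# Sketch (hypothetical helper): verify that the two required Ansible input
# files exist in a directory before starting a playbook run.
check_ansible_inputs() {
  sid="$1"
  dir="$2"
  for f in "sap-parameters.yaml" "${sid}_hosts.yaml"; do
    if [ ! -f "${dir}/${f}" ]; then
      # Report the first missing file and fail
      echo "missing:${f}"
      return 1
    fi
  done
  echo "ok"
}
```

For example, `check_ansible_inputs L00 "$(pwd)"` prints `ok` only when both files are in the current workspace directory.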
Next step
Workload zone deployment with automation framework
Get started with Ansible configuration
Article • 03/12/2024
When you use SAP Deployment Automation Framework, you can perform an automated
infrastructure deployment. You can also do the required operating system
configurations and install SAP by using Ansible playbooks provided in the repository.
These playbooks are located in the automation framework repository in the /sap-
automation/deploy/ansible folder.
Prerequisites
The Ansible playbooks require the sap-parameters.yaml and SID_hosts.yaml files in the current directory.
Configuration files
The sap-parameters.yaml file contains information that Ansible uses for configuration of
the SAP infrastructure.
YAML
---
# TERRAFORM CREATED
sap_fqdn: sap.contoso.net
# kv_name is the name of the key vault containing the system credentials
kv_name: LABSECESAP01user###
# secret_prefix is the prefix for the name of the secret stored in key vault
secret_prefix: LAB-SECE-SAP01
disks:
- { host: 'l00dxdb00l0538', LUN: 0, type: 'sap' }
- { host: 'l00dxdb00l0538', LUN: 10, type: 'data' }
- { host: 'l00dxdb00l0538', LUN: 11, type: 'data' }
- { host: 'l00dxdb00l0538', LUN: 12, type: 'data' }
- { host: 'l00dxdb00l0538', LUN: 13, type: 'data' }
- { host: 'l00dxdb00l0538', LUN: 20, type: 'log' }
- { host: 'l00dxdb00l0538', LUN: 21, type: 'log' }
- { host: 'l00dxdb00l0538', LUN: 22, type: 'log' }
- { host: 'l00dxdb00l0538', LUN: 2, type: 'backup' }
- { host: 'l00app00l538', LUN: 0, type: 'sap' }
- { host: 'l00app01l538', LUN: 0, type: 'sap' }
- { host: 'l00scs00l538', LUN: 0, type: 'sap' }
...
The L00_hosts.yaml file is the inventory file that Ansible uses for configuration of the
SAP infrastructure. The L00 label might differ for your deployments.
YAML
L00_DB:
  hosts:
    l00dxdb00l0538:
      ansible_host: 10.110.96.12
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: hana
L00_SCS:
  hosts:
    l00scs00l538:
      ansible_host: 10.110.32.25
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: scs
L00_ERS:
  hosts:
  vars:
    node_tier: ers
L00_PAS:
  hosts:
    l00app00l538:
      ansible_host: 10.110.32.24
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: pas
L00_APP:
  hosts:
    l00app01l538:
      ansible_host: 10.110.32.15
      ansible_user: azureadm
      ansible_connection: ssh
      connection_type: key
  vars:
    node_tier: app
L00_WEB:
  hosts:
  vars:
    node_tier: web
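To get a quick overview of the hosts defined in a generated inventory, you can extract the ansible_host values. The helper below is a hypothetical sketch that relies only on the key/value layout shown above:

```shell
# Sketch (hypothetical helper): list the ansible_host IP addresses defined
# in an SDAF inventory file by splitting each matching line on the colon.
list_ips() {
  awk -F: '/ansible_host/ { gsub(/ /, "", $2); print $2 }' "$1"
}
```

For example, `list_ips L00_hosts.yaml` prints one IP address per configured host.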
Run a playbook
Make sure that you download the SAP software to your Azure environment before you
run this step.
One way you can run the playbooks is to use the configuration menu.
Bash
${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh
Bash
sap_params_file=sap-parameters.yaml

export ANSIBLE_PASSWORD=$password_secret
export ANSIBLE_INVENTORY="${sap_sid}_hosts.yaml"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
export ANSIBLE_COLLECTIONS_PATHS=/opt/ansible/collections${ANSIBLE_COLLECTIONS_PATHS:+${ANSIBLE_COLLECTIONS_PATHS}}
export ANSIBLE_REMOTE_USER=azureadm
export ANSIBLE_PYTHON_INTERPRETER=auto_silent

ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
Windows
The following tasks are executed on Windows virtual machines:
ComputerManagementDsc
PSDesiredStateConfiguration
WindowsDefender
ServerManager
SecurityPolicyDsc
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)
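The examples define password_secret_name but don't show the retrieval step. As a hedged sketch, the secret can be read from the workload key vault with the az CLI and exported for the playbooks; the wrapper function itself is hypothetical:

```shell
# Sketch (hypothetical wrapper): read the SID password secret from the
# workload key vault so it can be exported as ANSIBLE_PASSWORD.
get_sid_password() {
  vault_name="$1"
  secret_name="$2"
  az keyvault secret show \
    --vault-name "${vault_name}" \
    --name "${secret_name}" \
    --query value --output tsv
}
```

For example: `export ANSIBLE_PASSWORD=$(get_sid_password "${workload_vault_name}" "${password_secret_name}")`.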
Windows
The following tasks are executed on the central services instance virtual machine:
Download the software from the storage account and make it available for the
other virtual machines.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to download the software from the SAP Library
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to install the SAP central services
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml
Database installation
This playbook performs the database server installation.
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

# Run the playbook to install the database instance
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml
Database load
This playbook performs the database load.
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to perform the database load
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to configure database high availability
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to install the primary application server
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to install the application servers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_app_install.yaml
Windows
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to install the web dispatchers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml
ACSS registration
This playbook performs the Azure Center for SAP Solutions (ACSS) registration.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export workload_vault_name="LABSECESAP04user###"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

prefix="LAB-SECE-SAP04"
password_secret_name=$prefix-sid-password

playbook_options=(
  --inventory-file="${sap_sid}_hosts.yaml"
  --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
  --extra-vars="_workspace_directory=`pwd`"
  --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
  --extra-vars="@sap-parameters.yaml"
  "${@}"
)

# Run the playbook to register the system with ACSS
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_06_00_acss_registration.yaml
Tutorial: Deploy SAP Deployment
Automation Framework for enterprise
scale
Article • 03/12/2024
This tutorial shows you how to perform deployments by using SAP Deployment
Automation Framework. This example uses Azure Cloud Shell to deploy the control
plane infrastructure. The deployer virtual machine (VM) creates the remaining
infrastructure and SAP HANA configurations.
There are three main steps of an SAP deployment on Azure with the automation
framework:
1. Prepare the region. You deploy components to support the SAP automation
framework in a specified Azure region. In this step, you:
a. Create the deployment environment.
b. Create shared storage for Terraform state files.
c. Create shared storage for SAP installation media.
2. Prepare the workload zone. You deploy the workload zone components, such as
the virtual network and key vaults.
3. Deploy the system. You deploy the infrastructure for the SAP system.
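At the command level, the three steps map roughly to the framework's deployment scripts. The script names below are assumed from the sap-automation repository layout referenced throughout this guide; treat the mapping as an orientation sketch, not an exact invocation:

```shell
# 1. Prepare the region (control plane: deployer + SAP Library)
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh ...

# 2. Prepare the workload zone (virtual network, key vaults)
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/install_workloadzone.sh ...

# 3. Deploy the SAP system infrastructure
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/installer.sh ...
```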
There are several workflows in the deployment automation process. This tutorial focuses on one workflow for ease of deployment. You can deploy this workflow, the SAP S/4HANA standalone environment, by using Bash. This tutorial describes the general hierarchy and different phases of the deployment.
Environment overview
SAP Deployment Automation Framework has two main components: the control plane and the application plane.
The following diagram shows the dependency between the control plane and the
application plane.
The framework uses Terraform for infrastructure deployment and Ansible for the
operating system and application configuration. The following diagram shows the
logical separation of the control plane and workload zone.
Management zone
The management zone contains the control plane infrastructure from which other
environments are deployed. After the management zone is deployed, you rarely, if ever,
need to redeploy.
The deployer is the execution engine of the SAP automation framework. This
preconfigured VM is used for executing Terraform and Ansible commands.
The SAP Library provides the persistent storage for the Terraform state files and the
downloaded SAP installation media for the control plane.
You configure the deployer and the library in a Terraform .tfvars variable file. For more
information, see Configure the control plane.
Workload zone
An SAP application typically has multiple deployment tiers. For example, you might have
development, quality assurance, and production tiers. SAP Deployment Automation
Framework calls these tiers workload zones.
The SAP workload zone contains the networking and shared components for the SAP
VMs. These components include route tables, network security groups, and virtual
networks. The landscape provides the opportunity to divide deployments into different
environments. For more information, see Configure the workload zone.
The system deployment consists of the VMs to run the SAP application, including the
web, app, and database tiers. For more information, see Configure the SAP system.
Prerequisites
The SAP Deployment Automation Framework repository is available on GitHub.
You need to deploy Azure Bastion or use a Secure Shell (SSH) client to connect to the
deployer. Use any SSH client that you feel comfortable with.
Ensure that your Azure subscription has a sufficient core quota for the Ddsv4 and Edsv4 VM families in the selected region. About 50 available cores for each VM family should suffice.
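You can check the remaining vCPU quota for a region with the az CLI before deploying. The helper below is a hypothetical sketch; in particular, the JMESPath filter on the family name is an assumption to adapt to the exact quota names your subscription reports:

```shell
# Sketch (hypothetical helper): show limit and current usage for VM families
# whose quota name contains the given string, in the given region.
family_quota() {
  location="$1"
  family="$2"
  az vm list-usage --location "${location}" \
    --query "[?contains(name.value, '${family}')].{limit:limit, used:currentValue}" \
    --output tsv
}
```

For example, `family_quota swedencentral DSv4` would list the matching family quotas for that region.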
cloudshell
az login
Authenticate your sign-in. Don't close the window until you're prompted.
5. Optionally, remove all the deployment artifacts. Use this command when you want
to remove all remnants of previous deployment artifacts.
cloudshell
cd ~

mkdir -p ${HOME}/Azure_SAP_Automated_Deployment; cd $_

cp -Rp samples/Terraform/WORKSPACES ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
7. Optionally, validate the versions of Terraform and the Azure CLI available on your
instance of Cloud Shell.
cloudshell
./sap-automation/deploy/scripts/helpers/check_workstation.sh
Update the versions based on the instructions, as necessary.
When you choose a name for your service principal, make sure that the name is unique
within your Azure tenant.
1. Give the service principal Contributor and User Access Administrator permissions.
cloudshell
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export control_plane_env_code="LAB"
az ad sp create-for-rbac --role="Contributor" \
--scopes="/subscriptions/${ARM_SUBSCRIPTION_ID}" \
--name="${control_plane_env_code}-Deployment-Account"
JSON
{
"appId": "<AppId>",
"displayName": "<environment>-Deployment-Account ",
"name": "<AppId>",
"password": "<AppSecret>",
"tenant": "<TenantId>"
}
2. Copy down the output details. Make sure to save the values for appId , password , and tenant .
The output maps to the following parameters. You use these parameters in later
steps, with automation commands.
Parameter    Value from output
spn_id       appId
spn_secret   password
tenant_id    tenant
3. Optionally, assign the User Access Administrator role to the service principal.
cloudshell
export appId="<appId>"
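The role assignment itself can be made with the az CLI at subscription scope. The wrapper function below is a hypothetical sketch around the documented az role assignment create command:

```shell
# Sketch (hypothetical wrapper): grant the service principal the
# User Access Administrator role at subscription scope.
grant_uaa_role() {
  app_id="$1"
  subscription_id="$2"
  az role assignment create --assignee "${app_id}" \
    --role "User Access Administrator" \
    --scope "/subscriptions/${subscription_id}"
}
```

For example: `grant_uaa_role "${appId}" "${ARM_SUBSCRIPTION_ID}"`.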
Important
If you don't assign the User Access Administrator role to the service principal, you
can't assign permissions by using the automation.
Configure the control plane web application
credentials
As a part of the SAP automation framework control plane, you can optionally create an
interactive web application that assists you in creating the required configuration files.
Bash
export env_code="LAB"

echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json

export TF_use_webapp=true
Note
Ensure that you're logged on by using a user account that has the required
permissions to create application registrations. For more information about app
registrations, see Create an app registration.
Copy down the output details. Make sure to save the values for App registration ID
and App registration password .
The output maps to the following parameters. You use these parameters in later steps,
with automation commands.
cloudshell
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
code .
2. Expand the WORKSPACES directory. There are six subfolders: CONFIGURATION , DEPLOYER , LANDSCAPE , LIBRARY , SYSTEM , and BOMS . Expand each of these folders to see the configuration files they contain.
3. Find the Terraform variable files in the appropriate subfolder. For example, the
DEPLOYER Terraform variable file might look like this example:
Terraform
Note the Terraform variable file locations for future edits during deployment.
4. Find the Terraform variable files for the SAP Library in the appropriate subfolder.
For example, the LIBRARY Terraform variable file might look like this example:
Terraform
Note the Terraform variable file locations for future edits during deployment.
Important
Ensure that the dns_label matches your instance of Azure Private DNS.
The deployment goes through cycles of deploying the infrastructure, refreshing the
state, and uploading the Terraform state files to the library storage account. All of these
steps are packaged into a single deployment script. The script needs the location of the
configuration file for the deployer and library, and some other parameters.
For example, choose Sweden Central as the deployment location, with the four-character name SECE , as previously described. The sample deployer configuration file LAB-SECE-DEP05-INFRASTRUCTURE.tfvars is in the ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-SECE-DEP05-INFRASTRUCTURE folder.
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
If you're running the script from a workstation that isn't part of the deployment
network or from Cloud Shell, you can use the following command to set the
environment variable for allowing connectivity from your IP address:
Bash
export TF_VAR_Agent_IP=<your-public-ip-address>
If you're deploying the configuration web application, you need to also set the
following environment variables:
Bash
export TF_VAR_app_registration_app_id=<appRegistrationId>
export TF_VAR_webapp_client_secret=<appRegistrationPassword>
export TF_use_webapp=true
2. Create the deployer and the SAP Library and add the service principal details to the
deployment key vault by using this script:
Bash
export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd $CONFIG_REPO_PATH

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--deployer_parameter_file "${deployer_parameter_file}" \
--library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}"
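The parameter-file paths follow a strict <ENV>-<REGION>-<VNET>-INFRASTRUCTURE naming convention, so they can be composed from the codes. The helper below is a hypothetical sketch of that convention, not a framework function:

```shell
# Sketch (hypothetical helper): compose the deployer tfvars path from the
# environment, region, and virtual network codes exported above.
deployer_tfvars_path() {
  env="$1"; region="$2"; vnet="$3"
  echo "${CONFIG_REPO_PATH}/DEPLOYER/${env}-${region}-${vnet}-INFRASTRUCTURE/${env}-${region}-${vnet}-INFRASTRUCTURE.tfvars"
}
```

For example, `deployer_tfvars_path LAB SECE DEP05` yields the LAB-SECE-DEP05-INFRASTRUCTURE.tfvars path under the WORKSPACES folder.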
If you run into authentication issues, run az logout to sign out and clear the token cache. Then run az login to reauthenticate.
Wait for the automation framework to run the Terraform operations plan and
apply .
You need to note some values for upcoming steps. Look for this text block in the
output:
text
#########################################################################
#
# Please save these values:
#
#   - Key Vault:             LABSECEDEP05user39B
#   - Deployer IP:           x.x.x.x
#   - Storage Account:       labsecetfstate53e
#   - Web Application Name:  lab-sece-sapdeployment39B
#   - App registration Id:   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#
#########################################################################
Select Resource groups. Look for new resource groups for the deployer infrastructure and library. For example, you might see LAB-[region]-DEP05-INFRASTRUCTURE and LAB-[region]-SAP_LIBRARY .
The contents of the deployer and SAP Library resource group are shown here.
The Terraform state file is now placed in the storage account whose name contains
tfstate . The storage account has a container named tfstate with the deployer
and library state files. The contents of the tfstate container after a successful
control plane deployment are shown here.
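The state file names in the tfstate container are derived from the configuration codes. As a hypothetical sketch of that naming, a helper might compose them like this:

```shell
# Sketch (hypothetical helper): derive the deployer state blob name from the
# environment, region, and virtual network codes used in this tutorial.
state_blob_name() {
  printf '%s-%s-%s-INFRASTRUCTURE.terraform.tfstate\n' "$1" "$2" "$3"
}
```

For example, `state_blob_name LAB SECE DEP05` yields LAB-SECE-DEP05-INFRASTRUCTURE.terraform.tfstate, matching the deployer state file shown later in this tutorial.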
If you get an error for the deployer module creation, make sure that you're in the WORKSPACES directory when you run the script.
If you have authentication issues directly after you run the script
deploy_controlplane.sh , run this command:
Azure CLI
az logout
az login
3. On the Key vault page, find the deployer key vault. The name starts with
LAB[REGION]DEP05user . Filter by Resource group or Location, if necessary.
6. On the secret's page, select the current version. Then, copy the secret value.
8. Save the file where you keep SSH keys. For example, use C:\Users\<your-username>\.ssh .
9. Save the file. If you're prompted to Save as type, select All files if SSH isn't an
option. For example, use deployer.ssh .
10. Connect to the deployer VM through any SSH client, such as Visual Studio Code.
Use the public IP address you noted earlier and the SSH key you downloaded. For
instructions on how to connect to the deployer by using Visual Studio Code, see
Connect to the deployer by using Visual Studio Code. If you're using PuTTY,
convert the SSH key file first by using PuTTYGen.
Note
Ensure that the editor you use to save the SSH key writes the file in the correct format, that is, without carriage return (CR) characters. Use Visual Studio Code or Notepad++.
After you're connected to the deployer VM, you can download the SAP software by
using the Bill of Materials (BOM).
8. From the list of secrets, select the secret that ends with -sshkey.
You should update the control plane tfvars file to enable private endpoints and to
block public access to the storage accounts and key vaults.
1. To copy the control plane configuration files to the deployer VM, you can use the
sync_deployer.sh script. Sign in to the deployer VM and update the following
command to use your Terraform state storage account name. Then, run the
following script:
Bash
terraform_state_storage_account=labsecetfstate###

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES

../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname $terraform_state_storage_account --state_subscription $ARM_SUBSCRIPTION_ID
This command copies the tfvars configuration files from the SAP Library's storage
account to the deployer VM.
Terraform
3. Rerun the deployment to apply the changes. Update the storage account name
and key vault name in the script.
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
Bash
export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"

terraform_state_storage_account=labsecetfstate###
vault_name="LABSECEDEP05user###"

export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd $CONFIG_REPO_PATH

deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--deployer_parameter_file "${deployer_parameter_file}" \
--library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--storageaccountname "${terraform_state_storage_account}" \
--vault "${vault_name}"
Bash
export env_code="LAB"
export vnet_code="DEP05"
export region_code="SECE"
export webapp_name="<webAppName>"
export app_id="<appRegistrationId>"
export webapp_id="<webAppId>"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd $DEPLOYMENT_REPO_PATH
cd Webapp/SDAF

zip -r SDAF.zip .
az webapp deploy --resource-group ${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE --name $webapp_name --src-path SDAF.zip --type zip
a. The name of the Terraform state file storage account in the library resource
group:
Select Library resource group > State storage account > Containers >
tfstate . Copy the name of the deployer state file.
Following from the preceding example, the name of the blob is LAB-SECE-
DEP05-INFRASTRUCTURE.terraform.tfstate .
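The blob name above follows the framework's naming convention, which composes names from the environment, region, and virtual-network codes. A small illustrative sketch (the variable names mirror those used in this tutorial):

```shell
# Compose the deployer state file name from the naming convention codes.
env_code="LAB"
region_code="SECE"
vnet_code="DEP05"

deployer_state_key="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.terraform.tfstate"
echo "${deployer_state_key}"   # LAB-SECE-DEP05-INFRASTRUCTURE.terraform.tfstate
```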
3. If necessary, register the service principal. The first time an environment is
instantiated, a service principal must be registered. In this tutorial, the control
plane is in the LAB environment and the workload zone is also in LAB. For this
reason, a service principal must be registered for the LAB environment.
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appID>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenant>"
export key_vault="<vaultName>"
export env_code="LAB"
export region_code="SECE"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
Bash
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/set_secrets.sh \
--environment "${env_code}" \
--region "${region_code}" \
--vault "${key_vault}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}"
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/LAB-SECE-SAP04-INFRASTRUCTURE
2. Optionally, open the workload zone configuration file and, if needed, change the
network logical name to match the network name.
3. Start deployment of the workload zone. The details that you collected earlier are
needed here:
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
Bash
export deployer_env_code="LAB"
export sap_env_code="LAB"
export region_code="SECE"
export deployer_vnet_code="DEP05"
export vnet_code="SAP04"
export tfstate_storage_account="<storageaccountName>"
export key_vault="<vaultName>"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"

parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate"
$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
--parameterfile "${parameterFile}" \
--deployer_environment "${deployer_env_code}" \
--deployer_tfstate_key "${deployerState}" \
--keyvault "${key_vault}" \
--storageaccountname "${tfstate_storage_account}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \
--spn_id "${ARM_CLIENT_ID}" \
--spn_secret "${ARM_CLIENT_SECRET}" \
--tenant_id "${ARM_TENANT_ID}"
Wait for the deployment to finish. The new resource group appears in the Azure portal.
Go to the WORKSPACES/SYSTEM folder and copy the sample configuration files from
the repository.
The database tier, which deploys database VMs and their disks and an Azure
Standard Load Balancer instance. You can run HANA databases or AnyDB
databases in this tier.
The SCS tier, which deploys a customer-defined number of VMs and an Azure
Standard Load Balancer instance.
The application tier, which deploys the VMs and their disks.
The Web Dispatcher tier.
Bash
export sap_env_code="LAB"
export region_code="SECE"
export vnet_code="SAP04"
export SID="L00"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-${SID}

${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/installer.sh \
    --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-${SID}.tfvars" \
    --type sap_system
Check that the system resource group is now in the Azure portal.
The SAP BOM mimics the SAP maintenance planner. There are relevant product
identifiers and a set of download URLs.
YAML
---
name: 'S41909SPS03_v0010'
target: 'S/4 HANA 1909 SPS 03'
version: 7

product_ids:
  dbl: NW_ABAP_DB:S4HANA1909.CORE.HDB.ABAP
  scs: NW_ABAP_ASCS:S4HANA1909.CORE.HDB.ABAP
  scs_ha: NW_ABAP_ASCS:S4HANA1909.CORE.HDB.ABAPHA
  pas: NW_ABAP_CI:S4HANA1909.CORE.HDB.ABAP
  pas_ha: NW_ABAP_CI:S4HANA1909.CORE.HDB.ABAPHA
  app: NW_DI:S4HANA1909.CORE.HDB.PD
  app_ha: NW_DI:S4HANA1909.CORE.HDB.ABAPHA
  web: NW_Webdispatcher:NW750.IND.PD
  ers: NW_ERS:S4HANA1909.CORE.HDB.ABAP
  ers_ha: NW_ERS:S4HANA1909.CORE.HDB.ABAPHA

materials:
  dependencies:
    - name: HANA_2_00_055_v0005ms

  media:
    # SAPCAR 7.22
    - name: SAPCAR
      archive: SAPCAR_1010-70006178.EXE
      checksum: dff45f8df953ef09dc560ea2689e53d46a14788d5d184834bb56544d342d7b
      filename: SAPCAR
      permissions: '0755'
      url: https://softwaredownloads.sap.com/file/0020000002208852020

    # Kernel
    - name: "Kernel Part I ; OS: Linux on x86_64 64bit ; DB: Database independent"
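If you want to inspect a BOM file from the command line, a small helper can pull a product identifier out of the product_ids section. This is an illustrative sketch, not part of the framework; bom_product_id is a hypothetical name, and it assumes the simple "role: value" layout shown above.

```shell
# Hypothetical helper: print the product identifier for a role (dbl, scs, pas, ...)
# from a BOM file by matching the "role:" key in the product_ids section.
bom_product_id() {
  local role="$1" bom_file="$2"
  sed -n "s/^[[:space:]]*${role}:[[:space:]]*//p" "$bom_file" | head -n 1
}

# Example (path is illustrative):
# bom_product_id scs S41909SPS03_v0010/bom.yaml
```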
1. Connect to your deployer VM for the following steps. A copy of the repo is now
there.
2. Add a secret with the username for your SAP user account. Replace <vaultName>
with the name of your deployer key vault. Also replace <sap-username> with your
SAP username.
Bash
export key_vault=<vaultName>
sap_username=<sap-username>

az keyvault secret set --name "S-Username" --vault-name "${key_vault}" --value "${sap_username}";
3. Add a secret with the password for your SAP user account. Replace <vaultName>
with your deployer key vault name and replace <sap-password> with your SAP
password.
Note
The use of single quotation marks when you set sap_user_password is
important. The use of special characters in the password can otherwise cause
unpredictable results.
Azure CLI
sap_user_password='<sap-password>'

az keyvault secret set --name "S-Password" --vault-name "${key_vault}" --value "${sap_user_password}";
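To see why the quoting matters, compare how the shell treats the two forms. This is a standalone illustration; sample_password is just an example value, not a secret used by the framework.

```shell
# Single quotes store the value byte-for-byte. Inside double quotes the shell
# would expand sequences such as $$ (the current process ID), silently
# changing the password before it ever reaches the key vault.
sample_password='Pa$$w0rd!X'
echo "$sample_password"   # Pa$$w0rd!X, exactly as typed
```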
4. Configure your SAP parameters file for the download process. Then, download the
SAP software by using Ansible playbooks. Run the following commands:
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
mkdir BOMS
cd BOMS
vi sap-parameters.yaml
5. Update bom_base_name with the name of the BOM. Replace <vaultName> with the
name of the Azure key vault for the deployer resource group.
YAML
bom_base_name: S42022SPS00_v0001ms
deployer_kv_name: <vaultName>
BOM_directory: ${HOME}/Azure_SAP_Automated_Deployment/samples/SAP
6. Run the Ansible playbook to download the software. One way you can run the
playbooks is to use the Downloader menu. Run the download_menu script.
Bash
${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh

1) BoM Downloader
2) Quit
Please select playbook:
Select the playbook 1) BoM Downloader to download the SAP software described in
the BOM file into the storage account. Check that the sapbits container has all
your media for installation.
You can run the playbook by using the configuration menu or directly from the
command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/BOMS/

export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    --extra-vars="bom_processing=true"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
If you want, you can also pass the SAP User credentials as parameters.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/BOMS/

sap_username=<sap-username>
sap_user_password='<sap-password>'

export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    --extra-vars="s_user=${sap_username}"
    --extra-vars="s_password=${sap_user_password}"
    --extra-vars="bom_processing=true"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

Make sure you have the following files in the current folder: sap-parameters.yaml and
L00_hosts.yaml.
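A small preflight check can catch a missing file before a long playbook run starts. This sketch is not part of the framework; require_files is a hypothetical helper.

```shell
# Hypothetical preflight: fail fast if any required file is absent from the
# current folder, printing each missing name.
require_files() {
  local missing=0 f
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f" >&2; missing=1; }
  done
  return $missing
}

# Intended use in the system folder:
# require_files sap-parameters.yaml L00_hosts.yaml
```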
For a standalone SAP S/4HANA system, there are eight playbooks to run in sequence.
One way you can run the playbooks is to use the configuration menu.
Bash
${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to download the software from the SAP Library
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the SAP central services (SCS)
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the database instance
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to load the database
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml
You can run the playbook by using the configuration menu or the command line.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to configure database high availability
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the primary application server (PAS)
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the application servers
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_app_install.yaml
You've now deployed and configured a standalone HANA system. (To configure a highly
available SAP HANA database instead, run the HANA HA playbook shown earlier.) If your
landscape includes a Web Dispatcher tier, run the Web Dispatcher installation playbook.
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/

export sap_sid=L00
export ANSIBLE_PRIVATE_KEY_FILE=sshkey

playbook_options=(
    --inventory-file="${sap_sid}_hosts.yaml"
    --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
    --extra-vars="_workspace_directory=`pwd`"
    --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
    --extra-vars="@sap-parameters.yaml"
    "${@}"
)

# Run the playbook to retrieve the ssh key from the Azure key vault
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml

# Run the playbook to install the Web Dispatcher
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml
To remove the entire SAP infrastructure that you deployed:

Remove the SAP infrastructure resources and workload zones from the deployer VM.
Remove the control plane from Cloud Shell.
Before you begin, sign in to your Azure account. Then, check that you're in the correct
subscription.
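The subscription check amounts to comparing the active subscription ID with the one you intend to use. A minimal sketch (subscription_matches is a hypothetical helper; the az commands shown in the comments require the Azure CLI):

```shell
# Hypothetical helper: true when the active subscription is the expected one.
subscription_matches() {
  [ "$1" = "$2" ]
}

# Intended use with the Azure CLI:
#   current=$(az account show --query id --output tsv)
#   subscription_matches "$current" "$ARM_SUBSCRIPTION_ID" \
#     || az account set --subscription "$ARM_SUBSCRIPTION_ID"
```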
Bash
export sap_env_code="LAB"
export region_code="SECE"
export sap_vnet_code="SAP04"
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-L00

${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \
    --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-L00.tfvars" \
    --type sap_system
Bash
export sap_env_code="LAB"
export region_code="SECE"
export sap_vnet_code="SAP04"

cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE

${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \
    --parameterfile ${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE.tfvars \
    --type sap_landscape
Bash
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/
Bash
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
Bash
export region_code="SECE"
export env_code="LAB"
export vnet_code="DEP05"
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \
    --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars
Next step
Configure the control plane
Tutorial: Use SAP Deployment
Automation Framework with DevOps
Article • 08/31/2023
This tutorial shows you how to perform the deployment activities of SAP Deployment
Automation Framework by using Azure DevOps Services.
Prerequisites
An Azure subscription. If you don't have an Azure subscription, you can create a
free account .
Note
The free Azure account might not be sufficient to run the deployment.
A configured Azure DevOps instance. For more information, see Configure Azure
DevOps Services for SAP Deployment Automation.
For the SAP software acquisition and the Configuration and SAP installation
pipelines, a configured self-hosted agent.
The self-hosted agent virtual machine is deployed as part of the control plane
deployment.
Overview
These steps reference and use the default naming convention for the automation
framework. Example values are also used for naming throughout the configurations. This
tutorial uses the following names:
In this tutorial, the X00 SAP system is deployed with the following configuration:
Standalone deployment
HANA DB VM SKU: Standard_M32ts
ASCS VM SKU: Standard_D4s_v3
APP VM SKU: Standard_D4s_v3
Run the pipeline by selecting the Deploy control plane pipeline from the Pipelines
section. Enter MGMT-WEEU-DEP00-INFRASTRUCTURE as the deployer configuration name and
MGMT-WEEU-SAP_LIBRARY as the SAP library configuration name.
You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the control plane details on the Extensions tab.
Run the pipeline by selecting the Deploy workload zone pipeline from the Pipelines
section. Enter DEV-WEEU-SAP01-INFRASTRUCTURE as the workload zone configuration name
and MGMT as the deployer environment name.
You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the workload zone details on the Extensions tab.
Deploy the SAP system
The deployment uses the configuration defined in the Terraform variable file located in
the samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00 folder.
Run the pipeline by selecting the SAP system deployment pipeline from the Pipelines
section. Enter DEV-WEEU-SAP01-X00 as the SAP system configuration name.
You can track the progress in the Azure DevOps Services portal. After the deployment is
finished, you can see the SAP system details on the Extensions tab.
Next step
Configure control plane
Configure new and existing
deployments
Article • 09/03/2023
You can use SAP Deployment Automation Framework in both new and existing
deployment scenarios.
In new deployment scenarios, the automation framework doesn't use existing Azure
infrastructure. The deployment process creates the virtual networks, subnets, key vaults,
and more.
New deployment
In this scenario, the automation framework creates all Azure components and uses the
deployer. This example deployment contains:
Deployer       DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars
Library        LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars
Workload zone  LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE/DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars
System         SYSTEM/DEV-WEEU-SAP01-X00/DEV-WEEU-SAP01-X00.tfvars
Clone the SAP Deployment Automation Framework repository and copy the sample
files to your root folder for parameter files:
Bash
cd ~/Azure_SAP_Automated_Deployment
mkdir -p WORKSPACES/DEPLOYER
cp sap-automation/samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE
WORKSPACES/DEPLOYER/. -r
mkdir -p WORKSPACES/LIBRARY
cp sap-automation/samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY
WORKSPACES/LIBRARY/. -r
mkdir -p WORKSPACES/LANDSCAPE
cp sap-automation/samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE
WORKSPACES/LANDSCAPE/. -r
mkdir -p WORKSPACES/SYSTEM
cp sap-automation/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00
WORKSPACES/SYSTEM/. -r
cd WORKSPACES
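After copying the samples, you can sanity-check the folder layout before continuing. This sketch is illustrative and not part of the framework; check_workspaces is a hypothetical helper.

```shell
# Hypothetical helper: confirm the four expected configuration folders exist
# under the WORKSPACES root used in this scenario.
check_workspaces() {
  local root="$1" d
  for d in DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE \
           LIBRARY/MGMT-WEEU-SAP_LIBRARY \
           LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE \
           SYSTEM/DEV-WEEU-SAP01-X00; do
    [ -d "$root/$d" ] || { echo "missing: $root/$d" >&2; return 1; }
  done
}

# Intended use:
# check_workspaces ~/Azure_SAP_Automated_Deployment/WORKSPACES
```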
Prepare the control plane by installing the deployer and library. Be sure to replace the
sample values with your service principal's information.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="${subscriptionID}"

$DEPLOYMENT_REPO_PATH/deploy/scripts/prepare_region.sh \
    --deployer_parameter_file DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE/MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/MGMT-WEEU-SAP_LIBRARY/MGMT-WEEU-SAP_LIBRARY.tfvars \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve
PowerShell
Import-Module "SAPDeploymentUtilities.psd1"
$Subscription="<subscriptionID>"
$SPN_id="<appID>"
$SPN_password="<password>"
$Tenant_id="<tenant>"
Deploy the workload zone by running either the Bash or PowerShell script.
Be sure to replace the sample credentials with your service principal's information. You
can use the same service principal credentials that you used in the control plane
deployment. For production deployments, we recommend using different service
principals per workload zone.
Bash
subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-
INFRASTRUCTURE
${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameterfile DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars \
    --deployer_environment 'MGMT' \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve
PowerShell
cd \Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-WEEU-SAP01-
INFRASTRUCTURE
$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"
Deploy the SAP system. Run either the Bash or PowerShell command.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00
PowerShell
Import-Module "SAPDeploymentUtilities.psd1"
cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X00
Important

Modify all example configurations as necessary for your scenario. Update all the
<arm_resource_id> placeholders.
Deployer       DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE/MGMT-EUS2-DEP01-INFRASTRUCTURE.tfvars
Library        LIBRARY/MGMT-EUS2-SAP_LIBRARY/MGMT-EUS2-SAP_LIBRARY.tfvars
Workload zone  LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE/QA-EUS2-SAP03-INFRASTRUCTURE.tfvars
System         SYSTEM/QA-EUS2-SAP03-X01/QA-EUS2-SAP03-X01.tfvars
Copy the sample files to your root folder for parameter files:
Bash
cd ~/Azure_SAP_Automated_Deployment
mkdir -p WORKSPACES/DEPLOYER
cp sap-automation/samples/WORKSPACES/DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE
WORKSPACES/DEPLOYER/. -r
mkdir -p WORKSPACES/LIBRARY
cp sap-automation/samples/WORKSPACES/LIBRARY/MGMT-EUS2-SAP_LIBRARY
WORKSPACES/LIBRARY/. -r
mkdir -p WORKSPACES/LANDSCAPE
cp sap-automation/samples/WORKSPACES/LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE
WORKSPACES/LANDSCAPE/. -r
mkdir -p WORKSPACES/SYSTEM
cp sap-automation/samples/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01
WORKSPACES/SYSTEM/. -r
cd WORKSPACES
The sample tfvars file has <azure_resource_id> placeholders. You need to replace
them with the actual Azure resource IDs for resource groups, virtual networks, and
subnets.
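Before deploying, you can scan the copied tfvars files for placeholders that haven't been replaced yet. An illustrative sketch (check_placeholders is a hypothetical helper, not part of the framework):

```shell
# Hypothetical helper: succeed only if no <...> placeholder remains in the
# given files; otherwise print the offending file:line entries and fail.
check_placeholders() {
  ! grep -Hn '<[A-Za-z_][A-Za-z_]*>' "$@"
}

# Intended use in a configuration folder:
# check_placeholders *.tfvars
```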
Deploy the control plane by installing the deployer and SAP library. Run either the Bash
or PowerShell command. Be sure to replace the sample credentials with your service
principal's information.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="${subscriptionID}"

$DEPLOYMENT_REPO_PATH/deploy/scripts/prepare_region.sh \
    --deployer_parameter_file DEPLOYER/MGMT-EUS2-DEP01-INFRASTRUCTURE/MGMT-EUS2-DEP01-INFRASTRUCTURE.tfvars \
    --library_parameter_file LIBRARY/MGMT-EUS2-SAP_LIBRARY/MGMT-EUS2-SAP_LIBRARY.tfvars \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve
PowerShell
cd \Azure_SAP_Automated_Deployment\WORKSPACES
$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"
New-SAPAutomationRegion `
    -DeployerParameterfile .\DEPLOYER\MGMT-EUS2-DEP01-INFRASTRUCTURE\MGMT-EUS2-DEP01-INFRASTRUCTURE.json `
    -LibraryParameterfile .\LIBRARY\MGMT-EUS2-SAP_LIBRARY\MGMT-EUS2-SAP_LIBRARY.json `
    -Subscription $subscription `
    -SPN_id $appId `
    -SPN_password $spn_secret `
    -Tenant_id $tenant_id `
    -Silent
Deploy the workload zone by running either the Bash or PowerShell script.
Be sure to replace the sample credentials with your service principal's information. You
can use the same service principal credentials that you used in the control plane
deployment. For production deployments, we recommend using different service
principals per workload zone.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/QA-EUS2-SAP03-
INFRASTRUCTURE
subscriptionID=<subscriptionID>
appId=<appID>
spn_secret=<password>
tenant_id=<tenant>
${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameterfile QA-EUS2-SAP03-INFRASTRUCTURE.tfvars \
    --deployer_environment MGMT \
    --subscription $subscriptionID \
    --spn_id $appId \
    --spn_secret $spn_secret \
    --tenant_id $tenant_id \
    --auto-approve
PowerShell
cd \Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\QA-EUS2-SAP03-
INFRASTRUCTURE
$subscription="<subscriptionID>"
$appId="<appID>"
$spn_secret="<password>"
$tenant_id="<tenant>"
Deploy the SAP system in the QA environment. Run either the Bash or PowerShell
command.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01
PowerShell
cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\QA-EUS2-SAP03-X01
Next step
Tutorial: Enterprise scale for SAP Deployment Automation Framework
Configure Azure monitor for SAP with
SAP Deployment Automation
Framework
Article • 02/27/2024
By integrating Azure Monitor for SAP and SAP Deployment Automation Framework, you
can achieve a faster, easier, and more reliable deployment and operation of your SAP
systems on Azure. You can use the automation framework to provision and configure
the SAP systems, and Azure Monitor for SAP to monitor and optimize the performance
and availability of those SAP systems.
This integration with SAP on Azure Deployment Automation Framework enables you to
reduce the complexity and deployment cost of running your SAP environments on
Azure, by helping to automate the monitoring of different components of an SAP
landscape.
Overview
As described in the overview document, the automation framework has two main
components:
Deployment of Azure Monitor for SAP (AMS) and the providers can be automated from
the SAP Deployment Automation Framework (SDAF) to simplify the monitoring process.
In this architecture, one Azure Monitor for SAP resource is deployed in each workload
zone, which represents the environment. This resource is responsible for monitoring the
performance and availability of different components of the SAP systems in that
environment.
To monitor different components of each SAP system, there are corresponding providers
and all these providers are deployed in the Azure Monitor for SAP resource of that
environment. This setup allows for efficient monitoring and management of the SAP
systems, as all the providers for a particular system are located in the same Azure
Monitor for SAP resource. The automation framework automates the following steps:
Note
The key components of the Azure monitor for SAP resource created in the workload
zone resource group would include:
Terraform
#########################################################################
# AMS Subnet variables
#########################################################################
# If defined, these parameters control the subnet name and the subnet prefix.
# ams_subnet_name is an optional parameter and should only be used if the default naming is not acceptable.
# ams_subnet_name = ""
#########################################################################
# AMS instance variables
#########################################################################
# If defined, these parameters control the AMS instance (Azure Monitor for SAP).
# create_ams_instance is an optional parameter, and should be set to true if the AMS instance is to be created.
create_ams_instance = true
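Before running a deployment, you can quickly confirm that a workload-zone tfvars file actually enables the AMS instance. This is a minimal sketch; the helper name and the grep-based parsing are illustrative, not part of the framework:

```shell
#!/bin/sh
# Hypothetical helper: succeeds when an uncommented
# "create_ams_instance = true" line appears in the given tfvars file.
ams_enabled() {
  grep -Eq '^[[:space:]]*create_ams_instance[[:space:]]*=[[:space:]]*true' "$1"
}
```

Because the pattern is anchored to the start of the line, a commented-out `# create_ams_instance = true` does not count as enabled.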
You need a copy of the SAP software before you can use SAP Deployment Automation
Framework. Prepare your Azure environment so that you can put the SAP media in your
storage account. Then, download the SAP software by using Ansible playbooks.
Prerequisites
An Azure subscription. If you don't have an Azure subscription, you can create a
free account .
An SAP user account (SAP-User or S-User account) with software download
privileges.
1. Sign in to the Azure CLI with the account you want to use.
Azure CLI
az login
2. Add a secret with the username for your SAP user account. Replace <keyvault-
name> with the name of your deployer key vault. Also replace <sap-username> with
your SAP username.
Azure CLI
export key_vault=<keyvault-name>
sap_username=<sap-username>
az keyvault secret set --name "S-Username" --vault-name "${key_vault}"
--value "${sap_username}";
3. Add a secret with the password for your SAP user account. Replace <keyvault-
name> with the name of your deployer key vault. Also replace <sap-password> with
your SAP password.
sap_user_password="<sap-password>"
az keyvault secret set --name "S-Password" --vault-name "${key_vault}"
--value "${sap_user_password}";
4. Two other secrets are needed in this step for the storage account. The automation
framework automatically sets up the sapbits storage account. It's a good practice to
verify that these secrets exist in your deployer key vault:
text
sapbits-access-key
sapbits-location-base-path
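Rather than checking each secret by hand, you can compare the names you expect against the names a vault reports (for example, from `az keyvault secret list`). The helper below is a sketch with an assumed interface: it takes a space-separated list of existing secret names and prints the required ones that are missing:

```shell
#!/bin/sh
# Hypothetical helper: prints required secrets that are absent from $1,
# a space-separated list of secret names already present in the vault.
required_secrets="sapbits-access-key sapbits-location-base-path S-Username S-Password"

missing_secrets() {
  existing="$1"
  missing=""
  for s in $required_secrets; do
    case " $existing " in
      *" $s "*) ;;                      # already present
      *) missing="${missing}${s} " ;;   # record as missing
    esac
  done
  printf '%s' "$missing"
}
```

For example, `missing_secrets "$(az keyvault secret list --vault-name "${key_vault}" --query '[].name' -o tsv | tr '\n' ' ')"` would report which of the four secrets still need to be created.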
Bash
mkdir -p ~/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS; cd $_
Bash
vi sap-parameters.yaml
b. Change the value of kv_name to the name of the deployer key vault.
c. (If needed) Change the value of secret_prefix to match the prefix in your
environment (for example, DEV-WEEU-SAP ).
Bash
~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh
text
1) BoM Downloader
2) Quit
Please select playbook:
Bash
ansible-playbook \
  --user azureadm \
  --extra-vars="@sap-parameters.yaml" \
  ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml
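Before starting the downloader, it's worth confirming that sap-parameters.yaml actually defines kv_name (step b above); an empty or missing value makes the playbook fail late. A minimal sketch, assuming the simple `key: value` layout shown in this guide:

```shell
#!/bin/sh
# Hypothetical helper: succeeds when the given YAML file contains a
# non-empty "kv_name:" entry at the start of a line.
has_kv_name() {
  grep -Eq '^[[:space:]]*kv_name:[[:space:]]*[^[:space:]]+' "$1"
}
```

For example, `has_kv_name sap-parameters.yaml || echo "set kv_name first"` is a cheap pre-flight check before running the download menu.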
Next step
Deploy the SAP infrastructure
Acquire media for BOM creation
Article • 02/10/2023
The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM). To
create your BOM, you have to locate and download relevant SAP installation media.
Then, you need to upload these media files to your Azure storage account.
Note
This guide covers advanced deployment topics. For a basic explanation of how to
deploy the automation framework, see the get started guide instead.
This guide is for configurations that use either the SAP Application (DB) or HANA
databases.
Prerequisites
An SAP account with permissions to download the SAP software and access the
Maintenance Planner.
An installation of the SAP download manager on your computer.
Information about your SAP system:
SAP account username and password. The SAP account can't be linked to an
SAP Universal ID.
The SAP system product to deploy (such as S/4HANA)
The SAP System Identifier (SAP SID)
Any language pack requirements
The operating system (OS) to use in the application infrastructure
An Azure subscription. If you don't already have an Azure subscription, create a
free account .
Acquire media
To prepare for downloading the SAP installation media:
1. On your computer, create a unique directory for your stack SAP downloads. For
example, ~/Downloads/S4HANA_1909_SP2/ .
a. On the search bar, make sure the search type is set to Downloads.
c. In the table Items Available to Download, select the row for SAPCAR with
Maintenance Software Component. This step filters available downloads for the
latest version of the utility.
d. Make sure the drop-down menu for the table shows the correct OS type. For
example, LINUX ON X86_64 64BIT .
e. Select the checkbox next to the filename of the SAPCAR executable. For
example, SAPCAR_1320-80000935.EXE .
f. Select the shopping cart icon to add your selection to the download basket.
d. Select Next
e. For Install a New System, enter the SAP SID you're using.
f. For Target Version, select your target SAP version. For example, SAP S/4HANA
2020.
g. For Target Stack, select your target stack. For example, Initial Shipment Stack.
i. Select Next
7. Design your codeployment.
b. For Target Version, select your target version for codeployment. For example,
SAP FIORI FOR SAP S/4HANA 2020.
c. For Target Stack, select your target stack for codeployment. For example, Initial
Shipment Stack.
d. Select Next
8. Select Continue Planning. If you're using a new system, select Next. If you're using
an existing system, make the following changes:
c. Select Next.
9. Optionally, under Select Stack Independent Files, configure settings for non-ABAP
databases. You can choose to expand the database and deselect non-required
language files.
11. Download stack XML files to the stack download directory you created earlier.
f. Go to your download basket again in the SAP Launchpad. You might need to
refresh the page to see your new selections.
g. Select the T icon to download a file with the URLs for your download basket.
Only follow these steps if you want to run the scripted BOM generation. You must
perform these actions before you run the SAP Download Manager. If you don't
want to run the scripted BOM generation, skip to the next section.
2. Add a new request by selecting the plus sign (+) button in the workspace tab. A
new page opens with your request.
_MODE=BASKET_CONTENT&_VERSION=3.1.2&$format=json .
12. On the Body tab, make sure to select the Raw view.
13. Copy the raw JSON response body. Save the response in your stack download
directory.
Download media
To download the SAP installation media:
4. Set your download directory to the stack download directory that you created. For
example, ~/Downloads/S4HANA_1909_SP2/ .
5. Download all files from your download basket into this directory.
Note
The text file that contains your SAP download URLs is always
myDownloadBasketFiles.txt . However, this file is specific to the application or
database. You should keep this file with your other downloads for this particular
section for use in later sections.
Upload media
To upload the SAP media and stack files to your Azure storage account:
2. Under Azure services, select Resource groups. Or, enter resource groups in the
search bar.
4. On the resource group page, select the saplib storage account in the Resources
table.
5. On the storage account page's menu, select Containers under Data storage.
c. Navigate to the directory where you downloaded the SAP media previously.
d. Select all the archive files. These file names are similar to *.SAR , *.RAR , *.ZIP ,
and SAPCAR*.EXE .
c. Navigate to the download directory that you created in the previous section.
d. Select all your stack files. These file names are similar to MP_*.
(xml|xls|pdf|txt) .
For example, in the BOM name S4HANA_2020_ISS_v001 , the version is 2020 , the
service pack is ISS for the initial software shipment, and the stack is v001 .
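The naming convention (product, version, service pack, stack) can be composed mechanically. A sketch with a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: build a BOM folder name from its four parts,
# following the <product>_<version>_<service-pack>_<stack> convention.
bom_name() {
  printf '%s_%s_%s_%s' "$1" "$2" "$3" "$4"
}

# bom_name S4HANA 2020 ISS v001 produces S4HANA_2020_ISS_v001
```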
Next steps
Prepare BOM
Prepare SAP BOM
Article • 05/07/2023
The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM).
The BOM helps configure your SAP systems.
The automation framework's GitHub repository contains a set of Sample BOMs that
you can use to get started. It is also possible to create BOMs for other SAP Applications
and databases.
If you want to generate a BOM that includes permalinks, follow the steps for creating
this type of BOM.
Note
This guide covers advanced deployment topics. For a basic explanation of how to
deploy the automation framework, see the get started guide instead.
Prerequisites
Get, download, and prepare your SAP installation media and related files if you
haven't already done so.
SAP Application (DB) or HANA media in your Azure storage account.
A YAML editor for working with the BOM file.
Application installation templates for:
SAP Central Services (SCS)
The SAP Primary Application Server (PAS)
The SAP Additional Application Server (AAS)
Downloads of necessary stack files to the folder you created for acquiring SAP
media. For more information, see the basic BOM preparation how-to guide.
A copy of your SAP Download Basket manifest ( DownloadBasket.json ), downloaded
to the folder you created for acquiring SAP media.
An installation of the Postman utility .
An Azure subscription. If you don't already have an Azure subscription, create a
free account .
An SAP account with permissions to work with the database you want to use.
A system that runs Linux-type commands for validating the BOM. Install the
commands yamllint and ansible-lint on the system.
Scripted creation process
This process automates the same steps as the manual BOM creation process. Review the
script limitations before using this process.
Bash
cd stackfiles
2. Run the BOM generation script. Replace the example path with the correct path to
your utilities folder. For example:
Bash
~/Azure_SAP_Automated_Deployment/deploy/scripts/generate_bom.sh > ../bom.yml
3. For the product parameter ( product ), enter the SAP product name. For example,
SAP_S4HANA_1809_SP4 . If you don't enter a value, the script attempts to determine
the product name automatically.
5. Review the templates section ( templates ). Make sure the file and
override_target_location values are correct. If necessary, edit those lines or
comment them out. For example:
yml
templates:
# - name: "S4HANA_2020_ISS_v001 ini file"
# file: S4HANA_2020_ISS_v001.inifile.params
# override_target_location: "{{ target_media_location }}/config"
6. Review the stack files section ( stackfiles ). Make sure the item names and files are
correct. If necessary, edit those lines.
Script limitations
The scripted BOM creation process has the following limitations.
The scripting has a hard-coded dependency on HANA2. Edit your BOM file manually to
match the required dependency name. For example:
yml
dependencies:
- name: "HANA2"
yml
- name: SAPCAR
archive: SAPCAR_1320-80000935.EXE
override_target_filename: SAPCAR.EXE
- name: "SWPM20SP07"
archive: "SWPM20SP07_2-80003424.SAR"
override_target_filename: SWPM.SAR
sapurl: "https://softwaredownloads.sap.com/file/0020000001812632020"
The script only generates entries for media files that the SAP Maintenance Planner
identifies. This limitation occurs because it processes the stack .xsl file. If you add any
files to your download basket separately, such as through SAP Launchpad, you must add
those files to the BOM manually.
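When you add such files to the BOM manually, the entry follows the same two-line shape as the generated media items. A sketch; the helper name and the indentation are assumptions based on the examples in this guide:

```shell
#!/bin/sh
# Hypothetical helper: append a manually downloaded archive as a new
# media entry (name + archive) at the end of the given BOM file.
add_media_entry() {
  bom="$1"
  name="$2"
  archive="$3"
  printf '  - name: %s\n    archive: %s\n' "$name" "$archive" >> "$bom"
}
```

For example, `add_media_entry bom.yml SAPCAR SAPCAR_1320-80000935.EXE` adds the SAPCAR utility that you placed in the download basket separately. Re-run the BOM validation afterward, since manual edits are a common source of indentation errors.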
1. Open the downloads folder you created for acquiring SAP media
4. Add a BOM header with names for the build and target. The name value must be
the same as the BOM folder name in your storage account. For example:
yml
name: 'S4HANA_2020_ISS_v001'
target: 'ABAP PLATFORM 2020'
5. Add a defaults section with the target location. Use the path to the folder on the
target server where you want to copy installation files. Typically, use {{
target_media_location }} as follows:
yml
defaults:
target_location: "{{ target_media_location }}/download_basket"
6. Add a product identifiers section. You populate these values later as part of the
template preparation. For example:
yml
product_ids:
scs:
db:
pas:
aas:
web:
7. Add a materials section to specify the list of required materials. Add any
dependencies on other BOMs in this section. For example:
yml
materials:
dependencies:
- name: HANA2
c. For each item in the download basket, note the String and Number data. The
String data provides the file name (for example, igshelper_17-10010245.sar )
and a friendly description (for example, SAP IGS Fonts and Textures ). You'll
record the Number data after each entry in your BOM.
9. Add the list of media to bom.yml . The order of these items doesn't matter;
however, you might want to group related items together for readability. Add
SAPCAR separately, even though your SAP download basket contains this utility. For
example:
yml
media:
- name: SAPCAR
archive: SAPCAR_1320-80000935.EXE
<...>
10. Optionally, if you need to override the target media location, add the parameter
override_target_location to a media item. For example,
override_target_location: "{{ target_media_location }}/config" .
yml
templates:
yml
stackfiles:
- name: Download Basket JSON Manifest
file: downloadbasket.json
Permalinks
You can automatically generate a functional basic BOM. However, by default the BOM
doesn't include permanent URLs (permalinks) to the SAP media. If you want to create
permalinks, you need to perform more steps before you acquire the SAP media.
Note
Manual generation of a full SAP BOM with permalinks takes about twice as long as
preparing a basic BOM manually.
2. For each result, note the contents of the Value line. For example:
JSON
3. Copy down the first and fourth values separated by vertical bars.
b. The fourth value is the number you'll use to match with your media list. For
example, 61489 .
c. Optionally, copy down the second value, which denotes the file type. For
example, SP_B for kernel binary files, SPAT for non-kernel binary files, and CD
for database exports.
4. Use the fourth value as a key to match your download basket to your media list.
Match the values (for example, 61489 ) with the values you added as comments for
the media items (for example, # 61489 ).
5. For each matching entry in bom.yml , add a new value for the SAP URL. For the URL,
use https://softwaredownloads.sap.com/file/ plus the third value for that item
(for example, 0020000000703122018 ). For example:
yml
You can find multiple complete, usable BOM files in the GitHub repository folder.
yml
---
name: 'S4HANA_2020_ISS_v001'
target: 'ABAP PLATFORM 2020'
defaults:
target_location: "{{ target_media_location }}/download_basket"
product_ids:
scs:
db:
pas:
aas:
web:
materials:
dependencies:
- name: HANA2
media:
- name: SAPCAR
archive: SAPCAR_1320-80000935.EXE
- name: SWPM
archive: SWPM20SP06_6-80003424.SAR
templates:
- name: "S4HANA_2020_ISS_v001 ini file"
file: S4HANA_2020_ISS_v001.inifile.params
override_target_location: "{{ target_media_location }}/config"
stackfiles:
- name: Download Basket JSON Manifest
file: downloadbasket.json
override_target_location: "{{ target_media_location }}/config"
Validate BOM
You can validate your BOM structure from any OS that runs Linux-type commands. For
Windows, use Windows Subsystem for Linux (WSL). Another option is to run the
validation from your deployer if there's a copy of the BOM file there.
1. Run the validation script check_bom.sh from the directory containing your BOM.
For example:
Bash
~/Azure_SAP_Automated_Deployment/deploy/scripts/check_bom.sh bom.yml
Successful validation
A successful validation shows the following output. The validation uses the yamllint
and ansible-lint commands that you installed in the prerequisites.
Output
An unsuccessful validation reports the lines in error. For example:
Output
../documentation/ansible/system-design-deployment/examples/S4HANA_2020_ISS_v001/bom_with_errors.yml
178:16 error too many spaces after colon (colons)
179:16 error too many spaces after colon (colons)
180:16 error too many spaces after colon (colons)
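Beyond yamllint and ansible-lint, a quick structural check can catch a missing top-level section before a long deployment run. This is a sketch under stated assumptions (the required keys are the ones shown in the sample BOM in this guide); it is not the real check_bom.sh logic:

```shell
#!/bin/sh
# Sketch: succeed only when every expected top-level BOM key is present
# at the start of a line in the given file.
bom_has_required_keys() {
  for key in name: target: defaults: product_ids: materials:; do
    grep -q "^${key}" "$1" || return 1
  done
}
```

For example, `bom_has_required_keys bom.yml || echo "BOM is missing a section"`.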
3. Under Azure services, select Resource groups. Or, enter resource groups in the
search bar.
5. On the resource group page, select the storage account saplib in the Resources
table.
6. On the storage account page's menu, select Containers under Data storage.
Next steps
How to generate SAP Application BOM
Generate SAP Application templates for
automation
Article • 02/10/2023
The SAP on Azure Deployment Automation Framework uses a Bill of Materials (BOM) to
define the SAP Application. Before you can deploy a system using a custom BOM, you
need to also create the templates for the ini-files used in the unattended SAP
installation. This guide covers how to create the application templates for an SAP/S4
deployment. The process is the same for the other SAP applications.
Prerequisites
Get, download, and prepare your SAP installation media and related files if you
haven't already done so. Make sure to have the name of the SAPCAR utility file
that you downloaded available.
Prepare your BOM if you haven't already done so. Make sure to have the BOM file
that you created available.
An Azure subscription. If you don't already have an Azure subscription, create a
free account .
An SAP account with permissions to work with the database you want to use.
Optionally, create a virtual machine (VM) within Azure to use for transferring SAP
media from your storage account. This method improves the transfer speed. Make
sure you have connectivity between your VM and the target SAP VM. For example,
check that your SSH keys are in place.
2. Change the root user password to a known value. You'll use this password later to
connect to the SAP Software Provisioning Manager (SWPM).
Bash
mkdir /tmp/workdir; cd $_
4. Make sure there's a temporary directory for the SAP Application template.
Bash
mkdir /tmp/app_template/
5. Change the permissions for the SAPCAR utility to make this file executable. Replace
<SAPCAR>.EXE with the name of the file you downloaded. For example,
SAPCAR_1311-80000935.EXE .
Bash
chmod +x /usr/sap/install/download_basket/<SAPCAR>.EXE
Bash
mkdir -p /usr/sap/install/SWPM
Bash
/usr/sap/install/download_basket/SAPCAR_1311-80000935.EXE -xf /usr/sap/install/SWPM20SP07_0-80003424.SAR -R /usr/sap/install/SWPM/
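If you have several .SAR archives to unpack, the SAPCAR call can be looped. A sketch; the helper name is hypothetical, and the paths are the ones used in this guide:

```shell
#!/bin/sh
# Hypothetical helper: extract every .SAR archive in $2 into $3,
# using the SAPCAR executable given as $1.
extract_all_sars() {
  sapcar="$1"
  src="$2"
  dest="$3"
  for sar in "$src"/*.SAR; do
    [ -e "$sar" ] || continue   # no .SAR files: nothing to do
    "$sapcar" -xf "$sar" -R "$dest"
  done
}
```

For example, `extract_all_sars /usr/sap/install/download_basket/SAPCAR_1311-80000935.EXE /usr/sap/install/download_basket /usr/sap/install/SWPM`.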
You can do unattended SAP installations with parameter files. These files pass all
required parameters to the SWPM installer.
Note
To generate the parameter file, you need to partially perform a manual installation.
For more information about why, see SAP NOTE 2230669 .
1. Sign in to your VM as the root user through your command-line interface (CLI).
2. Run the command hostname to get the host name of the VM from which you're
running the installation. Note both the unique hostname (where <example-vm-
hostname> is in the example output), and the full URL for the GUI.
3. Check that you have all necessary media and tools installed on your VM.
b. Replace <XML-stack-file-path> with the XML stack file path that you created.
For example, /usr/sap/install/config/MP_STACK_S4_2020_v001.xml .
Bash
/usr/sap/install/SWPM/sapinst \
SAPINST_XML_FILE=<XML-stack-file-path> \
SAPINST_USE_HOSTNAME=<target-VM-hostname> \
SAPINST_START_GUISERVER=true \
SAPINST_STOP_AFTER_DIALOG_PHASE=true
Output
5. Open your browser and visit the URL for the GUI that you previously obtained.
6. In the drop-down menu, select SAP S/4HANA Server 2020 > SAP HANA Database
> Installation > Application Server ABAP > Distributed System > ASCS Instance.
c. Select Next.
c. Select Next.
10. Set up a main password, which you only use during the creation of this ASCS
instance. You can only use alphanumeric characters and the special characters # ,
$ , @ , and _ for your password. You also can't use a digit or underscore as the first
character.
c. Select Next.
11. Configure more administrator settings. Other password fields are pre-populated
based on the main password you set.
a. Set the identifier of the administrator OS user ( <sid>adm where <sid > is your
SID) to 2000 .
c. Select Next.
12. When prompted for the SAPEXE kernel file path, enter
/usr/sap/install/download_basket , then select Next.
13. Make sure the package status is Available, then select Next.
14. Make sure the SAP Host Agent installation file status is Available, then select Next.
b. Make sure to set the virtual host name for the instance.
c. Select Next.
17. Keep the ABAP message server port settings. These default settings are 3600 and
3900 . Then, select Next.
18. Don't select any other components to install, then select Next.
20. Enable Yes, clean up operating system users, then select Next.
22. In the CLI, find your installation configuration file in the temporary SAP installation
directory. At this point, the file is called inifile.params .
b. If the file .lastInstallationLocation exists, view the file contents and note the
directory listed.
c. If a directory for the product that you're installing exists, such as S4HANA2020 , go
to the product folder. For example, run cd
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/HA/ABAP/ASCS/ .
23. In your browser, in the SWPM GUI, select Cancel. Now, you have the ini files
required to build the template that can do an unattended installation of ASCS.
Bash
cp <path-to-INI-file>/inifile.params /tmp/app_template/scs.inifile.params
Install and configure your HANA and SCS instances. These instances must be
online before you complete the database content load.
The <sid>adm user you created when you generated the unattended installation
file for ASCS must be a member of the sapinst group.
The user identifier for <sid>adm must match the value that hdblcm uses. This example
uses 2000 .
1. Make and change to a temporary directory. Replace <sid> with your SID.
Bash
Bash
/usr/sap/install/SWPM/sapinst \
SAPINST_XML_FILE=/usr/sap/install/config/MP_STACK_S4_2020_v001.xml
a. In the drop-down menu, go to SAP S/4HANA Server 2020 > SAP HANA
Database > Installation > Application Server ABAP > Distributed System >
Database Instance > Distributed System.
7. Note the path of the profile directory that the ASCS installation creates. For
example, /usr/sap/<SID>/SYS/profile where <SID> is your SID. Then, select Next.
8. Enter the ABAP message server port for your ASCS instance. The port number is
36<InstanceNumber> , where <InstanceNumber> is the HANA instance number. For
example, if there are zero instances, 3600 is the port number. Then, select Next.
9. Enter your main password to use during the installation of database content. Then,
select Next.
10. Make sure the details for the administrator user ( <SID>adm , where <SID> is your
SID) are correct. Then, select Next.
11. Enter your information for the SAP HANA Database Tenant.
a. For Database Host, enter the host name of the HANA database VM. To find this
host name, go to the resource page in the Azure portal.
b. For Instance Number, enter the HANA instance number. For example, 00 .
c. Enter an identifier for the new database tenant. For example, S4H .
e. Select Next.
12. Make sure your connection details are correct. Then, select OK.
13. Enter your administrator password for the system database. Then, select Next.
a. Select Next.
17. Review that all core HANA database export files are available. Then, select Next.
18. On Database Schema for SAPHANADB , select Next.
21. Enter the password for the HANA database administrator ( <SID>adm ) for the
database VM. Then, select Next.
23. Make sure the SAP HANA client file is available. Then, select Next.
24. Make sure to enable Yes, clean up operating system users. Then, select Next.
26. Open your CLI and find your installation configuration file.
c. If the file lastInstallationLocation is there, open the file. Note the directory
listed in the file contents.
d. If there's already a directory for the product that you're installing, such as
S4HANA2020 , go to the matching folder. For example,
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/HA/ABAP/DB/ .
28. Select Cancel. You can now use the unattended method for database content
loading.
29. Copy and rename your installation configuration file as follows. Replace
<path_to_config_file> with the path to your configuration file.
Bash
cp <path_to_config_file>/inifile.params /tmp/app_template/db.inifile.params
Bash
/usr/sap/install/SWPM/sapinst -version
31. If the version of sapinst is greater than 749.0.6 , also copy the files keydb.xml and
instkey.pkey to follow SAP Note 2393060 . Replace <path_to_config_file> with
the path to your configuration file.
Bash
cp <path_to_config_file>/{keydb.xml,instkey.pkey} /tmp/app_template/
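The version comparison in step 31 can be scripted with `sort -V` (GNU coreutils version sort). A sketch with a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: succeed when dot-separated version $1 is
# strictly greater than $2, using GNU sort's version ordering.
version_gt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}
```

For example, `version_gt "$(/usr/sap/install/SWPM/sapinst -version | <parse step>)" 749.0.6` would gate the keydb.xml/instkey.pkey copy; parsing the sapinst output is left as an exercise, since its exact format isn't shown in this guide.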
Important
You might not see some of these settings in 2020 versions of SAP products. In that
case, skip the step.
2. Check that you have all necessary media and tools installed on your VM.
3. Create and change to a temporary directory. Replace <SID> with your SID.
Bash
a. Go to the URL for the SWPM GUI. You got this URL when you generated the
unattended installation file for ASCS.
10. Set the main password for all users. Then, select Next.
11. Wait for the below-the-fold list to populate. Then, select Next.
12. Make sure to disable the setting Upgrade SAP Host Agent to the version of the
provided SAPHOSTAGENT.SAR archive. Then, select Next.
13. Enter the instance number for the SAP HANA database, and the database system
administrator password. Then, select Next.
18. Make sure the PAS instance number and instance host are correct. Then, select
Next.
21. On ICM User Management for the SAP Web Dispatcher, select Next.
22. On SLD Destination for the SAP System OS Level, configure these settings:
b. Enable Do not create Message Server Access Control List. Then, select Next
25. On Secure Storage Key Generation, make sure to select Individual Key. Then,
select Next.
c. Select Next.
27. For Clean up operating system users, select Yes. Then, select Next.
28. In your CLI, open your temporary directory for the installation.
29. Make sure there's a copy of the parameters file inifile.params . For example,
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/DISTRIBUTED/ABAP/APP1/inifile
.params .
30. In SWPM, select Cancel. You can now install PAS through the unattended method.
Bash
cp <path_to_config_file>/inifile.params /tmp/app_template/pas.inifile.params
Important
You might not see some of these settings in 2020 versions of SAP products. In that
case, skip the step.
2. Check that you have all necessary media and tools installed on your VM.
Bash
4. Create a temporary directory for your installation as follows. Replace <sid> with
your SID.
Bash
a. Go to the URL for the SWPM GUI. You got this URL when you generated the
unattended installation file for ASCS.
6. In the drop-down menu, select SAP S/4HANA Server 2020 > SAP HANA Database >
Installation > Application Server ABAP > High-Availability System > Additional
Application Server Instance.
10. Set the main password for all users. Then, select Next.
12. Wait for the below-the-fold list to populate. Then, select Next.
13. Make sure to enable Upgrade SAP Host Agent to the version of the provided
SAPHOSTAGENT.SAR archive. Then, select Next.
14. Enter the instance number of your SAP HANA database and the database system
administrator password. Then, select Next.
19. Make sure the AAS instance number and instance host are correct. Then, select
Next.
22. On ICM User Management for the SAP Web Dispatcher, select Next.
23. On SLD Destination for the SAP System OS Level, make sure to enable No SLD
destination. Then, select Next.
24. Enable Do not create Message Server Access Control List. Then, select Next.
26. Set the password for the user TMSADM in Client 000 to the main password. Then,
select Next.
30. On Software Package Browser, select the table Detected Packages. If the
individual package location for SUM 2.0 is empty, set the package path to
/usr/sap/install/config . Then, select Next.
31. Wait for the package location to populate. Then, select Next.
33. Make sure to enable Yes, clean up operating system users. Then, select Next.
34. Through the CLI, check that your temporary directory now has a copy of the
parameter file. For example,
/tmp/sapinst_instdir/S4HANA2020/CORE/HDB/INSTALL/AS/APPS/inifile.params .
Bash
cp <path_to_inifile>/inifile.params /tmp/app_template/aas.inifile.params
37. In SWPM, select Cancel. You can now do the AAS installation through the
unattended method.
1. If you haven't already, download each parameter file you created (ASCS, PAS, and
AAS). You need these files on the computer or VM from which you're working.
5. Copy the header of the ASCS parameter file into the combination file. For example:
yml
#########################################################################################
# Installation service 'SAP S/4HANA Server 2020 > SAP HANA Database > Installation
#   > Application Server ABAP > Distributed System > ASCS Instance',
#   product id 'NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP'
#########################################################################################
6. For each inifile.params file you have, copy the product identifier line from the
header. Then, copy the product identifiers into the header of your combination file.
For example:
yml
#########################################################################################
# Installation service 'SAP S/4HANA Server 2020 > SAP HANA Database > Installation
#   > Application Server ABAP > Distributed System > ASCS Instance',
#   product id 'NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP'
#   > Application Server ABAP > Distributed System > Database Instance',
#   product id 'NW_ABAP_DB:S4HANA2020.CORE.HDB.ABAP'
#   > Application Server ABAP > Distributed System > Primary Application Server Instance',
#   product id 'NW_ABAP_CI:S4HANA2020.CORE.HDB.ABAP'
#   > Additional SAP System Instances > Additional Application Server Instance',
#   product id 'NW_DI:S4HANA2020.CORE.HDB.PD'
#########################################################################################
7. Open your bom.yml file in an editor.
9. For each inifile.params file you have, copy the product identifier from the header
into the appropriate part of product_ids . For example, copy your ASCS to scs :
yml
product_ids:
scs: "NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP"
db: ""
pas: ""
aas: ""
web: ""
10. Remove any lines that you commented out or left blank.
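Copying the product identifiers by hand (step 9) is error prone; they can also be pulled out of each inifile.params header with sed. A sketch with an assumed helper name:

```shell
#!/bin/sh
# Hypothetical helper: print the first product id quoted in a
# "product id '...'" header comment of the given inifile.params.
extract_product_id() {
  sed -n "s/.*product id '\([^']*\)'.*/\1/p" "$1" | head -n1
}
```

For example, `extract_product_id scs.inifile.params` on the ASCS parameter file yields a value like NW_ABAP_ASCS:S4HANA2020.CORE.HDB.ABAP, ready to paste into the scs entry of product_ids.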
Improve readability
Next, improve the readability of your combination file:
yml
archives.downloadBasket = /usr/sap/install/download_basket
HDB_Schema_Check_Dialogs.schemaName = SAPHANADB
HDB_Schema_Check_Dialogs.schemaPassword = MyDefaultPassw0rd
HDB_Userstore.doNotResolveHostnames = x00dx0000l09d4
a. archives.downloadBasket = {{ download_basket_dir }}
b. HDB_Schema_Check_Dialogs.schemaPassword = {{ main_password }}
c. HDB_Userstore.doNotResolveHostnames = {{ hdb_hostname }}
d. hostAgent.sapAdmPassword = {{ main_password }}
e. NW_AS.instanceNumber = {{ aas_instance_number }}
g. NW_CI_Instance.ascsVirtualHostname = {{ scs_hostname }}
h. NW_CI_Instance.ciInstanceNumber = {{ pas_instance_number }}
j. NW_CI_Instance.ciVirtualHostname = {{ pas_hostname }}
k. NW_CI_Instance.scsVirtualHostname = {{ scs_hostname }}
l. NW_DI_Instance.virtualHostname = {{ aas_hostname }}
m. NW_getFQDN.FQDN = {{ sap_fqdn }}
n. NW_GetMasterPassword.masterPwd = {{ main_password }}
p. NW_HDB_DB.abapSchemaPassword = {{ main_password }}
q. NW_HDB_getDBInfo.dbhost = {{ hdb_hostname }}
s. NW_HDB_getDBInfo.instanceNumber = {{ hdb_instance_number }}
t. NW_HDB_getDBInfo.systemDbPassword = {{ main_password }}
v. NW_HDB_getDBInfo.systemPassword = {{ main_password }}
7. If you don't already have a directory called templates, create this directory.
9. Select Upload.
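Each substitution listed above replaces a literal inifile value with a Jinja2 placeholder, which is mechanical enough to script. A sketch for one of those substitutions (the schema password), reading from stdin; the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: replace the literal schema password with the
# {{ main_password }} placeholder, keeping the key and '=' intact.
templatize_password() {
  sed 's/^\(HDB_Schema_Check_Dialogs\.schemaPassword[[:space:]]*=[[:space:]]*\).*/\1{{ main_password }}/'
}
```

For example, `templatize_password < inifile.params > templated.params`; the other substitutions in the list follow the same pattern with their own keys and placeholders.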
1. Open bom.yml .
2. In the section templates , add your new template file names. For example:
yml
templates:
- name: "S4HANA_2020_ISS_v001 ini file"
file: S4HANA_2020_ISS_v001.inifile.params
override_target_location: "{{ target_media_location }}/config"
3. If you're using the scripted application BOM preparation, remove the # before the
template lines.
8. Select Upload.
10. Select your BOM file, bom.yml , from your computer or VM.
You can deploy all SAP Deployment Automation Framework components by using shell
scripts.
You can bootstrap the deployer in the control plane by using the install_deployer shell
script.
You can bootstrap the SAP library in the control plane by using the install_library shell
script.
Other operations
Set the deployment credentials by using the Set SPN secrets shell script.
Update the Terraform state file by using the Update Terraform state shell script.
Next step
Deploy the control plane by using Bash
deploy_controlplane.sh
Article • 09/20/2023
Synopsis
The deploy_controlplane.sh script deploys the control plane, including the deployer
VMs, Azure Key Vault, and the SAP library.
The deployer VM has installations of Ansible and Terraform. This VM is used to deploy
the SAP systems.
Syntax
Bash
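# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with deploy_controlplane.sh --help.
deploy_controlplane.sh --deployer_parameter_file <deployer tfvars file> \
                       --library_parameter_file <library tfvars file> \
                       [--subscription <subscriptionId>] \
                       [--spn_id <appId>] \
                       [--spn_secret <password>] \
                       [--tenant_id <tenantId>] \
                       [--storageaccountname <account name>] \
                       [--force] [--auto-approve] [--recover] [--help]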
Description
Deploys the control plane, which includes the deployer VM and the SAP library. For more information, see Configuring the control plane and Deploying the control plane.
Examples
Example 1
This example deploys the control plane, as defined by the parameter files. The process
prompts you for the SPN details.
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
    --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \
    --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
Example 2
This example deploys the control plane, as defined by the parameter files. The process
adds the deployment credentials to the deployment's key vault.
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

cd ~/Azure_SAP_Automated_Deployment/WORKSPACES

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
    --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \
    --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
    --tenant_id "${ARM_TENANT_ID}"
Parameters
--deployer_parameter_file
Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-d`
Required: True
--library_parameter_file
Sets the parameter file for the SAP library. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-l`
Required: True
--subscription
Sets the target Azure subscription.
YAML
Type: String
Aliases: `-s`
Required: False
--spn_id
Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-c`
Required: False
--spn_secret
Sets the service principal password. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-p`
Required: False
--tenant_id
Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.
YAML
Type: String
Aliases: `-t`
Required: False
--storageaccountname
Sets the name of the storage account that contains the Terraform state files.
YAML
Type: String
Aliases: `-a`
Required: False
--force
YAML
Type: SwitchParameter
Aliases: `-f`
Required: False
--auto-approve
YAML
Type: SwitchParameter
Aliases: `-i`
Required: False
--recover
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
install_workloadzone.sh
Article • 09/19/2023
Synopsis
You can use the install_workloadzone.sh script to deploy a new SAP workload zone.
Syntax
Bash
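# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with install_workloadzone.sh --help.
install_workloadzone.sh --parameter_file <workload zone tfvars file> \
                        [--deployer_tfstate_key <state file name>] \
                        [--deployer_environment <environment name>] \
                        [--state_subscription <subscriptionId>] \
                        [--storageaccountname <account name>] \
                        [--keyvault <key vault name>] \
                        [--subscription <subscriptionId>] \
                        [--spn_id <appId>] [--spn_secret <password>] \
                        [--tenant_id <tenantId>] \
                        [--force] [--auto-approve] [--help]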
Description
The install_workloadzone.sh script deploys a new SAP workload zone. The workload
zone contains the shared resources for all SAP VMs.
Examples
Example 1
This example deploys the workload zone, as defined by the parameter files. The process
prompts you for the SPN details.
Bash
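# Hypothetical example that mirrors Example 2 but omits the SPN arguments,
# so the script prompts for the SPN details.
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE

export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation

${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameter_file DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars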
Example 2
This example deploys the workload zone, as defined by the parameter files. The process
adds the deployment credentials to the deployment's key vault.
Bash
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE
export subscriptionId=<subscriptionID>
export appId=<appID>
export spnSecret="<password>"
export tenantId=<tenantID>
export keyvault=<keyvaultName>
export storageAccount=<storageaccountName>
export statefileSubscription=<statefile_subscription>
export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation
${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
--parameter_file DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars \
--keyvault $keyvault \
--state_subscription $statefileSubscription \
--storageaccountname $storageAccount \
--subscription $subscriptionId \
--spn_id $appId \
--spn_secret $spnSecret \
--tenant_id $tenantId
Parameters
--parameter_file
Sets the parameter file for the workload zone. For more information, see Configuring
the workload zone.
YAML
Type: String
Aliases: `-p`
Required: True
--deployer_tfstate_key
Sets the name of the state file for the deployer deployment.
YAML
Type: String
Aliases: `-d`
Required: False
--deployer_environment
YAML
Type: String
Aliases: `-e`
Required: False
--state_subscription
YAML
Type: String
Aliases: `-k`
Required: False
--storageaccountname
Sets the name of the storage account that contains the Terraform state files.
YAML
Type: String
Aliases: `-a`
Required: False
--keyvault
Type: String
Aliases: `-v`
Required: False
--subscription
YAML
Type: String
Aliases: `-s`
Required: False
--spn_id
Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-c`
Required: False
--spn_secret
Sets the service principal password. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-p`
Required: False
--tenant_id
Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.
YAML
Type: String
Aliases: `-t`
Required: False
--force
YAML
Type: SwitchParameter
Aliases: `-f`
Required: False
--auto-approve
YAML
Type: SwitchParameter
Aliases: `-i`
Required: False
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
installer.sh
Article • 02/10/2023
Synopsis
You can use the installer.sh command to deploy a new SAP system. The script can deploy
all the different deployment types.
Syntax
Bash
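# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with installer.sh --help.
installer.sh --parameter_file <tfvars file> \
             --type <sap_deployer|sap_landscape|sap_library|sap_system> \
             [--deployer_tfstate_key <state file name>] \
             [--landscape_tfstate_key <state file name>] \
             [--state_subscription <subscriptionId>] \
             [--storageaccountname <account name>] \
             [--keyvault <key vault name>] \
             [--force] [--auto-approve] [--help]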
Description
The installer.sh script deploys or updates an SAP system of the specified type.
Examples
Example 1
Deploys or updates an SAP System.
Bash
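# Hypothetical example; the parameter file name follows the naming convention
# used elsewhere in this documentation.
${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
    --parameter_file DEV-WEEU-SAP01-X00.tfvars \
    --type sap_system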
Example 2
Deploys or updates an SAP System.
Bash
Example 3
Deploys or updates an SAP Library.
Bash
Parameters
--parameter_file
Sets the parameter file for the system. For more information, see Configuring the SAP
system.
YAML
Type: String
Aliases: `-p`
Required: True
--type
YAML
Type: String
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system
Aliases: `-t`
Required: True
--deployer_tfstate_key
Sets the name of the state file for the deployer deployment.
YAML
Type: String
Aliases: `-d`
Required: False
--landscape_tfstate_key
Sets the name of the state file for the workload zone deployment.
YAML
Type: String
Aliases: `-l`
Required: False
--state_subscription
YAML
Type: String
Aliases: `-k`
Required: False
--storageaccountname
Sets the name of the storage account that contains the Terraform state files.
YAML
Type: String
Aliases: `-a`
Required: False
--keyvault
YAML
Type: String
Aliases: `-v`
Required: False
--force
YAML
Type: SwitchParameter
Aliases: `-f`
Required: False
--auto-approve
YAML
Type: SwitchParameter
Aliases: `-i`
Required: False
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
install_deployer.sh
Article • 02/10/2023
Synopsis
You can use the script install_deployer.sh to set up a new deployer VM in the control
plane.
Syntax
Bash
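# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with install_deployer.sh --help.
install_deployer.sh --parameterfile <deployer tfvars file> [--auto-approve] [--help]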
Description
The script install_deployer.sh sets up a new deployer in the control plane.
The deployer VM has installations of Ansible and Terraform. You use the deployer VM to
deploy the SAP artifacts.
Examples
Example 1
Bash
Parameters
--parameterfile
Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-p`
Required: True
--auto-approve
YAML
Type: SwitchParameter
Aliases: `-i`
Required: False
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
install_library.sh
Article • 02/10/2023
Synopsis
The install_library.sh script sets up a new SAP Library.
Syntax
Bash
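# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with install_library.sh --help.
install_library.sh --parameterfile <library tfvars file> \
                   --deployer_statefile_foldername <relative folder path> \
                   [--auto-approve] [--help]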
Description
The install_library.sh command sets up a new SAP Library in the control plane. The
SAP Library provides the storage for Terraform state files and SAP installation media.
Examples
Example 1
Bash
Parameters
--parameterfile
Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-p`
Required: True
--deployer_statefile_foldername
Sets the relative path to the folder that contains the deployer deployment's Terraform
state file, terraform.tfstate .
YAML
Type: String
Aliases: `-d`
Required: True
--auto-approve
YAML
Type: SwitchParameter
Aliases: `-i`
Required: False
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
remove_controlplane.sh
Article • 09/20/2023
Synopsis
Removes the control plane, including the deployer VM and the SAP library. It's
important to remove Terraform-deployed artifacts by using Terraform to ensure that the
removals are done correctly.
Syntax
Bash
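# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with remove_controlplane.sh --help.
remove_controlplane.sh --deployer_parameter_file <deployer tfvars file> \
                       --library_parameter_file <library tfvars file> \
                       [--subscription <subscriptionId>] \
                       [--storage_account <account name>] [--help]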
Description
Removes the SAP control plane, including the deployer VM and the SAP library.
Examples
Example 1
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh \
    --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \
    --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
Example 2
Bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
export vnet_code="DEP01"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"

az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"

sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remove_controlplane.sh \
    --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" \
    --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
    --subscription xxxxxxxxxxx \
    --storage_account mgmtweeutfstate###
Parameters
--deployer_parameter_file
Sets the parameter file for the deployer VM. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-d`
Required: True
--library_parameter_file
Sets the parameter file for the SAP library. For more information, see Configuring the
control plane.
YAML
Type: String
Aliases: `-l`
Required: True
--subscription
Sets the subscription that contains the SAP library. For more information, see
Configuring the control plane.
YAML
Type: String
Aliases: `-l`
Required: True
--storage_account
Sets the storage account name of the tfstate storage account in SAP library. For more
information, see Configuring the control plane.
YAML
Type: String
Aliases: `-l`
Required: True
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
remover.sh
Article • 02/10/2023
Synopsis
You can use the remover.sh command to remove an SAP system. The script can be
used to remove workload zones and SAP systems.
Syntax
Bash
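# Sketch of the command syntax, reconstructed from the parameters and the
# example below; verify the exact syntax with remover.sh --help.
remover.sh --parameterfile <tfvars file> \
           --type <sap_deployer|sap_landscape|sap_library|sap_system> \
           [--help]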
Description
The remover.sh script removes a deployment of the specified type.
Examples
Example 1
Removes an SAP System deployment.
Bash
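# Hypothetical example that mirrors Example 2; removes the SAP system defined
# in the parameter file.
remover.sh --parameterfile DEV-WEEU-SAP01-X00.tfvars --type sap_system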
Example 2
Removes a workload deployment.
Bash
remover.sh --parameterfile DEV-WEEU-SAP00-INFRASTRUCTURE.tfvars --type sap_landscape
Parameters
--parameter_file
YAML
Type: String
Aliases: `-p`
Required: True
--type
YAML
Type: String
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system
Aliases: `-t`
Required: True
--help
YAML
Type: SwitchParameter
Aliases: `-h`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
set_secrets.sh
Article • 02/10/2023
Synopsis
Sets the service principal secrets in Azure Key Vault.
Syntax
Bash
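# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with set_secrets.sh --help.
set_secrets.sh --region <region code> --environment <environment name> \
               --vault <key vault name> \
               [--spn_id <appId>] [--spn_secret <password>] [--tenant_id <tenantId>]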
Description
Sets the secrets in Key Vault that the deployment automation requires.
Examples
Example 1
Bash
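# Hypothetical example; the region and environment values follow the
# control-plane examples earlier in this documentation.
set_secrets.sh --region WEEU --environment MGMT --vault <keyvaultName> \
    --spn_id "${ARM_CLIENT_ID}" --spn_secret "${ARM_CLIENT_SECRET}" --tenant_id "${ARM_TENANT_ID}"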
Parameters
--region
Sets the name of the Azure region for deployment.
YAML
Type: String
Aliases: `-r`
Required: True
--environment
YAML
Type: String
Aliases: `-e`
Required: True
--vault
YAML
Type: String
Aliases: `-v`
Required: True
--spn_id
Sets the service principal's app ID. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-c`
Required: False
--spn_secret
Sets the service principal password. For more information, see Prepare the deployment
credentials.
YAML
Type: String
Aliases: `-p`
Required: False
--tenant_id
Sets the tenant ID for the service principal. For more information, see Prepare the
deployment credentials.
YAML
Type: String
Aliases: `-t`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
advanced_state_management.sh
Article • 12/22/2023
Synopsis
Allows for Terraform state file management.
Syntax
Bash
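# Sketch of the command syntax, reconstructed from the parameters documented
# below; verify the exact syntax with advanced_state_management.sh --help.
advanced_state_management.sh --parameterfile <tfvars file> \
                             --type <sap_deployer|sap_landscape|sap_library|sap_system> \
                             --operation <import|list|remove> \
                             --terraform_keyfile <state file key> \
                             [--subscription <subscriptionId>] \
                             [--storage_account_name <account name>] \
                             [--tf_resource_name <Terraform resource name>] \
                             [--azure_resource_id <Azure resource ID>]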
Description
You can use this script to list, import, or remove resources in the Terraform state file.
This script is useful if resources are modified or created without using Terraform.
Examples
Example 1
List the contents of the Terraform state file.
Bash
parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"
# This is the name of the storage account that contains the Terraform state files
storage_accountname="<storageaccountname>"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation list \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}"
Example 2
Import a virtual machine into the Terraform state file.
Bash
parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"
# This is the name of the storage account that contains the Terraform state files
storage_accountname="<storageaccountname>"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation import \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}" \
    --tf_resource_name "${tf_resource_name}" \
    --azure_resource_id "${azure_resource_id}"
Example 3
Remove a storage account from the Terraform state file.
Bash
parameter_file_name="DEV-WEEU-SAP01-X00.tfvars"
deployment_type="sap_system"
subscriptionID="<subscriptionId>"
# This is the name of the storage account that contains the Terraform state files
storage_accountname="<storageaccountname>"

$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \
    --parameterfile "${parameter_file_name}" \
    --type "${deployment_type}" \
    --operation remove \
    --subscription "${subscriptionID}" \
    --storage_account_name "${storage_accountname}" \
    --terraform_keyfile "${key_file}" \
    --tf_resource_name "${tf_resource_name}"
Parameters
--parameterfile
YAML
Type: String
Aliases: `-p`
Required: True
--type
Sets the type of system. Valid values include: sap_deployer , sap_library , sap_landscape ,
and sap_system .
YAML
Type: String
Aliases: `-t`
Accepted values: sap_deployer, sap_landscape, sap_library, sap_system
Required: True
--operation
Sets the operation to perform. Valid values include: import , list , and remove .
YAML
Type: String
Aliases: `-o`
Accepted values: import, list, remove
Required: True
--terraform_keyfile
YAML
Type: String
Aliases: `-k`
Required: True
--subscription
YAML
Type: String
Aliases: `-s`
Required: False
--storage_account_name
Sets the name of the storage account that contains the Terraform state files.
YAML
Type: String
Aliases: `-a`
Required: False
--tf_resource_name
YAML
Type: String
Aliases: `-n`
Required: False
--azure_resource_id
YAML
Type: String
Aliases: `-i`
Required: False
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
update_sas_token.sh
Article • 02/10/2023
Synopsis
Updates the SAP Library SAS token in Azure Key Vault.
Syntax
Bash
update_sas_token.sh
Description
Updates the SAP Library SAS token in Azure Key Vault. Prompts for the SAP library
storage account name and the deployer key vault name.
Examples
Example 1
Prompts for the SAP library storage account name and the deployer key vault name.
Bash
update_sas_token.sh
EXAMPLE 2
Bash
export SAP_LIBRARY_TF=mgmtweeusaplibXXX
export SAP_KV_TF=MGMTWEEUDEP00userYYY
update_sas_token.sh
Parameters
None
Notes
v0.9 - Initial version
Related links
GitHub repository: SAP on Azure Deployment Automation Framework
What is Azure Monitor for SAP
solutions?
Article • 05/23/2023
When you have critical SAP applications and business processes that rely on Azure
resources, you might want to monitor those resources for availability, performance, and
operation. Azure Monitor for SAP solutions is an Azure-native monitoring product for
SAP landscapes that run on Azure. It uses specific parts of the Azure Monitor
infrastructure.
You can use Azure Monitor for SAP solutions with both SAP on Azure virtual machines
(VMs) and SAP on Azure Large Instances.
Azure Monitor for SAP solutions uses the Azure Monitor capabilities of Log Analytics
and workbooks. With it, you can:
Create custom visualizations by editing the default workbooks that Azure Monitor
for SAP solutions provides.
Write custom queries.
Create custom alerts by using Log Analytics workspaces.
Take advantage of the flexible retention period in Azure Monitor Logs and Log
Analytics.
Connect monitoring data with your ticketing system.
OS (Linux) data
CPU use, fork count, running processes, and blocked processes
Memory use and distribution among used, cached, and buffered
Swap use, paging, and swap rate
File system usage, along with number of bytes read and written per block device
Read/write latency per block device
Ongoing I/O count and persistent memory read/write bytes
Network packets in/out and network bytes in/out
You can monitor multiple instances of a component type across multiple SAP
systems (SIDs) within a virtual network by using a single resource of Azure Monitor
for SAP solutions. For example, you can monitor multiple HANA databases, HA
clusters, Microsoft SQL Server instances, and SAP NetWeaver systems of multiple
SIDs.
The architecture diagram shows the SAP HANA provider as an example. You can
configure multiple providers for corresponding components to collect data from
those components. Examples include HANA database, HA cluster, Microsoft SQL
Server instance, and SAP NetWeaver.
The Azure portal, where you access Azure Monitor for SAP solutions.
The Azure Monitor for SAP solutions resource, where you view monitoring data.
The managed resource group, which is deployed automatically as part of the Azure
Monitor for SAP solutions resource's deployment. Inside the managed resource
group, resources like these help collect data:
An Azure Functions resource hosts the monitoring code. This logic collects data
from the source systems and transfers the data to the monitoring framework.
An Azure Key Vault resource holds the SAP HANA database credentials and
stores information about providers.
A Log Analytics workspace is the destination for storing data. Optionally, you
can choose to use an existing workspace in the same subscription as your Azure
Monitor for SAP solutions resource at deployment.
A storage account is associated with the Azure Functions resource. It's used to
manage triggers and executions of logging functions.
You can also use Kusto Query Language (KQL) to run log queries against the raw tables
inside the Log Analytics workspace.
You can use Kusto queries to help you monitor your Azure Monitor for SAP solutions
resources. The following sample query gives you data from a custom log for a specified
time range. You can view the list of custom tables by expanding the Custom Logs
section. You can specify the time range and the number of rows. In this example, you
get five rows of data for your selected time range:
Kusto
Custom_log_table_name
| take 5
You're responsible for paying the cost of the underlying components in the managed
resource group. You're also responsible for consumption costs associated with data use
and retention. For more information, see:
Next steps
For a list of custom logs relevant to Azure Monitor for SAP solutions and
information on related data types, see Data reference for Azure Monitor for SAP
solutions.
For information on providers available for Azure Monitor for SAP solutions, see
Azure Monitor for SAP solutions providers.
Azure Monitor for SAP solutions
FAQ
This article provides answers to frequently asked questions (FAQ) about Azure Monitor
for SAP solutions.
HANA database
The underlying infrastructure
The High-availability cluster
Microsoft SQL Server
SAP NetWeaver availability
SAP Application Instance availability metrics
Does this service replace SAP Solution
Manager?
No. Customers can still use SAP Solution Manager for business process monitoring.
Next steps
Learn more about Azure Monitor for SAP solutions.
In this quickstart, you get started with Azure Monitor for SAP solutions by using the
Azure portal to deploy resources and configure providers.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
Create or choose a virtual network for Azure Monitor for SAP solutions that has
access to the source SAP system's virtual network.
Create a subnet with an address range of IPv4/25 or larger in the virtual network
that's associated with Azure Monitor for SAP solutions, with subnet delegation
assigned to Microsoft.Web/serverFarms.
Create a monitoring resource for Azure
Monitor for SAP solutions
1. Sign in to the Azure portal .
2. In the search box, search for and select Azure Monitor for SAP solutions.
4. On the Providers tab, you can start creating providers along with the monitoring
resource. You can also create providers later by going to the Providers tab in the
Azure Monitor for SAP solutions resource.
5. On the Tags tab, you can add tags to the monitoring resource. Make sure to add
all the mandatory tags if you have a tag policy in place.
6. On the Review + create tab, review the details and select Create.
Next steps
Learn more about Azure Monitor for SAP solutions.
In this quickstart, get started with Azure Monitor for SAP solutions by using the
Az.Workloads PowerShell module to create Azure Monitor for SAP solutions resources.
You create a resource group, set up monitoring, and create a provider instance.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
If you choose to use PowerShell locally, this article requires that you install the Az
PowerShell module. Connect to your Azure account by using the Connect-AzAccount
cmdlet. For more information about installing the Az PowerShell module, see Install
Azure PowerShell. Alternatively, you can use Azure Cloud Shell.
Azure PowerShell
If you have multiple Azure subscriptions, select the subscription in which the
resources should be billed by using the Set-AzContext cmdlet:
Azure PowerShell
Create or choose a virtual network for Azure Monitor for SAP solutions that has
access to the source SAP system's virtual network.
Create a subnet with an address range of IPv4/25 or larger in the virtual network
that's associated with Azure Monitor for SAP solutions, with subnet delegation
assigned to Microsoft.Web/serverFarms.
Create a resource group
Create an Azure resource group by using the New-AzResourceGroup cmdlet. A resource
group is a logical container in which Azure resources are deployed and managed as a
group.
The following example creates a resource group with the specified name and in the
specified location:
Azure PowerShell
Azure PowerShell
$monitor_name = 'Contoso-AMS-Monitor'
$rg_name = 'Contoso-AMS-RG'
$subscription_id = '00000000-0000-0000-0000-000000000000'
$location = 'eastus'
$managed_rg_name = 'MRG_Contoso-AMS-Monitor'
$subnet_id = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ams-vnet-rg/providers/Microsoft.Network/virtualNetworks/ams-vnet-eus/subnets/Contoso-AMS-Monitor'
$route_all = 'RouteAll'
To get the properties of an SAP monitor, use the Get-AzWorkloadsMonitor cmdlet. The
following example gets the properties of an SAP monitor for the specified subscription,
resource group, and resource name:
Azure PowerShell
Azure PowerShell
In the following code, hostname is the host name or IP address for SAP Web Dispatcher
or the application server. SapHostFileEntry is the IP address, fully qualified domain
name, or host name of every instance that's listed in GetSystemInstanceList.
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-NW'
$SapClientId = '000'
$SapHostFileEntry = '["10.0.0.0 x01scscl1.ams.azure.com x01scscl1,10.0.0.0 x01erscl1.ams.azure.com x01erscl1,10.0.0.1 x01appvm1.ams.azure.com x01appvm1,10.0.0.2 x01appvm2.ams.azure.com x01appvm2"]'
$hostname = 'x01appvm0'
$instance_number = '00'
$password = 'Password@123'
$sapportNumber = '8000'
$sap_sid = 'X01'
$sap_username = 'AMS_NW'
$providerSetting = New-AzWorkloadsProviderSapNetWeaverInstanceObject -SapClientId $SapClientId -SapHostFileEntry $SapHostFileEntry -SapHostname $hostname -SapInstanceNr $instance_number -SapPassword $password -SapPortNumber $sapportNumber -SapSid $sap_sid -SapUsername $sap_username -SslPreference Disabled
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-HANA'
$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'SYSTEM'
$password = 'password@123'
$dbName = 'SYSTEMDB'
$instance_number = '00'
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-OS'
$hostname = 'http://10.0.0.0:9100/metrics'
$sap_sid = 'X01'
$providerSetting = New-AzWorkloadsProviderPrometheusOSInstanceObject -PrometheusUrl $hostname -SapSid $sap_sid -SslPreference Disabled
New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-HA'
$PrometheusHa_Url = 'http://10.0.0.0:44322/metrics'
$sap_sid = 'X01'
$cluster_name = 'haCluster'
$hostname = '10.0.0.0'
$providerSetting = New-AzWorkloadsProviderPrometheusHaClusterInstanceObject -ClusterName $cluster_name -Hostname $hostname -PrometheusUrl $PrometheusHa_Url -Sid $sap_sid -SslPreference Disabled
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-SQL'
$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'AMS_SQL'
$password = 'Password@123'
$port = '1433'
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-DB2'
$hostname = '10.0.0.0'
$sap_sid = 'X01'
$username = 'AMS_DB2'
$password = 'password@123'
$dbName = 'X01'
$port = '5912'
Azure PowerShell
Clean up resources
If you don't need the resources that you created in this article, you can delete them by
using the following examples.
Azure PowerShell
$subscription_id = '00000000-0000-0000-0000-000000000000'
$rg_name = 'Contoso-AMS-RG'
$monitor_name = 'Contoso-AMS-Monitor'
$provider_name = 'Contoso-AMS-Monitor-DB2'
Azure PowerShell
$monitor_name = 'Contoso-AMS-Monitor'
$rg_name = 'Contoso-AMS-RG'
$subscription_id = '00000000-0000-0000-0000-000000000000'
Caution
If resources outside the scope of this article exist in the specified resource group,
they'll also be deleted.
Azure PowerShell
Next steps
Learn more about Azure Monitor for SAP solutions.
In the context of Azure Monitor for SAP solutions, a provider contains the connection
information for a corresponding component and helps to collect data from there. There
are multiple provider types. For example, an SAP HANA provider is configured for a
specific component within the SAP landscape, like an SAP HANA database. You can
configure an Azure Monitor for SAP solutions resource (also known as an SAP monitor
resource) with multiple providers of the same type or multiple providers of multiple
types.
You can choose to configure different provider types for data collection from the
corresponding component in their SAP landscape. For example, you can configure one
provider for the SAP HANA provider type, another provider for the high-availability
cluster provider type, and so on.
You can also configure multiple providers of a specific provider type to reuse the same
SAP monitor resource and associated managed group. For more information, see
Manage Azure Resource Manager resource groups by using the Azure portal.
We recommend that you configure at least one provider when you deploy an Azure
Monitor for SAP solutions resource. By configuring a provider, you start data collection
from the corresponding component for which the provider is configured.
If you don't configure any providers at the time of deployment, the Azure Monitor for
SAP solutions resource is still deployed, but no data is collected. You can add providers
after deployment through the SAP monitor resource in the Azure portal. You can add or
delete providers from the SAP monitor resource at any time.
SAP system and application server availability (for example, instance process
availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS
Watchdog) (SAPOsControl).
Work process usage statistics and trends (SAPOsControl).
Enqueue lock statistics and trends (SAPOsControl).
Queue usage statistics and trends (SAPOsControl).
SMON metrics (Tcode - /SDF/SMON) (RFC).
SWNC workload, memory, transaction, user, RFC usage (Tcode - St03n) (RFC).
Short dumps (Tcode - ST22) (RFC).
Object lock (Tcode - SM12) (RFC).
Failed updates (Tcode - SM13) (RFC).
System logs analysis (Tcode - SM21) (RFC).
Batch jobs statistics (Tcode - SM37) (RFC).
Outbound queues (Tcode - SMQ1) (RFC).
Inbound queues (Tcode - SMQ2) (RFC).
Transactional RFC (Tcode - SM59) (RFC).
STMS Change Transport System metrics (Tcode - STMS) (RFC).
Fully qualified domain name (FQDN) of the SAP Web Dispatcher or the SAP
application server.
SAP system ID (SID) and instance number.
Host file entries of all SAP application servers that get listed via the SAPcontrol
GetSystemInstanceList web method.
For SOAP+RFC:
For more information, see Configure SAP NetWeaver for Azure Monitor for SAP
solutions.
Host IP address.
HANA SQL port number.
SYSTEMDB username and password.
We recommend that you configure the SAP HANA provider against SYSTEMDB.
However, you can configure more providers against other database tenants.
For more information, see Configure SAP HANA provider for Azure Monitor for SAP
solutions.
Provider type: SQL Server
You can configure one or more SQL Server providers to enable data collection from SQL
Server on virtual machines. The SQL Server provider connects to SQL Server over the
SQL port. It then pulls data from the database and pushes it to the Log Analytics
workspace in your subscription. Configure SQL Server for SQL authentication and for
signing in with the SQL Server username and password. Set the SAP database as the
default database for the provider. The SQL Server provider collects data at intervals
ranging from every 60 seconds up to every hour from SQL Server.
For more information, see Configure SQL Server for Azure Monitor for SAP solutions.
2. Configure a high-availability cluster provider for each node within the Pacemaker
cluster.
Name: A name for this provider. It should be unique for this Azure Monitor
for SAP solutions instance.
Prometheus endpoint: http://<servername or ip address>:9664/metrics .
SID: For SAP systems, use the SAP SID. For other systems (for example, NFS
clusters), use a three-character name for the cluster. The SID must be distinct
from other clusters that are monitored.
Cluster name: The cluster name used when you're creating the cluster. You
can find the cluster name in the cluster property cluster-name .
Hostname: The Linux hostname of the virtual machine (VM).
For more information, see Create a high-availability cluster provider for Azure Monitor
for SAP solutions.
1. Install Node_Exporter on each BareMetal or VM node. You have two options for
installing Node_Exporter:
Name: A name for this provider that's unique to the Azure Monitor for SAP
solutions instance.
Node Exporter endpoint: Usually http://<servername or ip
address>:9100/metrics .
For more information, see Configure Linux provider for Azure Monitor for SAP solutions.
Database availability.
Number of connections.
Logical and physical reads.
Waits and current locks.
Top 20 runtime and executions.
For more information, see Create IBM Db2 provider for Azure Monitor for SAP solutions.
Next steps
Learn how to deploy Azure Monitor for SAP solutions from the Azure portal.
Deploy Azure Monitor for SAP solutions by using the Azure portal
Set up a network for Azure Monitor for
SAP solutions
Article • 04/29/2024
In this how-to guide, you learn how to configure an Azure virtual network so that you
can deploy Azure Monitor for SAP solutions. You learn how to:
Create a new subnet with an IPv4 /25 block or larger, because you need at least 100 IP
addresses for monitoring resources. After you successfully create the subnet, verify the
following points to ensure connectivity between the Azure Monitor for SAP solutions
subnet and your SAP environment subnet:
If both the subnets are in different virtual networks, do a virtual network peering
between the virtual networks.
If the subnets are associated with user-defined routes, make sure the routes are
configured to allow traffic between the subnets.
If the SAP environment subnets have network security group (NSG) rules, make
sure the rules are configured to allow inbound traffic from the Azure Monitor for
SAP solutions subnet.
If you have a firewall in your SAP environment, make sure the firewall is configured
to allow inbound traffic from the Azure Monitor for SAP solutions subnet.
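The /25 sizing requirement above can be sanity-checked arithmetically (a minimal sketch; the 5-addresses-per-subnet reservation is standard Azure behavior, not stated in this article):

```shell
# An IPv4 /25 block yields 2^(32-25) = 128 addresses. Azure reserves
# 5 addresses in every subnet, leaving 123 usable, which satisfies the
# 100-address requirement for monitoring resources.
prefix=25
total=$(( 1 << (32 - prefix) ))   # 2^(32-prefix)
usable=$(( total - 5 ))           # minus the 5 Azure-reserved addresses
echo "/$prefix -> $total total, $usable usable"
```

A /26 block, by contrast, leaves only 59 usable addresses, which is below the stated minimum.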
For more information, see how to integrate your app with an Azure virtual network.
There are multiple methods to address restricted or blocked outbound internet access.
Choose the method that works best for your use case:
You can configure the Route All setting when you create an Azure Monitor for SAP
solutions resource through the Azure portal. If your SAP environment doesn't allow
outbound internet access, disable Route All. If your SAP environment allows outbound
internet access, keep the default setting to enable Route All.
You can only use this option before you deploy an Azure Monitor for SAP solutions
resource. It's not possible to change the Route All setting after you create the Azure
Monitor for SAP solutions resource.
Provider type and default port:

Prometheus OS: 9100
SQL Server: 1433 (can be different if you aren't using the default port)
DB2 Server: 25000 (can be different if you aren't using the default port)
You can use this option after you deploy an Azure Monitor for SAP solutions resource.
1. Find the subnet associated with your Azure Monitor for SAP solutions managed
resource group:
a. Sign in to the Azure portal.
b. Search for or select the Azure Monitor for SAP solutions service.
c. On the Overview page for Azure Monitor for SAP solutions, select your Azure
Monitor for SAP solutions resource.
d. On the managed resource group's page, select the Azure Functions app.
e. On the app's page, select the Networking tab. Then select VNET Integration.
f. Review and note the subnet details. You need the subnet's IP address to create
rules in the next step.
2. Select the subnet's name to find the associated NSG. Note the NSG's information.
The Azure Monitor for SAP solution's subnet IP address refers to the IP of the subnet
associated with your Azure Monitor for SAP solutions resource. To find the subnet, go to
the Azure Monitor for SAP solutions resource in the Azure portal. On the Overview
page, review the vNet/subnet value.
For the rules that you create, allow_vnet must have a lower priority number than
deny_internet (in an NSG, a rule with a lower number is processed first). All other
rules also need a lower priority number than allow_vnet. The remaining order of these
other rules is interchangeable.
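The ordering constraint can be sketched numerically (the rule names come from the article; the priority values 100/200/300 are hypothetical placeholders):

```shell
# NSG rules are evaluated from the lowest priority number upward,
# so a smaller number means higher precedence.
other_rule=100     # all other custom rules come before allow_vnet
allow_vnet=200     # must come before deny_internet
deny_internet=300  # evaluated last among these three
if [ "$other_rule" -lt "$allow_vnet" ] && [ "$allow_vnet" -lt "$deny_internet" ]; then
  echo "rule order ok"
fi
```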
Next steps
Quickstart: Set up Azure Monitor for SAP solutions through the Azure portal
Quickstart: Set up Azure Monitor for SAP solutions with PowerShell
Configure alerts in Azure Monitor for
SAP solutions in Azure portal
Article • 01/29/2024
In this how-to guide, you learn how to configure alerts in Azure Monitor for SAP
solutions. You can configure alerts and notifications from the Azure portal using its
browser-based interface.
Prerequisites
An Azure subscription.
A deployment of an Azure Monitor for SAP solutions resource with at least one
provider. You can configure providers for:
The SAP application (NetWeaver)
SAP HANA
Microsoft SQL Server
High availability (HA) Pacemaker clusters
IBM Db2
2. In the Azure portal, browse and select your Azure Monitor for SAP solutions
resource. Make sure you have at least one provider configured for this resource.
3. Navigate to the workbook you want to use. For example, SAP HANA.
9. For Action group, select or create an action group to configure the notification
setting. You can edit frequency and severity information according to your
requirements.
11. Select Deploy alert rule to finish your alert rule configuration. You can choose to
see the alert template by selecting View template.
12. Navigate to Alert rules to view the newly created alert rule. When and if alerts are
fired, you can view them under Fired alerts.
Centralized Alert Management: Gain a holistic view of all alerts fired across
different providers within a single, intuitive interface. With the new Alerts
experience, you can easily track and manage alerts from various sources in one
place, providing a comprehensive overview of your SAP landscape's health.
Unified Alert Rules: Simplify your alert configuration by centralizing all alert rules
across different providers. This streamlined approach ensures consistency in rule
management, making it easier to define, update, and maintain alert rules for your
SAP solutions.
Grid View for Rule Status and Bulk Operations: Efficiently manage your alert rules
using the grid view, allowing you to see the status of all rules and make bulk
changes with ease. Enable or disable multiple rules simultaneously, providing a
seamless experience for maintaining the health of your SAP environment.
Alert Action Group Management: Take control of your alert action groups directly
from the new Alerts experience. Manage and configure alert action groups
effortlessly, ensuring that the right stakeholders are notified promptly when critical
alerts are triggered.
Alert Processing Rules for Maintenance Periods: Enable alert processing rules, a
powerful feature that allows you to take specific actions or suppress alerts during
maintenance periods. Customize the behavior of alerts to align with your
maintenance schedule, minimizing unnecessary notifications and disruptions.
Export to CSV: Facilitate data analysis and reporting by exporting fired alerts and
alert rules to CSV format. This feature empowers you to share, analyze, and archive
alert data seamlessly, supporting your organization's reporting and compliance
requirements.
To access the new Alerts experience in Azure Monitor for SAP Solutions:
3. Click on the "Alerts" tab to explore the enhanced alert management capabilities.
Next steps
Learn more about Azure Monitor for SAP solutions.
In this article, learn about secure communication with TLS 1.2 or later in Azure Monitor
for SAP solutions.
Azure Monitor for SAP solutions resources and their associated managed resource
group components are deployed within a virtual network in a subscription. Azure
Functions is one component in a managed resource group. Azure Functions connects to
an appropriate SAP system by using connection properties that you provide, pulls
required telemetry data, and pushes that data to Log Analytics.
Azure Monitor for SAP solutions provides encryption of monitoring telemetry data in
transit by using approved cryptographic protocols and algorithms. Traffic between Azure
Functions and SAP systems is encrypted with TLS 1.2 or later. By choosing this option,
you can enable secure communication.
Enabling TLS 1.2 or later for telemetry data in transit is an optional feature. You can
choose to enable or disable this feature according to your requirements.
Supported certificates
To enable secure communication in Azure Monitor for SAP solutions, you can choose to
use either a root certificate or a server certificate.
We highly recommend that you use root certificates. For root certificates, Azure Monitor
for SAP solutions supports only certificates from certificate authorities (CAs) that
participate in the Microsoft Trusted Root Program.
Certificates must be signed by a trusted root authority. Self-signed certificates are not
supported.
If you select a root certificate, you need to verify that it comes from a Microsoft-
supported CA. You can then continue to create the provider instance. Subsequent data
in transit is encrypted through this root certificate.
If you select a server certificate, make sure that it's signed by a trusted CA. After you
upload the certificate, it's stored in a storage account within the managed resource
group in the Azure Monitor for SAP solutions resource. Subsequent data in transit is
encrypted through this certificate.
Note
Each provider type might have prerequisites that you must fulfill to enable secure
communication.
Next steps
Configure Azure Monitor for SAP solutions providers
Enable Insights to troubleshoot SAP
workload issues (preview)
Article • 01/18/2024
Important
Insights in Azure Monitor for SAP solutions is currently in PREVIEW. See the
Supplemental Terms of Use for Microsoft Azure Previews for legal terms that
apply to Azure features that are in beta, preview, or otherwise not yet released into
general availability.
The Insights capability in Azure Monitor for SAP solutions helps you troubleshoot
availability and performance issues on your SAP workloads. It helps you correlate key
SAP component issues with SAP logs, Azure platform metrics, and health events. In this
how-to guide, you learn how to enable Insights in Azure Monitor for SAP solutions. You
can use SAP Insights only with the latest version of the service, Azure Monitor for SAP
solutions, and not with Azure Monitor for SAP solutions (classic).
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
An existing NetWeaver provider and, optionally, an SAP HANA provider. To configure a
NetWeaver provider, see the how-to guides for NetWeaver provider configuration.
(Optional) Alerts set up for availability and/or performance issues on the
NetWeaver/HANA provider. To configure alerts, see the how-to guides for setting up
alerts on Azure Monitor for SAP solutions.
1. Open the AMS instance of your choice, go to the Insights tab under Monitoring
in the left navigation pane, and select Configure Insights.
2. Choose the 'Add role assignment' button to open the role assignment experience.
3. Choose the scope at which you would want to assign the Reader role. You can
assign the reader role to multiple resource groups at a time under a subscription
scope. Make sure that the scope(s) chosen encompass the SAP system's
infrastructure on Azure. Save the role assignments.
This script gives your AMS instance Reader role permission over the subscriptions that
hold the SAP systems. Feel free to modify the script to scope it down to a resource
group or a set of virtual machines.
PowerShell
cd <script_path>
PowerShell
7. If the VMs belong to a different subscription than AMS, set the list of subscriptions
in which VMs of the SAP system are present (use subscription IDs):
PowerShell
Important
To run this script successfully, ensure you have Contributor + User Access Admin or
Owner access on all subscriptions in the list. See steps to assign Azure roles.
PowerShell
.\AMS_AIOPS_SETUP.ps1 -ArmId $armId -subscriptions $subscriptions
PowerShell
Important
You might have to wait for up to 30 minutes for your AMS to start receiving
metadata of the infrastructure that it needs to monitor.
Availability issues
Performance degradations
Important
As a user of the Insights capability, you will require reader access on all virtual
machines on which the SAP systems are hosted that you're trying to monitor using
AMS. This is to make sure that you're able to view Azure monitor metrics and
Resource health events of these virtual machines in context of SAP issues. See steps
to assign Azure roles.
Availability Insights
This capability helps you get an overview of the availability of your SAP system in
one place. You can also correlate SAP availability with Azure platform VM availability
and its health events, easing the overall root-cause analysis.
2. If you completed all the steps mentioned, you should see the above screen asking
for context to be set up. You can set the Time range, SID and the provider
(optional, All selected by default).
3. On the top, you're able to see all the fired alerts related to SAP system and
instance availability on this screen.
4. Next, you see the SAP system availability trend, categorized by VM and SAP
process list. If you selected a fired alert in the previous step, you see these
trends in context with the fired alert. If not, these trends respect the time
range you set on the main Time range filter.
5. You can see the Azure virtual machine on which the process is hosted and the
corresponding availability trends for the combination. To view detailed insights,
select the 'Investigate' link.
6. It opens a context pane that shows you availability insights on the corresponding
virtual machine and the SAP application. It has two categories of insights:
Azure platform: VM health events filtered by the time range set, either by the
workbook filter or the selected alert. This pane also consists of VM availability
metric trend for the chosen VM.
Performance Insights
This capability helps you get an overview of the performance of your SAP system in
one place. You can also correlate key SAP performance issues with related SAP
application logs alongside Azure platform utilization metrics and SAP workload
configuration drifts, easing the overall root-cause analysis.
1. Open the AMS instance of your choice and visit the insights tab under Monitoring
on the left navigation pane.
2. On the top, you're able to see all the fired alerts related to SAP application
performance degradations.
3. Next, you see key metrics related to performance issues and their trends during
the time range you chose.
4. To view detailed insights issues, you can either choose to investigate a fired alert or
view insights for a key metric.
5. On investigating, you see a context pane that shows four categories of metrics
in the context of the chosen issue or key metric.
We have insights for only a limited set of issues as part of the preview. We plan to
extend this capability to most of the issues supported by AMS alerts before it becomes
generally available (GA).
Next steps
For information on providers available for Azure Monitor for SAP solutions, see
Azure Monitor for SAP solutions providers.
Configure SAP NetWeaver for Azure
Monitor for SAP solutions
Article • 07/21/2023
In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with
Azure Monitor for SAP solutions.
You can select between two connection types when configuring the SAP NetWeaver
provider to collect information from the SAP system. Metrics are collected by using:
SAP Control - The SAP start service provides multiple services, including
monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use
SAP Control, which is a SOAP web service interface that exposes these capabilities.
The SAP Control interface differentiates between protected and unprotected web
service methods. It's necessary to unprotect some methods to use Azure
Monitor for SAP solutions with NetWeaver.
SAP RFC - Azure Monitor for SAP solutions also provides the ability to collect
additional information from the SAP system by using standard SAP RFC. It's available
only as part of Azure Monitor for SAP solutions.
You can collect the following metrics by using the SAP NetWeaver provider:
SAP system and application server availability (for example, instance process
availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS
Watchdog) (SAP Control)
Work process usage statistics and trends (SAP Control)
Enqueue Lock statistics and trends (SAP Control)
Queue usage statistics and trends (SAP Control)
SMON Metrics (transaction code - /SDF/SMON) (RFC)
SWNC Workload, Memory, Transaction, User, RFC Usage (transaction code -
St03n) (RFC)
Short Dumps (transaction code - ST22) (RFC)
Object Lock (transaction code - SM12) (RFC)
Failed Updates (transaction code - SM13) (RFC)
System Logs Analysis (transaction code - SM21) (RFC)
Batch Jobs Statistics (transaction code - SM37) (RFC)
Outbound Queues (transaction code - SMQ1) (RFC)
Inbound Queues (transaction code - SMQ2) (RFC)
Transactional RFC (transaction code - SM59) (RFC)
STMS Change Transport System Metrics (transaction code - STMS) (RFC)
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
Refer to the troubleshooting section to resolve any issues faced while adding the SAP
NetWeaver provider.
10. Restart the SAPStartSRV service on each instance in the SAP system. Restarting the
services doesn't restart the entire system. This process only restarts SAPStartSRV
(on Windows) or the daemon process (in Unix or Linux).
You must restart SAPStartSRV on each instance of the SAP system for the SAP
Control web methods to be unprotected. These read-only SOAP APIs are required
for the NetWeaver provider to fetch metric data from the SAP system. Failure to
unprotect these methods results in empty or missing visualizations on the
NetWeaver metric workbook.
b. On Linux systems, use the following command to restart the service. Replace
<instance number> with your SAP system's instance number.
Command
c. Repeat the previous steps for each instance profile. Alternatively, you can
restart the entire SAP system in lower environments.
You can refer to the linked guidance to unprotect the web methods on an SAP Windows
virtual machine.
1. Create or upload role in the SAP NW ABAP system. Azure Monitor for SAP
solutions requires this role to connect to SAP. The role uses the least privileged
access. Download and unzip Z_AMS_NETWEAVER_MONITORING.zip
a. Sign in to your SAP system.
b. Use the transaction code PFCG > select Role Upload in the menu.
c. Upload the Z_AMS_NETWEAVER_MONITORING.SAP file from the ZIP file.
d. Select Execute to generate the role. (Ensure that the profile is also generated
as part of the role upload.)
You can also refer to the linked guidance to import the role in PFCG and generate the
profile to successfully configure the NetWeaver provider for your SAP system.
3. Enable SICF Services to access the RFC via the SAP Internet Communication
Framework (ICF)
a. Go to transaction code SICF.
b. Go to the service path /default_host/sap/bc/soap/ .
c. Activate the services wsdl, wsdl11 and RFC.
It's also recommended to check that you enabled the ICF ports.
4. SMON: Enable SMON to monitor the system performance. Make sure the version of
ST-PI is SAPK-74005INSTPI.
If SMON isn't configured, you'll see empty visualizations in the workbook.
a. Enable the SDF/SMON snapshot service for your system. Turn on daily
monitoring. For instructions, see SAP Note 2651881 .
b. Configure SDF/SMON metrics to be aggregated every minute.
c. We recommend scheduling /SDF/SMON as a background job in your target SAP
client every minute.
d. If you notice empty visualizations on the workbook tab "System
Performance - CPU and Memory (/SDF/SMON)", apply the following SAP notes:
i. Release 740 (SAPKB74006-SAPKB74025) through Release 755 (until SAPK-
75502INSAPBASIS): see SAP Note 2246160. For specific support package
versions, refer to the SAP note.
ii. If the metric collection doesn't work with the preceding note, try
SAP Note 3268727.
To enable TLS 1.2 or higher with the SAP NetWeaver provider, execute the steps
mentioned in this SAP document.
Check whether your SAP systems are configured for secure communication using TLS 1.2
or higher:
a. Go to transaction RZ10.
b. Open the DEFAULT profile, select Extended Maintenance, and then select Change.
c. The following configuration is for TLS 1.2, where the bit mask is 544: PFS. If
the TLS version is higher, the bit mask is greater than 544.
d. For Application Server, enter the IP address or the fully qualified domain name
(FQDN) of the SAP NetWeaver system to monitor. For example,
sapservername.contoso.com where sapservername is the hostname and
contoso.com is the domain. If you're using a hostname, make sure there's
connectivity from the virtual network that you used to create the Azure Monitor
for SAP solutions resource.
e. For Instance number, specify the instance number of SAP NetWeaver (00-99).
f. For Connection type, select either SOAP + RFC or SOAP, based on the metrics to
be collected (refer to the preceding section for details).
h. For SAP ICM HTTP Port, enter the port that the ICM is using, for example,
80(NN) where (NN) is the instance number.
i. For SAP username, enter the name of the user that you created to connect to
the SAP system.
k. For Host file entries, provide the DNS mappings for all SAP VMs associated with
the SID. Enter all SAP application server and ASCS host file entries in Host file
entries. Enter host file mappings in comma-separated format. The expected
format for each entry is: IP address, FQDN, hostname. For example: 192.X.X.X
sapservername.contoso.com sapservername,192.X.X.X
sapservername2.contoso.com sapservername2. To determine all SAP
hostnames associated with the SID, sign in to the SAP system as the sidadm
user. Then, run the following command, or use the script below to generate the
host file entries.
Bash
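The expected entry format can be sketched with hypothetical hosts (the IP addresses and server names below are placeholders, not values from your landscape):

```shell
# Build the comma-separated host file entries string.
# Each entry is "<IP address> <FQDN> <hostname>"; entries are joined by commas.
entries=""
while read -r ip fqdn host; do
  entries="${entries:+$entries,}$ip $fqdn $host"
done <<'EOF'
192.0.2.10 sapservername.contoso.com sapservername
192.0.2.11 sapservername2.contoso.com sapservername2
EOF
echo "$entries"
```

This prints the single string you would paste into the Host file entries field.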
a. Check the input hostname, instance number, and host file mappings for the
hostname provided.
b. Follow the instruction for determining the hostfile entries Host file entries
section.
c. Ensure that the NSG/firewall isn't blocking the ports 5XX13 or 5XX14 (where XX
is the SAP instance number).
d. Check if AMS and SAP VMs are in the same vNet or are attached using vNet
peering.
a. When signed in to the SAP system as sidadm , run the following command.
Replace <instance number> with your system's instance number.
Command
b. When signing in as a non-sidadm user, run the following command. Replace
<instance number> with your system's instance number and <admin user> with your
admin username.
Command
c. Review the output. Ensure that the output includes the method names
GetQueueStatistic, ABAPGetWPTable, EnqGetStatistic, GetProcessList,
GetEnvironment, and ABAPGetSystemWPTable.
To validate the rules, run a test query against the web methods. Replace
<hostname> with your hostname and <instance number> with your SAP instance number.
PowerShell
$SAPHostName = "<hostname>"
$InstanceNumber = "<instance number>"
$Function = "ABAPGetWPTable"
# Skip certificate validation for this test call only.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$sapcntrluri = "https://" + $SAPHostName + ":5" + $InstanceNumber + "14/?wsdl"
$sapcntrl = New-WebServiceProxy -uri $sapcntrluri -namespace WebServiceProxy -class sapcntrl
$FunctionObject = New-Object ($sapcntrl.GetType().NameSpace + ".$Function")
$sapcntrl.$Function($FunctionObject)
3. Ensure that the Internet Communication Framework (ICF) port is open. Error code:
RFCSoapApiNotEnabled.
d. Right-click the ping service and choose Test Service. SAP starts your default
browser.
e. If the port can't be reached, or the test fails, open the port in the SAP VM.
i. For Linux, run the following commands. Replace <your port> with your
configured port.
Bash
Bash
ii. For Windows, open Windows Defender Firewall from the Start menu. Select
Advanced settings in the side menu, then select Inbound Rules. To open a
port, select New Rule. Add your port and set the protocol to TCP.
This function module has the same parameters as BAPI_XMI_LOGON but stores
them in the table BTCOPTIONS.
3. SWNC metrics
Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure SAP HANA provider for Azure
Monitor for SAP solutions
Article • 07/25/2023
In this how-to guide, you learn how to configure an SAP HANA provider for Azure
Monitor for SAP solutions through the Azure portal.
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
d. For IP address, enter the IP address or hostname of the server that runs the SAP
HANA instance that you want to monitor. If you're using a hostname, make sure
there's connectivity within the virtual network.
e. For Database tenant, enter the HANA database that you want to connect to. We
recommend that you use SYSTEMDB because tenant databases don't have all
monitoring views.
f. For Instance number, enter the instance number of the database (0-99). The
SQL port is automatically determined based on the instance number.
g. For Database username, enter the dedicated SAP HANA database user. This
user needs the MONITORING or BACKUP CATALOG READ role assignment. For
nonproduction SAP HANA instances, use SYSTEM instead.
h. For Database password, enter the password for the database username. You
can either enter the password directly or use a secret inside Azure Key Vault.
6. Save your changes to the Azure Monitor for SAP solutions resource.
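The automatic SQL port determination in step f follows the standard HANA multitenant port convention (an assumption based on general HANA documentation, not stated in this article): the SYSTEMDB SQL port is 3<NN>13, where <NN> is the instance number.

```shell
# HANA MDC convention: SYSTEMDB listens for SQL on port 3<NN>13,
# where <NN> is the two-digit instance number (00 here is a placeholder).
nn=00
echo "SYSTEMDB SQL port: 3${nn}13"
```

For instance 00 this is port 30013, which is the port the provider connects to over the virtual network.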
Note
Azure Monitor for SAP solutions supports HANA 2.0 SP6 and later versions. Legacy
HANA 1.0 is not supported.
Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure SQL Server for Azure Monitor
for SAP solutions
Article • 06/20/2023
In this how-to guide, you learn how to configure a SQL Server provider for Azure
Monitor for SAP solutions through the Azure portal.
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
SQL
1. Open the Azure Monitor for SAP solutions resource in the Azure portal.
2. On the resource menu, under Settings, select Providers.
3. On the provider page, select Add to add a new provider.
4. On the Add provider page, enter all required information:
a. For Type, select Microsoft SQL Server.
b. For Name, enter a name for the provider.
c. (Optional) Select Enable secure communication and choose a certificate type
from the dropdown list.
d. For Host name, enter the IP address or hostname of the SQL Server instance.
e. For Port, enter the port on which SQL Server is listening. The default is 1433.
f. For SQL username, enter a username for the SQL Server account.
g. For Password, enter a password for the account.
h. For SID, enter the SAP system identifier.
i. Select Create to create the provider.
5. Repeat the previous step as needed to create more providers.
6. Select Review + create to complete the deployment.
Next steps
Learn about Azure Monitor for SAP solutions provider types
Create high-availability cluster provider
for Azure Monitor for SAP solutions
Article • 03/06/2024
In this how-to guide, you learn how to create a high-availability (HA) Pacemaker cluster
provider for Azure Monitor for SAP solutions. You install the HA agent and then create
the provider for Azure Monitor for SAP solutions.
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
Install an HA agent
Before you add providers for HA (Pacemaker) clusters, install the appropriate agent
for your environment (RHEL or SUSE) on each cluster node.
For SUSE-based clusters, install ha_cluster_provider in each node. For more information,
see the HA cluster exporter installation guide . Supported SUSE versions include SLES
for SAP 12 SP3 and later versions.
For SUSE-based Pacemaker clusters, follow these steps to install the exporter on each
cluster node:
Bash
Bash
sudo systemctl start prometheus-ha_cluster_exporter
Bash
3. Data is then collected in the system by ha_cluster_exporter. You can export the
data via the URL http://<ip address of the server>:9644/metrics . To check whether
the metrics are being fetched on the server where ha_cluster_exporter is installed,
run the following command on the server.
Bash
curl http://localhost:9644/metrics
For RHEL-based clusters, install performance co-pilot (PCP) and the pcp-pmda-
hacluster subpackage in each node. For more information, see the PCP HACLUSTER
agent installation guide . Supported RHEL versions include 8.2, 8.4, and later versions.
For RHEL-based Pacemaker clusters, follow these steps to install PCP on each cluster
node:
Bash
Bash
Bash
Bash
cd $PCP_PMDAS_DIR/hacluster
Bash
sudo ./Install
Bash
Bash
5. Data is then collected in the system by PCP. You can export the data by using
pmproxy via the URL http://<ip address of the server>:44322/metrics?names=ha_cluster .
To check whether the metrics are fetched on the server where pmproxy is installed, run
the following command.
Bash
curl 'http://localhost:44322/metrics?names=ha_cluster'
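Because this URL contains a ? query string, quoting it prevents the shell from treating it as a glob pattern. A small illustrative helper (the function name is an assumption, not part of PCP):

```shell
# Build the pmproxy metrics URL for a given host (illustrative helper).
pmproxy_url() {
  echo "http://${1}:44322/metrics?names=ha_cluster"
}

# Quote the URL when passing it to curl so '?' isn't glob-expanded.
url=$(pmproxy_url "localhost")
echo "curl '${url}'"
```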
8. Configure providers for each node of the cluster by entering the endpoint URL for
HA Cluster Exporter Endpoint.
9. Enter SID (the SAP system ID), Hostname (the SAP hostname of the virtual machine;
the command hostname -s returns the hostname on SUSE and RHEL servers), and
Cluster (any custom name that makes the SAP system cluster easy to identify). This
name is visible in the workbook for metrics and doesn't have to be the cluster name
configured on the server.
10. Select Start test under Prerequisite check (Preview) - highly recommended. This
test validates the connectivity from the AMS subnet to the SAP source system and lists
any errors found, which need to be addressed before provider creation; otherwise, the
provider creation fails with an error.
12. Create a provider for each of the servers in the cluster to be able to see the metrics
in the workbook. For example, if the cluster has three servers configured, create three
providers, one for each server, following all of the above steps.
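Because one provider is needed per cluster node, it can help to list each node's exporter endpoint before you start. A hedged sketch, using the SUSE exporter port 9644 from earlier (RHEL clusters would use the pmproxy URL instead; the hostnames are placeholders):

```shell
# Placeholder hostnames for a three-node cluster (illustrative only).
nodes="hana-node-1 hana-node-2 hana-node-3"

# Print the HA Cluster Exporter endpoint to enter for each provider.
for node in $nodes; do
  echo "${node}: http://${node}:9644/metrics"
done
```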
Troubleshooting
Use the following troubleshooting steps for common errors.
3. Verify that the Prometheus endpoint is reachable from the subnet that you
provided when you created the Azure Monitor for SAP solutions resource.
Next steps
Learn about Azure Monitor for SAP solutions provider types
Configure Linux provider for Azure
Monitor for SAP solutions
Article • 12/20/2023
In this how-to guide, you learn how to create a Linux OS provider for Azure Monitor for
SAP solutions resources.
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
Install the latest version of node exporter on each SAP host that you want to
monitor, whether BareMetal or an Azure virtual machine (VM). For more information,
see the node exporter GitHub repository .
Node exporter uses the default port 9100 to expose the metrics. If you want to use
a custom port, make sure to open the port in the firewall and use the same port
when creating the provider.
The default port 9100, or the custom port configured for node exporter, must be
open and listening on the Linux host.
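One way to verify the port is listening is to filter the output of `ss -tln`. The sketch below runs the filter against a captured sample so the logic is visible; on a live host you'd pass "$(ss -tln)" instead (the helper name and sample output are illustrative):

```shell
# Check whether a listener exists on a given port (illustrative helper).
# $1: output of 'ss -tln', $2: port number
port_listening() {
  if grep -q "LISTEN.*:${2}\b" <<< "$1"; then echo yes; else echo no; fi
}

# Sample 'ss -tln' output showing node exporter on port 9100 (illustrative).
sample='State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN 0      128    0.0.0.0:9100        0.0.0.0:*'

port_listening "$sample" 9100
```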
Right-click the relevant node exporter version for Linux at
https://prometheus.io/download/#node_exporter and copy the link address to use in the
following command. For example:
https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporte
r-1.6.1.linux-amd64.tar.gz
1. Change to the directory where you want to install the node exporter.
2. Run wget
https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_exporter-<xxx>.linux-amd64.tar.gz .
3. Run tar xvfz node_exporter-<xxx>.linux-amd64.tar.gz to extract the archive.
4. Run cd node_exporter-<xxx>.linux-amd64 .
5. Run ./node_exporter .
6. Run ./node_exporter --web.listen-address=":9100" & .
7. The node exporter now starts collecting data. You can export the data at
http://<ip>:9100/metrics .
wget
https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_ex
porter-<xxx>.linux-amd64.tar.gz
tar xvfz node_exporter-<xxx>.linux-amd64.tar.gz
cd node_exporter-<xxx>.linux-amd64
nohup ./node_exporter --web.listen-address=":9100" &
7 Note
Replace <xxx> with the version of node exporter. For example, 1.6.1 .
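The consolidated commands above can be parameterized on the version number; this sketch only constructs the download URL (version 1.6.1 taken from the example earlier, variable name illustrative):

```shell
# Derive the node exporter download URL from a version number.
NODE_EXPORTER_VERSION="1.6.1"
url="https://github.com/prometheus/node_exporter/releases/download/v${NODE_EXPORTER_VERSION}/node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64.tar.gz"
echo "${url}"
```

The printed URL can then be passed to wget as in the steps above.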
shell
# Change to the directory where the node exporter bits are downloaded and
# copy the node_exporter folder to the path /usr/bin
sudo mv node_exporter-<xxxx>.linux-amd64 /usr/bin
# Create a node_exporter service file under /etc/systemd/system
sudo tee /etc/systemd/system/node_exporter.service <<'EOF'
[Unit]
Description=Node Exporter
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/usr/bin/node_exporter-<xxxx>.linux-amd64/node_exporter $ARGS
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
EOF
# Reload the systemd daemon and start the node exporter service.
sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter.service
b. If you're using ufw , run ufw allow 9100/tcp and then run ufw reload .
7. If the Linux host is an Azure VM, make sure that all applicable network security
groups allow inbound traffic at port 9100 from VirtualNetwork as the source.
8. Select Add provider to save your changes.
9. Continue to add more providers as needed.
10. Select Review + create to review the settings.
11. Select Create to finish creating the resource.
Troubleshooting
Use these steps to resolve common errors.
1. Check that the default port 9100, or the custom port configured for node exporter,
is open and listening on the Linux host.
2. Try to restart the node exporter agent:
a. Go to the folder where you installed the node exporter (the folder name resembles
node_exporter-<xxxx>-amd64 ).
b. Run ./node_exporter .
c. Run nohup ./node_exporter & to start node_exporter. Adding nohup and & to
the command decouples node_exporter from the Linux command line. Without
them, node_exporter stops when the command line is closed.
3. Verify that the Prometheus endpoint is reachable from the subnet that you
provided when you created the Azure Monitor for SAP solutions resource.
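The decoupling behavior of nohup and & can be seen with any long-running command; this illustrative snippet backgrounds a sleep, prints its PID, and cleans up:

```shell
# Start a long-running process detached from the terminal (illustrative demo).
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
echo "detached pid: ${pid}"

# Clean up the demo process.
kill "${pid}"
```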
Next steps
Learn about Azure Monitor for SAP solutions provider types
Create IBM Db2 provider for Azure
Monitor for SAP solutions
Article • 06/20/2023
In this how-to guide, you learn how to create an IBM Db2 provider for Azure Monitor for
SAP solutions through the Azure portal.
Prerequisites
An Azure subscription.
An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor
for SAP solutions resource, see the quickstart for the Azure portal or the quickstart
for PowerShell.
SQL
Next, if you don't have an SAPAPP role in your Db2 server, use the following query to
create the role.
SQL
Next steps
Learn about Azure Monitor for SAP solutions provider types
Data reference for Azure Monitor for
SAP solutions
Article • 05/15/2023
This article provides a reference of log data collected to analyze the performance and
availability of Azure Monitor for SAP solutions. See Monitor SAP on Azure for details on
collecting and analyzing monitoring data for SAP on Azure.
Metrics
Azure Monitor for SAP solutions doesn't support metrics.
SapHana_HostConfig_CL
For more information, see M_LANDSCAPE_HOST_CONFIGURATION System View in
the SAP HANA SQL and System Views Reference.
SapHana_HostInformation_CL
For more information, see M_HOST_INFORMATION System View in the SAP HANA
SQL and System Views Reference.
SapHana_SystemOverview_CL
For more information, see M_SYSTEM_OVERVIEW System View in the SAP HANA SQL
and System Views Reference.
SapHana_LoadHistory_CL
For more information, see M_LOAD_HISTORY_HOST System View in the SAP HANA
SQL and System Views Reference.
SapHana_Disks_CL
For more information, see M_DISKS System View in the SAP HANA SQL and System
Views Reference.
SapHana_SystemAvailability_CL
For more information, see M_SYSTEM_AVAILABILITY System View in the SAP HANA
SQL and System Views Reference.
SapHana_BackupCatalog_CL
For more information, see:
SapHana_SystemReplication_CL
For more information, see M_SERVICE_REPLICATION System View in the SAP HANA
SQL and System Views Reference.
Prometheus_OSExporter_CL
For more information, see prometheus / node_exporter on GitHub .
Prometheus_HaClusterExporter_CL
For more information, see ClusterLabs/ha_cluster_exporter .
MSSQL_DBConnections_CL
For more information, see:
sys.dm_exec_sessions (Transact-SQL)
sys.databases (Transact-SQL)
MSSQL_SystemProps_CL
For more information, see:
sys.dm_os_windows_info (Transact-SQL)
sys.database_files (Transact-SQL)
sys.dm_exec_sql_text (Transact-SQL)
sys.dm_exec_query_stats (Transact-SQL)
sys.dm_io_virtual_file_stats (Transact-SQL)
sys.dm_db_partition_stats (Transact-SQL)
sys.dm_os_performance_counters (Transact-SQL)
sys.dm_os_wait_stats (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)
SQL Server Operating System Related Dynamic Management Views (Transact-SQL)
sys.availability_groups (Transact-SQL)
sys.dm_exec_requests (Transact-SQL)
sys.dm_xe_session_targets (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)
backupset (Transact-SQL)
sys.sysprocesses (Transact-SQL)
MSSQL_FileOverview_CL
For more information, see sys.database_files (Transact-SQL).
MSSQL_MemoryOverview_CL
For more information, see sys.dm_os_memory_clerks (Transact-SQL).
MSSQL_Top10Statements_CL
For more information, see:
sys.dm_exec_sql_text (Transact-SQL)
sys.dm_exec_query_stats (Transact-SQL)
MSSQL_IOPerformance_CL
For more information, see sys.dm_io_virtual_file_stats (Transact-SQL).
MSSQL_TableSizes_CL
For more information, see sys.dm_db_partition_stats (Transact-SQL).
MSSQL_BatchRequests_CL
For more information, see sys.dm_os_performance_counters (Transact-SQL).
MSSQL_WaitPercs_CL
For more information, see sys.dm_os_wait_stats (Transact-SQL).
MSSQL_PageLifeExpectancy2_CL
For more information, see sys.dm_os_performance_counters (Transact-SQL).
MSSQL_Error_CL
For more information, see sys.fn_xe_file_target_read_file (Transact-SQL).
MSSQL_CPUUsage_CL
For more information, see SQL Server Operating System Related Dynamic Management
Views (Transact-SQL).
MSSQL_AOOverview_CL
For more information, see sys.availability_groups (Transact-SQL).
MSSQL_AOWaiter_CL
For more information, see sys.dm_exec_requests (Transact-SQL).
MSSQL_AOWaitstats_CL
For more information, see sys.dm_os_wait_stats (Transact-SQL).
MSSQL_AOFailovers_CL
For more information, see:
sys.dm_xe_session_targets (Transact-SQL)
sys.fn_xe_file_target_read_file (Transact-SQL)
MSSQL_BckBackups2_CL
For more information, see backupset (Transact-SQL).
MSSQL_BlockingProcesses_CL
For more information, see sys.sysprocesses (Transact-SQL).
Next steps
For more information on using Azure Monitor for SAP solutions, see Monitor SAP
on Azure.
For more information on Azure Monitor, see Monitoring Azure resources with
Azure Monitor.
What is SAP HANA on Azure (Large
Instances)?
Article • 02/10/2023
7 Note
HANA Large Instance service is in sunset mode and does not accept new customers
anymore. Providing units for existing HANA Large Instance customers is still
possible. For alternatives, please check the offers of HANA certified Azure VMs in
the HANA Hardware Directory .
Customer isolation within the infrastructure stamp is performed in tenants.
These bare-metal server units are supported to run SAP HANA only. The SAP application
layer or workload middle-ware layer runs in virtual machines. The infrastructure stamps
that run the SAP HANA on Azure (Large Instances) units are connected to the Azure
network services backbones. In this way, low-latency connectivity between SAP HANA
on Azure (Large Instances) units and virtual machines is provided.
"Revision 3" (Rev 3): The stamps that were made available for customers to
deploy before July 2019.
"Revision 4" (Rev 4): A new stamp design that is deployed in close proximity to Azure
VM hosts and that so far is released in the following Azure regions:
West US2
East US
East US2 (across two Availability Zones)
South Central US (across two Availability Zones)
West Europe
North Europe
This document is one of several documents that cover SAP HANA on Azure (Large
Instances). This document introduces the basic architecture, responsibilities, and services
provided by the solution. High-level capabilities of the solution are also discussed. For
most other areas, such as networking and connectivity, four other documents cover
details and drill-down information. The documentation of SAP HANA on Azure (Large
Instances) doesn't cover aspects of the SAP NetWeaver installation or deployments of
SAP NetWeaver in VMs. SAP NetWeaver on Azure is covered in separate documents
found in the same Azure documentation container.
The different documents of HANA Large Instance guidance cover the following areas:
Next steps
Several common definitions are widely used in the Architecture and Technical
Deployment Guide. Note the following terms and their meanings:
SAP landscape: Refers to the entire SAP assets in your IT landscape. The SAP
landscape includes all production and non-production environments.
SAP system: The combination of DBMS layer and application layer of, for example,
an SAP ERP development system, an SAP BW test system, and an SAP CRM
production system. Azure deployments don't support dividing these two layers
between on-premises and Azure. An SAP system is either deployed on-premises or
it's deployed in Azure. You can deploy the different systems of an SAP landscape
into either Azure or on-premises. For example, you can deploy the SAP CRM
development and test systems in Azure while you deploy the SAP CRM production
system on-premises. For SAP HANA on Azure (Large Instances), it's intended that
you host the SAP application layer of SAP systems in VMs and the related SAP
HANA instance on a unit in the SAP HANA on Azure (Large Instances) stamp.
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI-
certified and dedicated to run SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run
HANA instances on SAP HANA TDI-certified hardware that's deployed in Large
Instance stamps in different Azure regions. The related term HANA Large Instance
is short for SAP HANA on Azure (Large Instances) and is widely used in this
technical deployment guide.
Cross-premises: Describes a scenario where VMs are deployed to an Azure
subscription that has site-to-site, multi-site, or Azure ExpressRoute connectivity
between on-premises data centers and Azure. In common Azure documentation,
these kinds of deployments are also described as cross-premises scenarios. The
reason for the connection is to extend on-premises domains, on-premises Azure
Active Directory/OpenLDAP, and on-premises DNS into Azure. The on-premises
landscape is extended to the Azure assets of the Azure subscriptions. With this
extension, the VMs can be part of the on-premises domain.
Domain users of the on-premises domain can access the servers and run services
on those VMs (such as DBMS services). Communication and name resolution
between VMs deployed on-premises and Azure-deployed VMs is possible. This
scenario is typical of the way in which most SAP assets are deployed. For more
information, see Azure VPN Gateway and Create a virtual network with a site-to-
site connection by using the Azure portal.
Tenant: A customer deployed in HANA Large Instance stamp gets isolated into a
tenant. A tenant is isolated in the networking, storage, and compute layer from
other tenants. Storage and compute units assigned to the different tenants can't
see each other or communicate with each other on the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants.
Even then, there is no communication between tenants on the HANA Large
Instance stamp level.
SKU category: For HANA Large Instance, the following two categories of SKUs are
offered:
Type I class: S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, and
S224m
Type II class: S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm,
and S960m
Stamp: Defines the Microsoft internal deployment size of HANA Large Instances.
Before HANA Large Instance units can be deployed, a HANA Large Instance stamp
consisting of compute, network, and storage racks needs to be deployed in a
datacenter location. Such a deployment is called a HANA Large Instance stamp or,
from Revision 4 (see below) on, a Large Instance Row.
Revision: There are two different stamp revisions for HANA Large Instance stamps.
These differ in architecture and proximity to Azure virtual machine hosts.
"Revision 3" (Rev 3) is the original design deployed from the middle of 2016.
"Revision 4.2" (Rev 4.2) is a new design that provides closer proximity to Azure
virtual machine hosts. Rev 4.2 offers ultra-low network latency between Azure
VMs and HANA Large Instance units. Resources in the Azure portal are referred
to as BareMetal Infrastructure. Customers can access their resources as
BareMetal instances from the Azure portal.
A variety of additional resources are available on how to deploy an SAP workload in the
cloud. If you plan to execute a deployment of SAP HANA in Azure, you need to be
experienced with and aware of the principles of Azure IaaS and the deployment of SAP
workloads on Azure IaaS. Before you continue, see Use SAP solutions on Azure virtual
machines for more information.
Next steps
Refer to HLI Certification.
Certification
Article • 02/10/2023
Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to
support SAP HANA on certain infrastructures, such as Azure IaaS and BareMetal
Infrastructure.
The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note
#1928533 – SAP applications on Azure: Supported products and Azure VM types .
The certification records for SAP HANA on Azure Large Instances can be found in the
SAP HANA certified IaaS Platforms site.
The SAP HANA on Azure (Large Instances) types, referred to in the SAP HANA certified
IaaS Platforms site, provide Microsoft and SAP customers the ability to deploy:
The solution is based on the SAP-HANA certified dedicated hardware stamp (SAP HANA
tailored data center integration – TDI ). If you run an SAP HANA TDI-configured
solution, all the above SAP HANA-based applications work on the hardware
infrastructure.
Compared to running SAP HANA in VMs, this solution offers the benefit of much larger
memory volumes.
Key concepts
To enable this solution, you need to understand the following key aspects:
The SAP application layer and non-SAP applications run in VMs that are hosted in
the usual Azure hardware stamps.
Customer on-premises infrastructure, data centers, and application deployments
are connected to the cloud platform through ExpressRoute (recommended) or a
virtual private network (VPN). Active Directory and DNS also are extended into
Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure
(Large Instances). The Large Instance stamp is connected into Azure networking, so
software running in VMs can interact with the HANA instance running in HANA
Large Instance.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided
in an IaaS offering with SUSE Linux Enterprise Server or Red Hat Enterprise Linux
preinstalled. As with virtual machines, further updates and maintenance to the
operating system are your responsibility.
Installation of HANA or any other components necessary to run SAP HANA on
units of HANA Large Instance is your responsibility. All respective ongoing
operations and administration of SAP HANA on Azure are also your responsibility.
You can also install other components in your Azure subscription that connect to
SAP HANA on Azure (Large Instances). For example, components that enable
communication with the SAP HANA database, such as:
Jump servers
RDP servers
SAP HANA Studio
SAP Data Services for SAP BI scenarios
Network monitoring solutions.
As in Azure, HANA Large Instance offers support for high availability and disaster
recovery functionality.
Next steps
Learn about available SKUs for HANA Large Instances.
West Europe
North Europe
Germany West Central with Zones support
East US with Zones support
East US 2
South Central US
West US 2 with Zones support
BareMetal Infrastructure (certified for SAP HANA workloads) service based on Rev 3* has
limited availability in the following regions:
West US
East US
Australia East
Australia Southeast
Japan East
) Important
Be aware of the first column that represents the status of HANA certification for
each of the Large Instance types in the list. The column should correlate with the
SAP HANA hardware directory for the Azure SKUs that start with the letter S.
(Table: SAP HANA certification status, model, total memory, DRAM memory, Optane
memory, storage, and availability for each Large Instance type.)
CPU cores = sum of non-hyper-threaded CPU cores of the sum of the processors
of the server unit.
CPU threads = sum of compute threads provided by hyper-threaded CPU cores of
the sum of the processors of the server unit. Most units are configured by default
to use Hyper-Threading Technology.
Based on supplier recommendations, S768m, S768xm, and S960m aren't
configured to use Hyper-Threading for running SAP HANA.
) Important
The following SKUs, though still supported, can't be purchased anymore: S72,
S72m, S144, S144m, S192, and S192m.
Specific configurations chosen are dependent on workload, CPU resources, and desired
memory. It's possible for the OLTP workload to use the SKUs that are optimized for the
OLAP workload.
S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m, S224oo,
S224om, S224ooo, and S224oom are referred to as the "Type I class" of SKUs.
All other SKUs are referred to as the "Type II class" of SKUs.
If you're interested in SKUs that aren't yet listed in the SAP hardware directory,
contact your Microsoft account team to get more information.
Tenant considerations
A complete HANA Large Instance stamp isn't exclusively allocated for a single
customer's use. This applies to the racks of compute and storage resources connected
through a network fabric deployed in Azure as well. HANA Large Instance infrastructure,
like Azure, deploys different customer "tenants" that are isolated from one another in
the following three levels:
Network: Isolation through virtual networks within the HANA Large Instance
stamp.
Storage: Isolation through storage virtual machines that have storage volumes
assigned and isolate storage volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft
partitioning of server units. No sharing of a single server or host unit between
tenants.
The deployments of HANA Large Instance units between different tenants aren't visible
to each other. HANA Large Instance units deployed in different tenants can't
communicate directly with each other on the HANA Large Instance stamp level. Only
HANA Large Instance units within one tenant can communicate with each other on the
HANA Large Instance stamp level.
A deployed tenant in the Large Instance stamp is assigned to one Azure subscription for
billing purposes. On the network level, it can be accessed from virtual networks of other
Azure subscriptions within the same Azure enrollment. If you deploy with another Azure
subscription in the same Azure region, you can also choose to ask for a separate HANA
Large Instance tenant.
SAP HANA on HANA Large Instances vs. on
VMs
There are significant differences between running SAP HANA on HANA Large Instances
and SAP HANA running on VMs deployed in Azure:
There is no virtualization layer for SAP HANA on Azure (Large Instances). You get
the performance of the underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a
specific customer. There is no possibility that a server unit or host is hard or soft
partitioned. As a result, a HANA Large Instance unit is assigned as a whole to a
tenant, and with that to you. A reboot or shutdown of the server doesn't
automatically lead to the operating system and SAP HANA being deployed on another
server. (For Type I class SKUs, the only exception is if a server encounters issues
and redeployment needs to be performed on another server.)
Unlike Azure, where host processor types are selected for the best
price/performance ratio, the processor types chosen for SAP HANA on Azure
(Large Instances) are the highest performing of the Intel E7v3 and E7v4 processor
line.
Next steps
Learn about sizing for HANA Large Instances.
HLI Sizing
Sizing
Article • 02/10/2023
In this article, we'll look at information helpful for sizing HANA Large Instances. In
general, sizing for HANA Large Instances is no different than sizing for HANA.
For more information on how to run these reports and obtain their most recent patches
or versions, read the following SAP Notes:
Memory requirements
Memory requirements for HANA increase as data volume grows. Be aware of your
current memory consumption to help you predict what it's going to be in the future.
Based on memory requirements, you then can map your demand into one of the HANA
Large Instance SKUs.
Next steps
Learn about onboarding requirements for HANA Large Instances.
Onboarding requirements
Onboarding requirements
Article • 02/10/2023
This article lists the requirements for running SAP HANA on Azure Large Instances (also
known as BareMetal Infrastructure instances).
Microsoft Azure
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier support contract. For specific information related to running SAP
in Azure, see SAP Support Note #2015553 – SAP on Microsoft Azure: Support
prerequisites . If you use HANA Large Instance units with 384 and more CPUs,
you also need to extend the Premier support contract to include Azure Rapid
Response.
Awareness of the HANA Large Instance SKUs you need after you complete a sizing
exercise with SAP.
Network connectivity
ExpressRoute between on-premises to Azure: To connect your on-premises data
center to Azure, make sure to order at least a 1-Gbps connection from your ISP.
Connectivity between HANA Large Instances and Azure uses ExpressRoute
technology as well. This ExpressRoute connection between the HANA Large
Instances and Azure is included in the price of the HANA Large Instances. The price
also includes all data ingress and egress charges for this specific ExpressRoute
circuit. So you won't have added costs beyond your ExpressRoute link between on-
premises and Azure.
Operating system
Licenses for SUSE Linux Enterprise Server 12 and SUSE Linux Enterprise Server 15
for SAP Applications.
7 Note
The operating system delivered by Microsoft isn't registered with SUSE. It isn't
connected to a Subscription Management Tool instance.
SUSE Linux Subscription Management Tool deployed in Azure on a VM. This tool
provides the capability for SAP HANA on Azure (Large Instances) to be registered
and respectively updated by SUSE. (There's no internet access within the HANA
Large Instance data center.)
Licenses for Red Hat Enterprise Linux 7.9 and 8.2 for SAP HANA.
7 Note
The operating system delivered by Microsoft isn't registered with Red Hat. It
isn't connected to a Red Hat Subscription Manager instance.
Red Hat Subscription Manager deployed in Azure on a VM. The Red Hat
Subscription Manager provides the capability for SAP HANA on Azure (Large
Instances) to be registered and respectively updated by Red Hat. (There is no direct
internet access from within the tenant deployed on the Azure Large Instance
stamp.)
SAP requires you to have a support contract with your Linux provider as well. This
requirement isn't removed by the solution of HANA Large Instance or the fact that
you run Linux in Azure. Unlike with some of the Linux Azure gallery images, the
service fee is not included in the solution offer of HANA Large Instance. It's your
responsibility to fulfill SAP's requirements regarding support contracts with the
Linux distributor.
For SUSE Linux, look up the requirements of support contracts in SAP Note
#1984787 - SUSE Linux Enterprise Server 12: Installation notes and SAP Note
#1056161 - SUSE priority support for SAP applications .
For Red Hat Linux, you need to have the correct subscription levels that include
support and service updates to the operating systems of HANA Large Instance.
Red Hat recommends the Red Hat Enterprise Linux for SAP Solutions
subscription. Refer to https://access.redhat.com/solutions/3082481 .
For the support matrix of the different SAP HANA versions with the different Linux
versions, see SAP Note #2235581 .
For the compatibility matrix of the operating system and HLI firmware/driver versions,
refer to OS Upgrade for HLI.
) Important
For Type II units, SLES 12 SP5, SLES 15 SP2, and SLES 15 SP3 OS versions are
supported at this point.
Database
Licenses and software installation components for SAP HANA (platform or
enterprise edition).
Applications
Licenses and software installation components for any SAP applications that
connect to SAP HANA and related SAP support contracts.
Licenses and software installation components for any non-SAP applications used
with SAP HANA on Azure (Large Instances) environments and related support
contracts.
Skills
Experience with and knowledge of Azure IaaS and its components.
Experience with and knowledge of how to deploy an SAP workload in Azure.
SAP HANA installation-certified personnel.
SAP architect skills to design high availability and disaster recovery around SAP
HANA.
SAP
The expectation is that you're an SAP customer and have a support contract with SAP.
Especially for implementations of the Type II class of HANA Large Instance SKUs,
consult with SAP on versions of SAP HANA and the eventual configurations on
large-sized scale-up hardware.
Next steps
Learn about using SAP HANA data tiering and extension nodes.
SAP supports a data tiering model for SAP Business Warehouse (BW) with different SAP
NetWeaver releases and SAP BW/4HANA. For more information about the data tiering
model, see SAP BW/4HANA and SAP BW on HANA with SAP HANA extension nodes .
With HANA Large Instance, you can use the option 1 configuration of SAP HANA
extension nodes, as explained in the FAQ and SAP blog documents. Option 2
configurations can be set up with the following HANA Large Instance SKUs: S72m, S192,
S192m, S384, and S384m.
SAP HANA sizing guidelines usually require double the amount of memory
compared to data volume. When you run your SAP HANA instance with hot data, only
50 percent or less of your memory stores data. Ideally, the remaining memory is
held for SAP HANA to do its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory running an
SAP BW database, you only have 1 TB in data volume.
If you use an SAP HANA extension node option 1 configuration with another S192
HANA Large Instance SKU, it gives you another 2 TB of capacity in data volume. In the
option 2 configuration, you get another 4 TB for warm data volume. Compared to the
hot node, the full memory capacity of the "warm" extension node can be used for
storing data in option 1. Double the memory can be used for data volume in the
option 2 SAP HANA extension node configuration.
You end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for
option 1. You have 5 TB of data and a 1:4 ratio with the option 2 extension node
configuration.
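The arithmetic above can be sketched as a quick calculation (SKU sizes taken from the S192 example; the variable names are illustrative):

```shell
# S192 hot node: 2 TB memory, of which ~50% holds data.
hot_memory_tb=2
hot_data_tb=$(( hot_memory_tb / 2 ))
# Option 1 extension node: full memory (2 TB) usable for warm data.
warm_data_opt1_tb=2
# Option 2 extension node: double the memory (4 TB) usable for warm data.
warm_data_opt2_tb=4
echo "hot data: ${hot_data_tb} TB"
echo "option 1 total: $(( hot_data_tb + warm_data_opt1_tb )) TB (ratio 1:2)"
echo "option 2 total: $(( hot_data_tb + warm_data_opt2_tb )) TB (ratio 1:4)"
```

The printed totals match the 3 TB and 5 TB capacities described above.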
The higher the data volume compared to memory, the greater your chances that the
warm data you're asking for is stored on disk.
Next steps
Learn about the operations model for SAP HANA on Azure (Large Instances) and your
responsibilities.
The service provided with SAP HANA on Azure (Large Instances) is aligned with Azure
IaaS services. You get a HANA Large Instance with an installed operating system
optimized for SAP HANA. As with Azure IaaS VMs, most of the tasks of hardening the
operating system (OS), installing additional software, installing HANA, operating the OS
and HANA, and updating the OS and HANA are your responsibility. Microsoft doesn't
force OS updates or HANA updates on you.
As shown in the preceding diagram, SAP HANA on Azure (Large Instances) is a multi-
tenant IaaS offering. For the most part, the division of responsibility is at the OS-
infrastructure boundary. Microsoft is responsible for all aspects of the service below the
line of the operating system. You're responsible for all aspects of the service above the
line. The OS is your responsibility. You can continue to use most current on-premises
methods you might employ for compliance, security, application management, basis,
and OS management. The systems appear as if they're in your network.
This service is optimized for SAP HANA, so you'll need to work with Microsoft to use the
underlying infrastructure capabilities for the best results.
Your responsibilities
The following list provides more detail on each of the layers and your responsibilities:
Networking: All the internal networks for the Large Instance stamp running SAP HANA.
Your responsibility includes access to storage, connectivity between the instances (for
scale-out and other functions), connectivity to the landscape, and connectivity to Azure
where the SAP application layer is hosted in VMs. It also includes WAN connectivity
between Azure Data Centers for disaster recovery purposes and replication. All networks
are partitioned by the tenant and have quality of service applied.
Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA
servers, and for snapshots.
Servers: The dedicated physical servers to run the SAP HANA databases assigned to
tenants. The servers of the Type I class of SKUs are hardware abstracted. With these
types of servers, the server configuration is collected and maintained in profiles, which
can be moved from one physical server to another. Such a
(manual) move of a profile by operations can be compared to Azure service healing. The
servers of the Type II class SKUs don't offer this capability.
O/S: The OS you choose (SUSE Linux or Red Hat Linux) that's running on the servers. The
OS images you're supplied with were provided by the individual Linux vendor to
Microsoft for running SAP HANA. You must have a subscription with the Linux vendor
for the specific SAP HANA-optimized image. You're responsible for registering the
images with the OS vendor.
From the point of handover by Microsoft, you're responsible for any further patching of
the Linux operating system. This patching includes added packages that might be
necessary for a successful SAP HANA installation and that weren't included by the Linux
vendor in their SAP HANA optimized OS images. (For more information, see SAP's
HANA installation documentation and SAP Notes.)
The underlying infrastructure of the HANA Large Instance provides functionality for
backup and restore of the OS volume. Using this functionality is your responsibility.
Middleware: The SAP HANA Instance, primarily. Administration, operations, and
monitoring are your responsibility. You can use storage snapshots for backup and
restore and disaster recovery. These capabilities are provided by the infrastructure.
You're responsible for designing high availability or disaster recovery with these
capabilities and for monitoring that storage snapshots executed successfully.
Data: Your data managed by SAP HANA, and other data such as backup files located on
volumes or file shares. Your responsibilities include monitoring disk free space and
managing the content on the volumes. You're also responsible for monitoring the
successful execution of backups of disk volumes and storage snapshots. Successful
execution of data replication to disaster recovery sites is the responsibility of Microsoft.
Applications: The SAP application instances, or in the case of non-SAP applications, the
application layer of those applications. Your responsibilities include deployment,
administration, operations, and monitoring of those applications. You're responsible for
capacity planning of CPU resource consumption, memory consumption, Azure storage
consumption, and network bandwidth consumption within virtual networks. You're also
responsible for capacity planning for resource consumption from virtual networks to
SAP HANA on Azure (Large Instances).
WANs: The connections you establish from on-premises to Azure deployments for
workloads. All customers with HANA Large Instances use Azure ExpressRoute for
connectivity. This connection isn't part of the SAP HANA on Azure (Large Instances)
solution. You're responsible for the setup of this connection.
Archive: You might prefer to archive copies of data by using your own methods in
storage accounts. Archiving requires management, compliance, costs, and operations.
You're responsible for generating archive copies and backups on Azure and storing
them in a compliant way.
Next steps
Learn about compatible operating systems for HANA Large Instances.
Type I class SKUs:

Operating system   Availability          SKUs
SLES 12 SP2        Not offered anymore   S72, S72m, S96, S144, S144m, S192, S192m, S192xm
SLES 12 SP3        Available             S72, S72m, S96, S144, S144m, S192, S192m, S192xm
SLES 12 SP4        Available             S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
SLES 12 SP5        Available             S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
SLES 15 SP1        Available             S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m
RHEL 7.6           Available             S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m

Type II class SKUs:

Operating system   Availability          SKUs
SLES 12 SP2        Not offered anymore   S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP3        Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP4        Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S960m
SLES 12 SP5        Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
SLES 15 SP1        Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
RHEL 7.6           Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
RHEL 7.9           Available             S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m
Next steps
Learn more about:
Available SKUs
Upgrading the operating system
Supported scenarios for HANA Large Instances
Install HANA on SAP HANA on Azure
(Large Instances)
Article • 02/10/2023
In this article, we'll walk through installing HANA on SAP HANA on Azure Large
Instances (otherwise known as BareMetal Infrastructure).
Prerequisites
To install HANA on SAP HANA on Azure (Large Instances), first:
Provide Microsoft with all the data to deploy for you on an SAP HANA Large
Instance.
Receive the SAP HANA Large Instance from Microsoft.
Create an Azure virtual network that is connected to your on-premises network.
Connect the ExpressRoute circuit for HANA Large Instances to the same Azure
virtual network.
Install an Azure virtual machine that you use as a jump box for HANA Large
Instances.
Ensure that you can connect from the jump box to your HANA Large Instance and
vice versa.
Check whether all the necessary packages and patches are installed.
Read the SAP notes and documentation about HANA installation on the operating
system you're using. Make sure that the HANA release of choice is supported on
the operating system release.
The HANA Large Instance units aren't directly connected to the internet. You can't
directly download the installation packages from SAP to the HANA Large Instance
virtual machine. Instead, you download the packages to the jump box virtual machine.
You need an SAP S-user or other user that allows you to access the SAP Service Marketplace.
1. Sign in, and go to SAP Service Marketplace. Select Download Software >
Installations and Upgrade > By Alphabetical Index. Then, under H, select SAP
HANA Platform Edition > SAP HANA Platform Edition 2.0 > Installation.
Download the files shown in the following screenshot.
2. In this example, we downloaded SAP HANA 2.0 installation packages. On the Azure
jump box virtual machine, expand the self-extracting archives into the directory as
shown below.
3. As the archives are extracted, copy the directory created by the extraction (in
this case, 51052030) from the jump box to the HANA Large Instance unit, into a
directory you created on the /hana/shared volume.
) Important
Don't copy the installation packages into the root or boot LUN. Space is
limited and needs to be used by other processes as well.
Install SAP HANA on the HANA Large Instance
unit
1. To install SAP HANA, sign in as user root. Only root has enough permissions to
install SAP HANA. Set permissions on the directory you copied over into
/hana/shared.
To install SAP HANA by using the graphical user interface setup, the gtk2 package
needs to be installed on HANA Large Instances. To check whether it's installed, run
the following command:
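The command itself isn't preserved in this copy of the article. On SLES or RHEL, a typical RPM-based check (an assumption on my part, not necessarily the exact command from the original) would be:

```shell
# Check whether the gtk2 package is installed (RPM-based check for
# SLES/RHEL; this exact command is an assumption, not from the original).
if rpm -q gtk2 >/dev/null 2>&1; then
  echo "gtk2 is installed"
else
  echo "gtk2 is missing - install it first (zypper install gtk2 / yum install gtk2)"
fi
```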
(In later steps, we show the SAP HANA setup with the graphical user interface.)
2. Go into the installation directory, and navigate into the subdirectory
HDB_LCM_LINUX_X86_64. Start the installer from there:
./hdblcmgui
3. Now you'll progress through a sequence of screens in which you provide the data
for the installation. In this example, we're installing the SAP HANA database server
and the SAP HANA client components. So our selection is SAP HANA Database.
4. Select Install New System.
5. Select among several other components that you can install.
6. Choose the SAP HANA Client and the SAP HANA Studio. Also install a scale-up
instance. Then select Single-Host System.
7. Next you'll provide some data. For the installation path, use the /hana/shared
directory.
) Important
As HANA System ID (SID), you must provide the same SID as you provided
Microsoft when you ordered the HANA Large Instance deployment. Choosing
a different SID causes the installation to fail, due to access permission
problems on the different volumes.
8. Provide the locations for the HANA data files and the HANA log files.
7 Note
The SID you specified when you defined system properties (two screens ago)
should match the SID of the mount points. If there is a mismatch, go back and
adjust the SID to the value you have on the mount points.
Provide the System Administrator User ID and ID of User Group that you
provided to Microsoft when you ordered the unit deployment. Otherwise, the
installation of SAP HANA on the HANA Large Instance unit will fail.
11. The next two screens aren't shown here. They enable you to provide the password
for the SYSTEM user of the SAP HANA database, and the password for the sapadm
user. The latter is used for the SAP Host Agent that gets installed as part of the
SAP HANA database instance.
After defining the passwords, you see a confirmation screen. Check all the data
listed, and continue with the installation. You'll reach a progress screen that
documents the installation progress, like this one:
12. As the installation finishes, you should see a screen like this one:
The SAP HANA instance should now be up and running, and ready for usage. You
can connect to it from SAP HANA Studio. Make sure you check for and apply the
latest updates.
Next steps
Learn about SAP HANA Large Instances high availability and disaster recovery on Azure.
SAP HANA Large Instances high availability and disaster recovery on Azure
SAP HANA (Large Instances)
architecture on Azure
Article • 02/10/2023
In this article, we'll describe the architecture for deploying SAP HANA on Azure Large
Instances (otherwise known as BareMetal Infrastructure).
At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP
application layer on virtual machines (VMs). The database layer is on the SAP certified
HANA Large Instance (HLI). The HLI is located in the same Azure region as the Azure
IaaS VMs.
7 Note
Deploy the SAP application layer in the same Azure region as the SAP database
management system (DBMS) layer. This rule is well documented in published
information about SAP workloads on Azure.
Architectural overview
The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-
certified hardware configuration. The hardware is a non-virtualized, bare metal, high-
performance server for the SAP HANA database. It gives you the flexibility to scale
resources for the SAP application layer to meet your needs.
The architecture shown is divided into three sections:
Center: Shows Azure IaaS and, in this case, use of VMs to host SAP or other
applications that use SAP HANA as a DBMS. Smaller HANA instances that function
with the memory that VMs provide are deployed in VMs together with their
application layer. For more information about virtual machines, see Virtual
machines .
Azure network services are used to group SAP systems together with other
applications into virtual networks. These virtual networks connect to on-premises
systems and to SAP HANA on Azure (Large Instances).
For SAP NetWeaver applications and databases that are supported to run in Azure,
see SAP Support Note #1928533 – SAP applications on Azure: Supported products
and Azure VM types . For documentation on how to deploy SAP solutions on
Azure, see:
Use SAP on Windows virtual machines
Use SAP solutions on Azure virtual machines
Left: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance
stamp. The HANA Large Instance units connect to the virtual networks of your
Azure subscription using the same technology on-premises servers use to connect
into Azure. In May 2019, we introduced an optimization that allows communication
between the HANA Large Instance units and the Azure VMs without the
ExpressRoute Gateway. This optimization, called ExpressRoute FastPath, is shown in
the preceding diagram by the red lines.
Tenants
Within the multi-tenant infrastructure of the Large Instance stamp, customers are
deployed as isolated tenants. At deployment of the tenant, you name an Azure
subscription within your Azure enrollment. This Azure subscription is the one the HANA
Large Instance is billed against. These tenants have a 1:1 relationship to the Azure
subscription.
From a networking point of view, it's possible to access a HANA Large Instance deployed in one tenant in
one Azure region from different virtual networks belonging to different Azure
subscriptions. Those Azure subscriptions must belong to the same Azure enrollment.
Available SKUs
Just as Azure allows you to choose between different VM types, you can choose from
different SKUs of HANA Large Instances. You can select the SKU appropriate for the
specific SAP HANA workload type. SAP applies memory-to-processor-socket ratios for
varying workloads based on the Intel processor generations. For more information on
available SKUs, see Available SKUs for HLI.
Next steps
Learn about SAP HANA Large Instances network architecture.
In this article, we'll look at the network architecture for deploying SAP HANA on Azure
Large Instances (otherwise known as BareMetal Infrastructure).
It's likely that not all IT systems are located in Azure already. Your SAP landscape may be
hybrid as well. Your database management system (DBMS) and SAP application may use
a mixture of NetWeaver, S/4HANA, and SAP HANA. Your SAP application might even use
another DBMS.
Azure offers different services that allow you to run the DBMS, NetWeaver, and
S/4HANA systems in Azure. Azure offers network technology to make Azure look like a
virtual data center to your on-premises software deployments. The Azure network
functionality includes:
When integrating HANA Large Instances into the Azure data center network fabric,
Azure ExpressRoute technology is used as well.
7 Note
Only one Azure subscription can be linked to only one tenant in a HANA Large
Instance stamp in a specific Azure region. Conversely, a single HANA Large Instance
stamp tenant can be linked to only one Azure subscription. This requirement is
consistent with other billable objects in Azure.
) Important
Only the Azure Resource Manager deployment method is supported with SAP
HANA on Azure (Large Instances).
7 Note
The maximum throughput you can achieve with an ExpressRoute gateway is 10 Gbps
by using an ExpressRoute connection. Copying files between a VM that resides in a
virtual network and a system on-premises (as a single copy stream) doesn't achieve
the full throughput of the different gateway SKUs. To leverage the complete
bandwidth of the ExpressRoute gateway, use multiple streams: copy different files
in parallel, or copy parts of a single file in parallel streams.
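As an illustration of the multiple-streams advice, the sketch below runs several copy operations in parallel. It uses local cp on temporary directories so it stays self-contained; in practice you'd use scp or rsync across the ExpressRoute connection (the hosts and paths would be your own).

```shell
# Run several copy streams in parallel instead of one sequential stream.
# Local cp stands in here for scp/rsync across the ExpressRoute connection.
SRC=$(mktemp -d)
DST=$(mktemp -d)
for i in 1 2 3; do echo "payload $i" > "$SRC/file$i"; done

for f in "$SRC"/file*; do
  cp "$f" "$DST/" &      # each copy runs as its own stream
done
wait                     # wait for all parallel streams to finish

ls "$DST"                # all three files arrived
```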
The HANA Large Instances of your tenant are connected through another
ExpressRoute circuit into your virtual networks. The on-premises to Azure virtual
network ExpressRoute circuits and the circuits between Azure virtual networks and
HANA Large Instances don't share the same routers. Their load conditions remain
separate.
The workload profile between the SAP application layer and the HANA Large
Instance is of a different nature. SAP HANA generates many small requests and
bursts like data transfers (result sets) into the application layer.
The SAP application architecture is more sensitive to network latency than typical
scenarios where data is exchanged between on-premises and Azure.
The Azure ExpressRoute gateway has at least two ExpressRoute connections. One
circuit is connected from on-premises and one is connected from the HANA Large
Instance. This configuration leaves only room for two more circuits from different
MSEEs to connect to the ExpressRoute Gateway. This restriction is independent of
the usage of ExpressRoute FastPath. All the connected circuits share the maximum
bandwidth for incoming data of the ExpressRoute gateway.
With Revision 3 of HANA Large Instance stamps, the network latency between VMs and
HANA Large Instance units can be higher than typical VM-to-VM network round-trip
latencies. Depending on the Azure region, values can exceed the 0.7-ms round-trip
latency classified as below average in SAP Note #1100926 - FAQ: Network
performance . Depending on Azure Region and the tool to measure network round-
trip latency between an Azure VM and HANA Large Instance, the latency can be up to 2
milliseconds. Still, customers successfully deploy SAP HANA-based production SAP
applications on SAP HANA Large Instances. Make sure you test your business processes
thoroughly with Azure HANA Large Instances. A new functionality, called ExpressRoute
FastPath, can substantially reduce the network latency between HANA Large Instances
and application layer VMs in Azure (see below).
Revision 4 of HANA Large Instance stamps improves network latency between Azure
VMs deployed in proximity to the HANA Large Instance stamp. Latency meets the
average or better than average classification as documented in SAP Note #1100926 -
FAQ: Network performance if Azure ExpressRoute FastPath is configured (see below).
To deploy Azure VMs in proximity to HANA Large Instances of Revision 4, you need to
apply Azure Proximity Placement Groups. Proximity placement groups can be used to
locate the SAP application layer in the same Azure datacenter as Revision 4 hosted
HANA Large Instances. For more information, see Azure Proximity Placement Groups for
optimal network latency with SAP applications.
To provide deterministic network latency between VMs and HANA Large Instances,
choosing the correct ExpressRoute gateway SKU is essential. Unlike the traffic patterns between on-
premises and VMs, the traffic patterns between VMs and HANA Large Instances can
develop small but high bursts of requests and data volumes. To handle such bursts, we
highly recommend using the UltraPerformance gateway SKU. For the Type II class of
HANA Large Instance SKUs, using the UltraPerformance gateway SKU as an ExpressRoute
gateway is mandatory.
) Important
Given the overall network traffic between the SAP application and database layers,
only the HighPerformance or UltraPerformance gateway SKUs for virtual networks
are supported for connecting to SAP HANA on Azure (Large Instances). For HANA
Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported
as an ExpressRoute gateway. Exceptions apply when using ExpressRoute FastPath
(see below).
ExpressRoute FastPath
In May 2019, we released ExpressRoute FastPath. FastPath lowers the latency between
HANA Large Instances and Azure virtual networks that host the SAP application VMs.
With FastPath, the data flows between VMs and HANA Large Instances aren't routed
through the ExpressRoute gateway. The VMs assigned in the subnet(s) of the Azure
virtual network directly communicate with the dedicated enterprise edge router.
) Important
ExpressRoute FastPath requires that the subnets running the SAP application VMs
are in the same Azure virtual network that is connected to the HANA Large
Instances. VMs located in Azure virtual networks that are peered with the Azure
virtual network connected to the HANA Large Instance units do not benefit from
ExpressRoute FastPath. As a result, in typical hub-and-spoke virtual network designs,
where the ExpressRoute circuits connect to a hub virtual network and the virtual
networks containing the SAP application layer (spokes) are peered to it, the
ExpressRoute FastPath optimization won't work. ExpressRoute FastPath also doesn't currently
support user defined routing rules (UDR). For more information, see ExpressRoute
virtual network gateway and FastPath.
For more information on how to configure ExpressRoute FastPath, see Connect a virtual
network to HANA large instances.
7 Note
To run SAP landscapes in Azure, connect to the enterprise edge router closest to
the Azure region in the SAP landscape. HANA Large Instance stamps are connected
through dedicated enterprise edge routers to minimize network latency between
VMs in Azure IaaS and HANA Large Instance stamps.
The ExpressRoute gateway for the VMs that host SAP application instances is
connected to one ExpressRoute circuit that connects to on-premises. The same virtual
network is connected to a separate enterprise edge router. That edge router is
dedicated to connecting to Large Instance stamps. Again, with FastPath, the data flow
from HANA Large Instances to the SAP application layer VMs isn't routed through the
ExpressRoute gateway. This configuration reduces the network round-trip latency.
This system is a straightforward example of a single SAP system. The SAP application
layer is hosted in Azure. The SAP HANA database runs on SAP HANA on Azure (Large
Instances). The assumption is that the ExpressRoute gateway bandwidth of 2-Gbps or
10-Gbps throughput doesn't represent a bottleneck.
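A back-of-envelope calculation (my own illustration, not from the article) shows why gateway throughput matters for large data movements:

```shell
# Rough time to move a 1 TB backup at a given gateway throughput,
# ignoring protocol overhead and assuming the link is fully utilized.
SIZE_GB=1024   # backup size in gigabytes
GBPS=10        # ExpressRoute gateway throughput in gigabits per second
SECS=$(( SIZE_GB * 8 / GBPS ))   # gigabytes -> gigabits, then divide by Gbps
echo "~${SECS} seconds (~$(( SECS / 60 )) minutes) at ${GBPS} Gbps"
```

At 10 Gbps this comes to roughly 819 seconds (about 13 minutes); at 2 Gbps, five times that.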
You might create a special virtual network that connects to HANA Large Instances when:
Doing backups directly from the HANA instances in a HANA Large Instance to a
VM in Azure that hosts NFS shares.
Copying large backups or other files from HANA Large Instances to disk space
managed in Azure.
Use a separate virtual network to host VMs that manage storage for mass transfer of
data between HANA Large Instances and Azure. This arrangement avoids large file or
data transfer from HANA Large Instances to Azure on the ExpressRoute gateway that
serves the VMs running the SAP application layer.
You can also use multiple virtual networks for a single, larger SAP application layer.
Or you can deploy one separate virtual network for each SAP system, instead of
combining these SAP systems in separate subnets under the same virtual network.
The following diagram shows a more expandable networking architecture for SAP
HANA on Azure (Large Instances):
Depending on the rules and restrictions you want to apply between the different virtual
networks hosting VMs of different SAP systems, you should peer those virtual networks.
For more information about virtual network peering, see Virtual network peering.
Routing in Azure
In the default deployment, three network routing considerations are important for SAP
HANA on Azure (Large Instances):
SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs
and the dedicated ExpressRoute connection, not directly from on-premises. Direct
access from on-premises to the HANA Large Instance units, as delivered by
Microsoft to you, isn't possible immediately. The transitive routing restrictions are
because of the current Azure network architecture used for SAP HANA Large
Instances. Some administration clients and any applications that need direct
access, such as SAP Solution Manager running on-premises, can't connect to the
SAP HANA database. For exceptions, see the following section, Direct Routing to
HANA Large Instances.
If you have HANA Large Instance units deployed in two different Azure regions for
disaster recovery, the same transitive routing restrictions apply as in the past. In
other words, IP addresses of a HANA Large Instance in one region (for example, US
West) weren't routed to a HANA Large Instance deployed in another region (for
example, US East). This restriction is independent of the use of Azure network
peering across regions or cross-connecting the ExpressRoute circuits that connect
HANA Large Instances to virtual networks. For a graphic representation, see the
figure in the section, Use HANA Large Instance units in multiple regions. This
restriction, which came with the deployed architecture, prohibited the immediate
use of HANA system replication for disaster recovery. For recent changes, again,
see Use HANA Large Instance units in multiple regions.
SAP HANA on Azure Large Instances has an assigned IP address from the server IP
pool address range that you submitted when requesting the HANA Large Instance
deployment. For more information, see SAP HANA (Large Instances) infrastructure
and connectivity on Azure. This IP address is accessible through the Azure
subscriptions and circuit that connects Azure virtual networks to HANA Large
Instances. The IP address assigned out of that server IP pool address range is
directly assigned to the hardware unit. It's not assigned through network address
translation (NAT) anymore, as was the case in the first deployments of this solution.
Using a reverse proxy to route data to and from HANA Large Instances. For
example, F5 BIG-IP or NGINX with Traffic Manager deployed in the Azure virtual
network that connects to HANA Large Instances and to on-premises as a virtual
firewall/traffic routing solution.
Using IPTables rules in a Linux VM to enable routing between on-premises
locations and HANA Large Instance units, or between HANA Large Instance units in
different regions. The VM running IPTables must be deployed in the Azure virtual
network that connects to HANA Large Instances and to on-premises. The VM must
be sized so that the network throughput of the VM is sufficient for the expected
network traffic. For more information on VM network bandwidth, check the article
Sizes of Linux virtual machines in Azure.
Azure Firewall would be another solution to enable direct traffic between on-
premises and HANA Large instance units.
All the traffic of these solutions would be routed through an Azure virtual network. As
such, the traffic could also be restricted by the soft appliances used or by Azure Network
Security Groups. In this way, specific IP addresses or IP address ranges from on-premises
could either be blocked or explicitly allowed access to HANA Large Instances.
7 Note
Be aware that implementation and support for custom solutions involving third-
party network appliances or IPTables isn't provided by Microsoft. Support must be
provided by the vendor of the component used or by the integrator.
You can use ExpressRoute Global Reach to:
Enable direct access from on-premises to your HANA Large Instance units
deployed in different regions.
Enable direct communication between your HANA Large Instance units deployed
in different regions.
In Azure regions where Global Reach is offered, you can request enabling Global Reach
for your ExpressRoute circuit. That circuit connects your on-premises network to the
Azure virtual network that connects to your HANA Large Instances. There are costs for
the on-premises side of your ExpressRoute circuit. For more information, see the pricing
for Global Reach Add-On . You won't pay added costs for the circuit that connects the
HANA Large Instances to Azure.
) Important
When using Global Reach to enable direct access between your HANA Large
Instance units and on-premises assets, the network data and control flow is not
routed through Azure virtual networks. Instead, network data and control flow is
routed directly between the Microsoft enterprise exchange routers. So any NSG or
ASG rules, or any type of firewall, NVA, or proxy you deployed in an Azure virtual
network, won't be touched. If you use ExpressRoute Global Reach to enable direct
access from on-premises to HANA Large instance units, restrictions and
permissions to access HANA large Instance units need to be defined in firewalls
on the on-premises side.
Similarly, ExpressRoute Global Reach can be used to connect two HANA Large Instance
tenants deployed in different regions. The isolation is the ExpressRoute circuits that your
HANA Large Instance tenants use to connect to Azure in both regions. There are no
added charges for connecting two HANA Large Instance tenants deployed in different
regions.
) Important
The data flow and control flow of the network traffic between the HANA Large
instance tenants won't be routed through Azure networks. So you can't use Azure
functionality or network virtual appliances (NVAs) to enforce communication
restrictions between your HANA Large Instances tenants.
For more information on how to enable ExpressRoute Global Reach, see Connect a
virtual network to HANA large instances.
The preceding figure shows how the virtual networks in both regions are connected to
two ExpressRoute circuits. The circuits are used to connect to SAP HANA on Azure
(Large Instances) in both Azure regions (grey lines). The reason for the two cross
connections is to protect from an outage of the MSEEs on either side. The
communication flow between the two virtual networks in the two Azure regions is
supposed to be handled over the global peering of the two virtual networks in the two
different regions (blue dotted line). The thick red line describes the ExpressRoute Global
Reach connection. This connection allows the HANA Large Instance units of your tenants
in different regions to communicate with each other.
) Important
If you used multiple ExpressRoute circuits, use AS Path prepending and Local
Preference BGP settings to ensure proper routing of traffic.
Next steps
Learn about the storage architecture of SAP HANA (Large Instances).
In this article, we'll look at the storage architecture for deploying SAP HANA on Azure
Large Instances (also known as BareMetal Infrastructure).
The storage layout for SAP HANA on Azure (Large Instances) follows the classic
deployment model and is configured per SAP recommended guidelines.
The Type I class of HANA Large Instances comes with storage volume equal to four
times the memory volume. The Type II class of HANA Large Instances comes with a
volume intended for storing HANA transaction log backups. For more information, see Install
and configure SAP HANA (Large Instances) on Azure.
See the following table for storage allocation. The table lists the rough capacity for
volumes provided with the different HANA Large Instance units.
More recent SKUs of HANA Large Instances are delivered with the following storage
configurations.
Actual deployed volumes might vary based on deployment and the tool used to show
the volume sizes.
If you subdivide a HANA Large Instance SKU, a few examples of possible division pieces
might look like:
These sizes are rough volume numbers that can vary slightly based on deployment and
the tools used to look at the volumes. There are also other partition sizes, such as 2.5 TB.
These storage sizes are calculated using a formula similar to the one used for the
previous partitions. The term "partitions" doesn't mean the operating system, memory,
or CPU resources are partitioned. It indicates storage partitions for the different HANA
instances you might want to deploy on one single HANA Large Instance unit.
If you need more storage, you can buy more in 1-TB units. The extra storage may be
added as more volume or used to extend one or more of the existing volumes. You can't
reduce the sizes of the volumes as originally deployed and as documented by the
previous tables. You also aren't able to change the names of the volumes or mount
names. The storage volumes previously described are attached to the HANA Large
Instance units as NFS4 volumes.
You can use storage snapshots for backup and restore and disaster recovery purposes.
For more information, see SAP HANA (Large Instances) high availability and disaster
recovery on Azure.
For more information on the storage layout for your scenario, see HLI supported
scenarios.
Run multiple SAP HANA instances on one
HANA Large Instance unit
It's possible to host more than one active SAP HANA instance on HANA Large Instance
units. To provide the capabilities of storage snapshots and disaster recovery, such a
configuration requires a volume set per instance. Currently, HANA Large Instance units
can be subdivided as follows:
S72, S72m, S96, S144, S192: In increments of 256 GB, with 256 GB as the smallest
starting unit. Different increments such as 256 GB and 512 GB can be combined to
the maximum memory of the unit.
S144m and S192m: In increments of 256 GB, with 512 GB as the smallest unit.
Different increments such as 512 GB and 768 GB can be combined to the
maximum memory of the unit.
Type II class: In increments of 512 GB, with the smallest starting unit of 2 TB.
Different increments such as 512 GB, 1 TB, and 1.5 TB can be combined to the
maximum memory of the unit.
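As a quick sanity check, the increment rules above can be sketched in Python. The helper name and the SKU groupings below are illustrative only, not part of any Azure API:

```python
# Sketch: validate a proposed storage partition size (in GB) against the
# subdivision rules stated above. The SKU group keys are hypothetical labels.
def valid_partition_gb(sku_group: str, size_gb: int) -> bool:
    rules = {
        # group: (increment_gb, smallest_starting_unit_gb)
        "type1_base": (256, 256),   # S72, S72m, S96, S144, S192
        "type1_m":    (256, 512),   # S144m, S192m
        "type2":      (512, 2048),  # Type II class
    }
    increment, smallest = rules[sku_group]
    return size_gb >= smallest and size_gb % increment == 0

# 768 GB is a valid piece on S144m (512-GB minimum, 256-GB increments)
print(valid_partition_gb("type1_m", 768))   # True
# 1 TB is below the 2-TB minimum starting unit of the Type II class
print(valid_partition_gb("type2", 1024))    # False
```

The combined pieces would additionally need to stay within the maximum memory of the unit, which this sketch doesn't model.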
The following examples show what it might look like running multiple SAP HANA
instances.
With the Type I class of SKUs of HANA Large Instance, the volume storing the boot LUN
is encrypted. In Revision 3 HANA Large Instance stamps using Type II class of SKUs, you
need to encrypt the boot LUN with OS methods. In Revision 4 HANA Large Instance
stamps using Type II class of SKUs, the volume storing the boot LUN is encrypted at rest
by default.
Important
In order to prevent HANA from trying to grow data files beyond the 16 TB file size
limit of HANA Large Instance storage, you need to set the following parameters in
the global.ini configuration file of HANA:
datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285
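Taken together, the Important note above amounts to a global.ini fragment roughly like the following. The [persistence] section shown here is an assumption; verify the exact section and values against SAP note #2400005 before applying:

```ini
; global.ini fragment (sketch): keep HANA data files below the 16-TB
; file size limit of HANA Large Instance storage
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000
```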
Next steps
Learn about deploying SAP HANA (Large Instances).
SAP HANA (Large Instances) deployment
Supported scenarios for HANA Large
Instances
Article • 02/10/2023
This article describes the supported scenarios and architectural details for HANA Large
Instances (HLI).
Note
If your scenario isn't mentioned in this article, contact the Microsoft Service
Management team to assess your requirements. Before you set up the HLI unit,
validate the design with SAP or your service implementation partner.
Overview
HANA Large Instances support various architectures to help you accomplish your
business requirements. The following sections cover the architectural scenarios and their
configuration details.
The derived architectural designs are purely from an infrastructure perspective. Consult
SAP or your implementation partners for the HANA deployment. If your scenarios aren't
listed in this article, contact the Microsoft account team to review the architecture and
derive a solution for you.
Note
These architectures are fully compliant with Tailored Data Center Integration (TDI)
design and are supported by SAP.
This article describes the details of the two components in each supported architecture:
Ethernet
Storage
Ethernet
Each provisioned server comes preconfigured with sets of Ethernet interfaces. The
Ethernet interfaces configured on each HLI unit are categorized into four types:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
If necessary, you can define more NIC cards on your own. However, the configurations
of existing NICs can't be changed.
Note
You might find additional interfaces that are physical interfaces or bonding.
Consider only the previously mentioned interfaces for your use case. Ignore any
others.
The distribution for units with two assigned IP addresses should look as follows:
Ethernet “A” should have an assigned IP address that's within the server IP pool
address range that you submitted to Microsoft. This IP address should be
maintained in the /etc/hosts file of the operating system (OS).
Ethernet “C” should have an assigned IP address that's used for communication to
NFS. You don't need to maintain this address in the /etc/hosts file to allow
instance-to-instance traffic within the tenant.
For HANA system replication or HANA scale-out deployment, a blade configuration with
two assigned IP addresses isn't suitable. If you have only two assigned IP addresses, and
you want to deploy such a configuration, contact SAP HANA on Azure Service
Management. They can assign you a third IP address in a third VLAN. For HANA Large
Instances with three assigned IP addresses on three NIC ports, the following usage rules
apply:
Ethernet “A” should have an assigned IP address that's outside of the server IP
pool address range that you submitted to Microsoft. This IP address shouldn't be
maintained in the /etc/hosts file of the OS.
Ethernet “C” should have an assigned IP address that's used for communication to
NFS storage. This type of address shouldn't be maintained in the /etc/hosts
file.
Ethernet “D” should be used exclusively for access to fencing devices for
Pacemaker. This interface is required when you configure HANA system replication
and want to achieve auto failover of the operating system by using an SBD-based
device.
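The Ethernet "A" rules above (inside the server IP pool for two-IP units, outside it for three-IP units) can be verified mechanically with Python's ipaddress module. The pool range below is an example value, not your actual assignment:

```python
import ipaddress

# Example server IP pool range submitted to Microsoft (illustrative /24)
server_ip_pool = ipaddress.ip_network("10.23.17.0/24")

def in_server_pool(ip: str) -> bool:
    """Is this address inside the server IP pool range?"""
    return ipaddress.ip_address(ip) in server_ip_pool

# Two-IP unit: Ethernet "A" must fall inside the pool
print(in_server_pool("10.23.17.75"))   # True
# Three-IP unit: Ethernet "A" must fall outside the pool
print(in_server_pool("10.23.18.75"))   # False
```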
Storage
Storage is preconfigured based on the requested topology. The volume sizes and mount
points vary depending on the number of servers and SKUs, and the configured
topology. For more information, review your required scenarios (later in this article). If
you require more storage, you can purchase it in 1-TB increments.
Supported scenarios
The architectural diagrams in the next sections use the following notations:
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Volume size distribution is based on the database size in memory. To learn what
database sizes in memory are supported in a multi-SID environment, see Overview
and architecture.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.
In the diagram, only a single-SID system is shown at the primary site, but multi-SID
(MCOS) systems are supported as well. At the DR site, the HLI unit is used for the QA
instance. Production operations run from the primary site. During DR failover (or failover
test), the QA instance at the DR site is taken down.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.
Note
As of December 2019, this architecture is supported only for the SUSE operating
system.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
Fencing: An SBD is configured for the fencing device setup. However, the use of
fencing is optional.
In the diagram, a multipurpose scenario is shown at the DR site, where the HLI unit is
used for the QA instance. Production operations run from the primary site. During DR
failover (or failover test), the QA instance at the DR site is taken down.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
Fencing: An SBD is configured for the fencing setup. However, the use of fencing is
optional.
At the DR site: Two sets of storage volumes are required for primary and secondary
node replication.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The boot volume for SKU Type I class is replicated to the DR node.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
On standby: The volumes and mount points are configured (marked as “Required
for HANA installation”) for the HANA instance installation at the standby unit.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
On the DR node
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as
“Required for HANA installation”) for the production HANA instance installation at
the DR HLI unit.
At the DR site: The data, log backups, and shared volumes (marked as “Storage
Replication”) are replicated via snapshot from the production site. These volumes
are mounted during failover only. For more information, see Disaster recovery
failover procedure.
The boot volume for SKU Type I class is replicated to the DR node.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured on both HLI units (Primary and DR):
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
The primary node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
For MCOS: Volume size distribution is based on the database size in memory. To
learn what database sizes in memory are supported in a multi-SID environment,
see Overview and architecture.
At the DR site: The volumes and mount points are configured (marked as “PROD
Instance at DR site”) for the production HANA instance installation at the DR HLI
unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The primary node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
At the DR site
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
At the DR site
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured (marked as “PROD
DR instance”) for the production HANA instance installation at the DR HLI unit.
At the DR site: The data, log backups, log, and shared volumes for QA (marked as
“QA instance installation”) are configured for the QA instance installation.
The primary site node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.
Architecture diagram
Ethernet
The following network interfaces are preconfigured:
(Table columns: NIC logical interface, SKU type, name with SUSE OS, name with RHEL OS, use case.)
Storage
The following mount points are preconfigured:
On the DR node
(Table columns: mount point, use case.)
Key considerations
/usr/sap/SID is a symbolic link to /hana/shared/SID.
At the DR site: The volumes and mount points are configured for the production
HANA instance installation at the DR HLI unit.
The primary site node syncs with the DR node by using HANA system replication.
Global Reach is used to link the ExpressRoute circuits together to make a private
network between your regional networks.
Next steps
Learn about deploying HANA Large Instances.
In this article, we'll list the information you'll need to deploy SAP HANA Large Instances
(otherwise known as BareMetal Infrastructure instances). First, for background, see:
Required information
You've purchased SAP HANA on Azure Large Instances from Microsoft and want to
deploy it. Microsoft will need the following information from you:
Customer name.
Business contact information (including email address and phone number).
Technical contact information (including email address and phone number).
Technical networking contact information (including email address and phone
number).
Azure deployment region (for example, West US, Australia East, or North Europe).
SAP HANA on Azure (large instances) SKU (configuration).
For every Azure deployment region:
A /29 IP address range for ER-P2P connections that connect Azure virtual
networks to HANA Large Instances.
A /24 CIDR Block used for the HANA Large Instances server IP pool.
Optional when using ExpressRoute Global Reach, reserve another /29 IP address
range. The added range enables direct routing from on-premises to HANA
Large Instance units. The added range also enables routing between HANA
Large Instance units in different Azure regions. This particular range can't
overlap with the IP address ranges you defined before.
The IP address range values used in the virtual network address space attribute of
every Azure virtual network that connects to the HANA Large Instances.
Data for each HANA Large Instances system:
Desired hostname, ideally with a fully qualified domain name.
Desired IP address for the HANA Large Instance unit out of the Server IP pool
address range. (The first 30 IP addresses in the server IP pool address range are
reserved for internal use within HANA Large Instances.)
SAP HANA SID name for the SAP HANA instance (required to create the
necessary SAP HANA-related disk volumes). Microsoft needs the HANA SID for
creating the permissions for sidadm on the NFS volumes. These volumes attach
to the HANA Large Instance unit. The HANA SID is also used as one of the name
components of the disk volumes that get mounted. If you want to run more
than one HANA instance on the unit, you should list multiple HANA SIDs. Each
one gets a separate set of volumes assigned.
In the Linux OS, the sidadm user has a group ID. This ID is required to create the
necessary SAP HANA-related disk volumes. The SAP HANA installation usually
creates the sapsys group, with a group ID of 1001. The sidadm user is part of
that group.
In the Linux OS, the sidadm user has a user ID. This ID is required to create the
necessary SAP HANA-related disk volumes. If you're running several HANA
instances on the unit, list all the sidadm users.
The Azure subscription ID for the Azure subscription to which the SAP HANA on Azure
(Large Instances) are going to be directly connected. This subscription ID
references the Azure subscription that's going to be charged for the HANA
Large Instance unit or units.
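Before you hand the network data above to Microsoft, the expected prefix lengths and the Global Reach non-overlap rule can be sanity-checked with Python's ipaddress module. All ranges below are placeholder values, not recommendations:

```python
import ipaddress

# Placeholder values for the ranges requested above
er_p2p = ipaddress.ip_network("192.168.1.0/29")      # ER-P2P connection range
server_pool = ipaddress.ip_network("10.23.17.0/24")  # HANA Large Instance server IP pool

# The ER-P2P range must be a /29; the server pool is a /24 CIDR block
assert er_p2p.prefixlen == 29
assert server_pool.prefixlen == 24

# The optional Global Reach range (also a /29) must not overlap either range
global_reach = ipaddress.ip_network("192.168.2.0/29")
assert not global_reach.overlaps(er_p2p) and not global_reach.overlaps(server_pool)
print("network data looks consistent")
```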
After you provide the preceding information, Microsoft provisions SAP HANA on Azure
(Large Instances). Microsoft sends you information to link your Azure virtual networks to
HANA Large Instances. You can also access the HANA Large Instance units.
Next steps
See the following articles in sequence to connect to the HANA Large Instances after
Microsoft has deployed them:
In this article, we'll look at what's involved in connecting your Azure VMs to HANA Large
Instances (otherwise known as BareMetal Infrastructure instances).
The article What is SAP HANA on Azure (Large Instances)? mentions that the minimal
deployment of HANA Large Instances with the SAP application layer in Azure looks like
this:
Looking closer at the Azure virtual network side, you'll need:
The definition of an Azure virtual network into which you're going to deploy the
VMs of the SAP application layer.
The definition of a default subnet in the Azure virtual network; this subnet is the
one into which the VMs are deployed.
The Azure virtual network that's created needs to have at least one VM subnet and
one Azure ExpressRoute virtual network gateway subnet. These subnets should be
assigned the IP address ranges as specified and discussed in the following sections.
Note
The Azure virtual network for HANA Large Instances must be created by using the
Azure Resource Manager deployment model. The older Azure deployment model,
commonly known as the classic deployment model, isn't supported by the HANA
Large Instance solution.
You can use the Azure portal, PowerShell, an Azure template, or the Azure CLI to create
the virtual network. (For more information, see Create a virtual network using the Azure
portal). In the following example, we look at a virtual network that's created by using the
Azure portal.
In this documentation, address space refers to the address space that the Azure virtual
network is allowed to use. This address space is also the address range that the virtual
network uses for BGP route propagation. This address space can be seen here:
In the previous example, with 10.16.0.0/16, the Azure virtual network was given a rather
large and wide IP address range to use. All the IP address ranges of subsequent subnets
within this virtual network can have their ranges within that address space. We don't
usually recommend such a large address range for a single virtual network in Azure. But
let's look into the subnets defined in the Azure virtual network:
You see a virtual network with a first VM subnet (here called "default") and a subnet
called "GatewaySubnet".
In the two previous images, the virtual network address space covers both the subnet
IP address range of the Azure VM and that of the virtual network gateway.
You can restrict the virtual network address space to the specific ranges used by each
subnet. You can also define the virtual network address space of a virtual network as
multiple specific ranges, as shown here:
In this case, the virtual network address space has two spaces defined. They're the
same as the IP address ranges defined for the subnet IP address range of the Azure VM
and the virtual network gateway.
You can use any naming standard you like for these tenant subnets (VM subnets).
However, there must always be one, and only one, gateway subnet for each virtual
network that connects to the SAP HANA on Azure (Large Instances) ExpressRoute
circuit. This gateway subnet has to be named "GatewaySubnet" to make sure the
ExpressRoute gateway is properly placed.
Warning
It's critical that the gateway subnet always be named "GatewaySubnet".
You can use multiple VM subnets and non-contiguous address ranges. These address
ranges must be covered by the virtual network address space of the virtual network.
They can be in an aggregated form. They can also be in a list of the exact ranges of the
VM subnets and the gateway subnet.
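Either way, every subnet must be covered by the virtual network address space. A quick coverage check in Python, using the 10.16.0.0/16 address space from the earlier illustration (the subnet values here are assumed for the example):

```python
import ipaddress

# Address space of the virtual network (one aggregated range, as in the example)
address_space = [ipaddress.ip_network("10.16.0.0/16")]

# Subnets defined in the virtual network (assumed example values)
subnets = {
    "default":       ipaddress.ip_network("10.16.1.0/24"),
    "GatewaySubnet": ipaddress.ip_network("10.16.2.0/28"),
}

# Every subnet must sit inside at least one address-space range
for name, subnet in subnets.items():
    assert any(subnet.subnet_of(space) for space in address_space), name
print("all subnets are covered by the address space")
```

The same loop works unchanged when the address space is submitted as a list of exact subnet ranges instead of one aggregated range.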
The following list summarizes important facts about Azure virtual networks that connect
to HANA Large Instances:
You must submit the virtual network address space to Microsoft when you're
initially deploying HANA Large Instances.
The virtual network address space can be one larger range that covers the ranges
for both the subnet IP address range of the Azure VM and the virtual network
gateway.
Or you can submit multiple ranges that cover the different IP address ranges of
VM subnet IP address range(s) and the virtual network gateway IP address range.
The defined virtual network address space is used for BGP routing propagation.
The name of the gateway subnet must be: "GatewaySubnet".
The address space is used as a filter on the HANA Large Instance side to allow or
disallow traffic to the HANA Large Instance units from Azure. The BGP routing
information of the Azure virtual network and the IP address ranges configured for
filtering on the HANA Large Instance side should match. Otherwise, connectivity
issues can occur.
There are further important details about the gateway subnet. For more
information, see Connect a virtual network to HANA large instances.
Virtual network address space: The virtual network address space is the IP
address ranges that you assign to your address space parameter in the Azure
virtual networks. These networks connect to the SAP HANA Large Instance
environment. We recommend that this address space parameter be a multi-line
value. It should consist of the subnet range of the Azure VM and the subnet
range(s) of the Azure gateway.
This subnet range was shown in the previous graphics. It must NOT overlap with
your on-premises or server IP pool or ER-P2P address ranges. How do you get
these IP address range(s)? Your corporate network team or service provider should
provide one or more IP address range(s) that aren't used inside your network. For
example, the subnet of your Azure VM is 10.0.1.0/24, and the subnet of your Azure
gateway subnet is 10.0.2.0/28. We recommend that your Azure virtual network
address space is defined as: 10.0.1.0/24 and 10.0.2.0/28. Although the address
space values can be aggregated, we recommend matching them to the subnet
ranges. This way you can avoid accidentally reusing IP address ranges within larger
address spaces elsewhere in your network. The virtual network address space is
an IP address range. It needs to be submitted to Microsoft when you ask for an
initial deployment.
Azure VM subnet IP address range: This IP address range is the one you assign to
the Azure virtual network subnet parameter. This parameter is in your Azure virtual
network and connects to the SAP HANA Large Instance environment. This IP
address range is used to assign IP addresses to your Azure VMs. The IP addresses
out of this range are allowed to connect to your SAP HANA Large Instance
server(s). If needed, you can use multiple Azure VM subnets. We recommend a /24
CIDR block for each Azure VM subnet. This address range must be a part of the
values used in the Azure virtual network address space. How do you get this IP
address range? Your corporate network team or service provider should provide an
IP address range that isn't being used inside your network.
Address range for ER-P2P connectivity: This range is the IP range for your SAP
HANA Large Instance ExpressRoute (ER) P2P connection. This range of IP addresses
must be a /29 CIDR IP address range. This range must NOT overlap with your on-
premises or other Azure IP address ranges. This IP address range is used to set up
the ER connectivity from your ExpressRoute virtual gateway to the SAP HANA
Large Instance servers. How do you get this IP address range? Your corporate
network team or service provider should provide an IP address range that's not
currently being used inside your network. This range is an IP address range. It
needs to be submitted to Microsoft when you ask for an initial deployment.
Server IP pool address range: This IP address range is used to assign the
individual IP address to HANA Large Instance servers. The recommended subnet
size is a /24 CIDR block. If needed, it can be smaller, with as few as 64 IP addresses.
From this range, the first 30 IP addresses are reserved for use by Microsoft. Make
sure that you account for this when you choose the size of the range. This range
must NOT overlap with your on-premises or other Azure IP addresses. How do you
get this IP address range? Your corporate network team or service provider should
provide an IP address range that's not currently being used inside your network.
This range is an IP address range, which needs to be submitted to Microsoft
when asking for an initial deployment.
If you choose to use ExpressRoute Global Reach to enable direct routing from on-
premises to HANA Large Instance units, you need to reserve another /29 IP
address range. This range may not overlap with any of the other IP address
ranges you defined before.
If you choose to use ExpressRoute Global Reach to enable direct routing from a
HANA Large Instance tenant in one Azure region to another HANA Large Instance
tenant in another Azure region, you need to reserve another /29 IP address range.
This range may not overlap with the other IP address ranges you defined before.
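Because the first 30 addresses of the server IP pool are reserved for Microsoft, the first address you can assign to a HANA Large Instance unit can be derived as below. This sketch assumes the straightforward reading that the reserved block starts at the very first address of the range; the pool value is illustrative:

```python
import ipaddress

server_pool = ipaddress.ip_network("10.23.17.0/24")  # example server IP pool

# The first 30 IP addresses in the pool are reserved for internal use by
# Microsoft, so the first assignable unit address comes after them.
all_addresses = list(server_pool)
reserved = all_addresses[:30]          # 10.23.17.0 .. 10.23.17.29
first_assignable = all_addresses[30]

print(first_assignable)  # 10.23.17.30
```

Accounting for these 30 addresses matters most if you request a pool smaller than a /24, such as the 64-address minimum.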
For more information about ExpressRoute Global Reach and usage around HANA large
instances, see:
You need to define and plan the IP address ranges previously described. However, you
don't need to transmit all of them to Microsoft. The IP address ranges that you're
required to name to Microsoft are:
If you add more virtual networks that need to connect to HANA Large Instances, submit
the new Azure virtual network address space you're adding to Microsoft.
The following example shows the different ranges and some example ranges you need
to configure and eventually provide to Microsoft. The value for the Azure virtual network
address space isn't aggregated in the first example. However, it's defined from the
ranges of the first Azure VM subnet IP address range and the virtual network gateway
subnet IP address range.
You can use multiple VM subnets within the Azure virtual network when you configure
and submit the other IP address ranges of the added VM subnet(s) as part of the Azure
virtual network address space.
The preceding image doesn't show the added IP address range(s) required for the
optional use of ExpressRoute Global Reach.
You can also aggregate the data that you submit to Microsoft. In that case, the address
space of the Azure virtual network only includes one space. Using the IP address ranges
from the earlier example, the aggregated virtual network address space could look like
the following image:
In this example, instead of two smaller ranges that defined the address space of the
Azure virtual network, we have one larger range that covers 4096 IP addresses. Such a
large definition of the address space leaves some rather large ranges unused. Since the
virtual network address space value(s) are used for BGP route propagation, using the
unused ranges on-premises or elsewhere in your network can cause routing issues. The
preceding graphic doesn't show the added IP address range(s) required for the optional
use of ExpressRoute Global Reach.
We recommend that you keep the address space tightly aligned with the actual subnet
address space that you use. If needed, you can always add new address space values
later without incurring downtime on the virtual network.
Important
Each IP address range in ER-P2P, the server IP pool, and the Azure virtual network
address space must NOT overlap with one another or with any other range that's
used in your network. Each must be discrete. As the two previous graphics show,
they also can't be a subnet of any other range. If overlaps occur between ranges,
the Azure virtual network might not connect to the ExpressRoute circuit.
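The non-overlap requirement in the Important note above can be checked mechanically before you submit anything. The ranges below are placeholders; the virtual network address space is represented here as its exact subnet ranges, per the recommendation earlier in this article:

```python
import ipaddress

# Placeholder ranges; each must be discrete - no overlaps, no subset relations
ranges = {
    "vnet_vm_subnet": ipaddress.ip_network("10.0.1.0/24"),
    "vnet_gw_subnet": ipaddress.ip_network("10.0.2.0/28"),
    "er_p2p":         ipaddress.ip_network("192.168.1.0/29"),
    "server_ip_pool": ipaddress.ip_network("10.23.17.0/24"),
}

names = list(ranges)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # overlaps() also catches one range being a subnet of the other
        assert not ranges[a].overlaps(ranges[b]), f"{a} overlaps {b}"
print("all ranges are discrete")
```

Extend the dictionary with your on-premises ranges as well, since those must also stay discrete from everything listed.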
1. Submit the IP address ranges for the Azure virtual network address space, the ER-
P2P connectivity, and server IP pool address range, together with other data that
has been listed at the beginning of the document. At this point, you could also
start to create the virtual network and the VM subnets.
2. An ExpressRoute circuit is created by Microsoft between your Azure subscription
and the HANA Large Instance stamp.
3. A tenant network is created on the Large Instance stamp by Microsoft.
4. Microsoft configures networking in the SAP HANA on Azure (Large Instances)
infrastructure to accept IP addresses from your Azure virtual network address
space that communicates with HANA Large Instances.
5. Depending on the specific SAP HANA on Azure (Large Instances) SKU that you
bought, Microsoft assigns a compute unit in a tenant network. It also allocates and
mounts storage, and installs the operating system (SUSE or Red Hat Linux). IP
addresses for these units are taken out of the Server IP Pool address range you
submitted to Microsoft.
At the end of the deployment process, Microsoft delivers the following data to you:
Information that's needed to connect your Azure virtual network(s) to the
ExpressRoute circuit that connects Azure virtual networks to HANA Large Instances:
Authorization key(s)
ExpressRoute PeerID
Data for accessing HANA Large Instances after you establish the ExpressRoute
circuit and Azure virtual network.
You can also find the sequence of connecting HANA Large Instances in the document
SAP HANA on Azure (Large Instances) Setup . Many of the steps are shown in an
example deployment in that document.
Next steps
Learn about connecting a virtual network to HANA Large Instance ExpressRoute.
You've created an Azure virtual network. You can now connect that network to SAP
HANA Large Instances (otherwise known as BareMetal Infrastructure instances). In this
article, we'll look at the steps you'll need to take.
7 Note
This step can take up to 30 minutes to complete. You create the new gateway in the
designated Azure subscription and then connect it to the specified Azure virtual
network.
7 Note
We recommend that you use the Azure Az PowerShell module to interact with
Azure. See Install Azure PowerShell to get started. To learn how to migrate to the
Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
# These values are used to create the gateway; update for how you wish the GW components to be named
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "UltraPerformance" # Supported values for HANA Large Instances are: UltraPerformance
The only supported gateway SKU for SAP HANA on Azure (Large Instances) is
UltraPerformance.
Run the following commands for each ExpressRoute gateway by using a different
AuthGUID for each connection. The first two entries shown in the following script come
from the information provided by Microsoft. Also, the AuthGUID is specific to every
virtual network and its gateway. If you want to add another Azure virtual network, you
need to get another AuthGUID from Microsoft for your ExpressRoute circuit that
connects HANA Large Instances into Azure.
PowerShell
# Create a new connection between the ER Circuit and your Gateway using the Authorization
# $PeerID and $AuthGUID come from the information Microsoft provided for your deployment;
# the connection name and $myRegion are example values - use your own
$PeerID = "<ExpressRoute PeerID provided by Microsoft>"
$AuthGUID = "<Authorization key provided by Microsoft>"
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
New-AzVirtualNetworkGatewayConnection -Name "HLIConnection" -ResourceGroupName $myGroupName -Location $myRegion -VirtualNetworkGateway1 $gw -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass
7 Note
You may need to connect the gateway to more than one ExpressRoute circuit associated
with your subscription. In that case, you'll need to run this step more than once. For
example, you're likely to connect the same virtual network gateway to the ExpressRoute
circuit that connects the virtual network to your on-premises network.
Applying ExpressRoute FastPath to existing
HANA Large Instance ExpressRoute circuits
You've seen how to connect a new ExpressRoute circuit created with a HANA Large
Instance deployment to an Azure ExpressRoute gateway on one of your Azure virtual
networks. But what if you already have your ExpressRoute circuits set up, and your
virtual networks are already connected to HANA Large Instances?
The new ExpressRoute FastPath reduces network latency. We recommend you apply the
change to take advantage of this reduced latency. The commands to connect a new
ExpressRoute circuit are the same as those used to change an existing circuit. So run the
following sequence of PowerShell commands to change an existing circuit.
PowerShell
# Create a new connection between the ER Circuit and your Gateway using the Authorization
# $PeerID and $AuthGUID as provided by Microsoft; connection name and $myRegion are examples
$gw = Get-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
New-AzVirtualNetworkGatewayConnection -Name "HLIConnection" -ResourceGroupName $myGroupName -Location $myRegion -VirtualNetworkGateway1 $gw -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID -ExpressRouteGatewayBypass
It's important that you add the last parameter as displayed above to enable the
ExpressRoute FastPath functionality.
Provide an address space range of a /29 address space. That address range may
not overlap with any of the other address space ranges you used so far connecting
HANA Large Instances to Azure. The address range should also not overlap with
any of the IP address ranges you used elsewhere in Azure or on-premises.
There's a limitation on the autonomous system numbers (ASNs) that can be used
to advertise your on-premises routes to HANA Large Instances. Your on-premises
network mustn't advertise any routes with private ASNs in the range of 65000 – 65020 or
65515.
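A quick pre-check of the ASN restriction above can be scripted. This is an illustrative sketch, not an official validation tool:

```shell
# Reject private ASNs reserved by the HANA Large Instance infrastructure:
# 65000-65020 and 65515 must not be advertised from on-premises.
asn_allowed() {
  local asn=$1
  if (( (asn >= 65000 && asn <= 65020) || asn == 65515 )); then
    return 1   # reserved
  fi
  return 0
}

for asn in 64900 65010 65515 65100; do
  if asn_allowed "$asn"; then
    echo "ASN $asn: usable"
  else
    echo "ASN $asn: reserved"
  fi
done
```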
When you connect on-premises locations directly to HANA Large Instances, there's a
fee for the circuit that connects you to Azure. For more information,
check the pricing for Global Reach Add-On .
To have one or both of the scenarios applied to your deployment, open a support
request with Azure as described in Open a support request for HANA Large Instances.
The data and keywords you'll need to use for Microsoft to route and execute your
request are as follows:
7 Note
If you want to have both cases handled, you need to supply two different /29 IP
address ranges that don't overlap with any other IP address range used so far.
Next steps
Learn about other network requirements you may have to deploy SAP HANA Large
Instances on Azure.
In this article, we'll look at other network requirements you may have when deploying
SAP HANA Large Instances on Azure.
Prerequisites
This article assumes you've completed the steps in:
Add the new IP address range as a new range to the virtual network address space.
Don't generate a new aggregated range. Submit this change to Microsoft. This way you
can connect from that new IP address range to the HANA Large Instances in your client.
You can open an Azure support request to get the new virtual network address space
added. Once you receive confirmation, do the steps discussed in Connecting Azure VMs
to HANA Large Instances.
To create another subnet from the Azure portal, see Create a virtual network using the
Azure portal. To create one from PowerShell, see Create a virtual network using
PowerShell.
When the new circuit is created, and the SAP HANA on Microsoft Service Management
configuration is complete, you'll receive notification with the information you need to
continue. You can't connect Azure virtual networks to this added circuit if they're already
connected to another SAP HANA on Azure (Large Instance) ExpressRoute circuit in the
same Azure region.
Delete a subnet
To remove a virtual network subnet, you can use the Azure portal, PowerShell, or the
Azure CLI. If your Azure virtual network IP address range or address space was an
aggregated range, you don't have to take any action with Microsoft. (The virtual network
is still propagating the BGP route address space that includes the deleted subnet.)
You might have defined the Azure virtual network address range or address space as
multiple IP address ranges. One of these ranges could have been assigned to your
deleted subnet. Be sure to delete that from your virtual network address space. Then
inform SAP HANA on Microsoft Service Management to remove it from the ranges that
SAP HANA on Azure (Large Instances) is allowed to communicate with.
After you remove the virtual network, open an Azure support request to provide the IP
address space range or ranges to be removed.
ExpressRoute connection
Virtual network gateway
Virtual network gateway public IP
Virtual network
Next steps
Learn how to install and configure SAP HANA (Large Instances) on Azure.
) Important
In this article, we'll give an overview of high availability (HA) and disaster recovery (DR)
of SAP HANA on Azure Large Instances (otherwise known as BareMetal Infrastructure).
We'll also detail some of the requirements and considerations related to HA and DR.
Some of the processes described in this documentation are simplified. They aren't
intended as detailed steps to be included in operation handbooks. To create operation
handbooks for your configurations, run and test your processes with your specific HANA
versions and releases. You can then document the processes specific to your
configurations.
HA and DR
High availability and disaster recovery are crucial aspects of running your mission-critical
SAP HANA on the Azure (Large Instances) server. It's important to work with SAP, your
system integrator, or Microsoft to properly architect and implement the right high
availability and disaster recovery strategies. Also consider the recovery point objective
(RPO) and recovery time objective (RTO), which are specific to your environment.
Microsoft supports some SAP HANA high-availability capabilities with HANA Large
Instances. These capabilities include:
Storage replication: The storage system's ability to replicate all data to another
HANA Large Instance stamp in another Azure region. SAP HANA operates
independently of this method. This functionality is the default disaster recovery
mechanism offered for HANA Large Instances.
HANA system replication: The replication of all data in SAP HANA to a separate
SAP HANA system. The RTO is minimized through data replication at regular
intervals. SAP HANA supports asynchronous, synchronous in-memory, and
synchronous modes. Synchronous mode is used only for SAP HANA systems within
the same datacenter or less than 100 km apart. With the current design of HANA
Large Instance stamps, HANA system replication can be used for high availability
within one region only. HANA system replication requires a third-party reverse
proxy or routing component for disaster recovery configurations into another
Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA that's an
alternative to HANA system replication. If the primary node becomes unavailable,
you configure one or more standby SAP HANA nodes in scale-out mode, and SAP
HANA automatically fails over to a standby node.
SAP HANA on Azure (Large Instances) is offered in two Azure regions in four
geopolitical areas: US, Australia, Europe, and Japan. Two regions within a geopolitical
area that host HANA Large Instance (HLI) stamps are connected to separate dedicated
network circuits. These HLIs are used for replicating storage snapshots to provide
disaster recovery methods. Replication isn't set up by default but only for customers
who order disaster recovery functionality. Storage replication is dependent on the usage
of storage snapshots for HANA Large Instances. You can't choose an Azure region as a
DR region that's in a different geopolitical area.
Scenario supported in HANA Large Instances: Host automatic failover: scale-out (with
or without standby), including 1+1
High availability option: Possible, with the standby taking the active role. HANA
controls the role switch.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR
synchronization by using storage replication.
Comments: HANA volume sets are attached to all the nodes. The DR site must have the
same number of nodes.

Scenario supported in HANA Large Instances: HANA system replication
High availability option: Possible with a primary or secondary setup. The secondary
moves to the primary role in a failover case. HANA system replication and OS control
the failover.
Disaster recovery option: Dedicated DR setup. Multipurpose DR setup. DR
synchronization by using storage replication. DR by using HANA system replication
isn't yet possible without third-party components.
Comments: A separate set of disk volumes is attached to each node. Only disk volumes
of the secondary replica in the production site get replicated to the DR location. One
set of volumes is required at the DR site.
A dedicated DR setup is where the HANA Large Instance unit in the DR site isn't used for
running any other workload or non-production system. The unit is passive and is
deployed only if a disaster failover is executed. This setup isn't the preferred option for
most customers.
To learn about storage layout and ethernet details for your architecture, see HLI
supported scenarios.
7 Note
Before HANA 2.0 SPS4, taking database snapshots of multitenant database containers
(more than one tenant) wasn't supported. With SPS4 and later, SAP fully supports this
snapshot feature.
A multipurpose DR setup is where the HANA Large Instance unit on the DR site runs a
non-production workload. If there's a disaster, shut down the non-production system,
mount the storage-replicated (added) volume sets, and start the production HANA
instance. Most customers who use the HANA Large Instance disaster recovery
functionality use this configuration.
You can find more information on SAP HANA high availability in the following SAP
articles:
You can also connect all Azure virtual networks that connect to SAP HANA on Azure
(Large Instances) in one region to an ExpressRoute circuit that connects HANA Large
Instances in the other region. With this cross connect, services running on an Azure
virtual network in Region 1 can connect to HANA Large Instance units in Region 2, and
the other way around. This measure addresses a case in which only one of the MSEE
locations that connects to your on-premises location with Azure goes offline.
The following graphic illustrates a resilient configuration for disaster recovery cases:
Other requirements with HANA Large Instances storage replication for disaster recovery
Order SAP HANA on Azure (Large Instances) SKUs of the same size as your
production SKUs and deploy them in the disaster recovery region. In current
customer deployments, these instances are used to run non-production HANA
instances. These configurations are referred to as multipurpose DR setups.
Order more storage on the DR site for each of your SAP HANA on Azure (Large
Instances) SKUs that you want to recover in the disaster recovery site. Buying more
storage lets you allocate the storage volumes. You can allocate the volumes that
are the target of the storage replication from your production Azure region into
the disaster recovery Azure region.
You may have SAP HANA system replication set up on the primary site and storage-based
replication to the DR site. In that case, you must purchase more storage at the DR site
so that the data of both primary and secondary nodes gets replicated to the DR site.
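As a rough illustration of the last point: with HANA system replication on the primary site plus storage replication, volume sets for both primary and secondary nodes replicate to the DR site, so the extra DR storage doubles. The volume size below is a made-up example:

```shell
# Hypothetical sizing: both the primary and the secondary node's volume sets
# replicate to the DR site, so the extra DR storage to order doubles.
VOLUME_SET_TB=8          # size of one node's replicated volume set (example)
REPLICATED_NODES=2       # primary + secondary with HSR on the primary site
echo "extra storage to order at the DR site: $(( VOLUME_SET_TB * REPLICATED_NODES )) TB"
```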
Next steps
Learn about Backup and restore of SAP HANA on HANA Large Instances.
) Important
This article doesn't replace the SAP HANA administration documentation or SAP
Notes. We expect you have expertise in SAP HANA administration and operations,
especially with the topics of backup, restore, high availability, and disaster recovery.
In this article, screenshots from SAP HANA Studio are shown. Content, structure,
and the nature of the screens of SAP administration tools and the tools themselves
might change from SAP HANA release to release.
In this article, we'll walk through the steps of backing up and restoring SAP HANA on
HANA Large Instances (otherwise known as BareMetal Infrastructure).
Some of the processes described in this article are simplified. They aren't intended as
detailed steps to be included in operation handbooks. To create operation handbooks
for your configurations, run and test your processes with your specific HANA versions
and releases. You can then document the processes for your configurations.
One of the most important aspects of operating databases is to protect them from
catastrophic events. Such events may be caused by anything from natural disasters to
simple user errors. Backing up a database, with the ability to restore it to any point in
time, such as before someone deleted critical data, offers critical protection. You can
restore your database to a state that's as close as possible to the way it was prior to the
disruption.
SAP HANA on Azure (Large Instances) offers two backup and restore options:
You can use a third-party data protection tool to create backups. This tool should
be able to create application-consistent snapshots, or it must be able to use the
Backint interface to stream backups with multiple sessions to a proper backup location.
Several supported tools are available. Discuss and design the choice of tool with the
project team so that it meets your backup window requirements. It's also very
important to test the backup and restore procedure during the project phase.
You can use storage snapshot backups with a utility provided by Microsoft, as
described in the next section.
7 Note
Before HANA 2.0 SPS4, taking database snapshots of multitenant database containers
(more than one tenant) wasn't supported. With SPS4 and later, SAP fully supports this
snapshot feature.
Transaction log backups are taken frequently and stored in the /hana/logbackups
volume or in Azure. You can trigger a separate snapshot of the /hana/logbackups
volume that contains the transaction log backups. In that case, you don't need to
run a HANA data snapshot. Because all files in /hana/logbackups are consistent (they
are "offline"), you can also back them up at any time to a different backup location to
archive them. If you must restore a database to a certain point in time, for example
after a production outage, the azacsnap tool can either clone any data snapshot to a
new volume to recover the database (the preferred restore method) or restore a
snapshot to the same data volume where the database is located.
7 Note
If you revert an older snapshot (snap revert) to the original data volume, all snapshots
created after it are deleted. The storage system does this because the data points for
the newer snapshots become invalid. Always revert the latest snapshot first, or,
better, clone the snapshot to a new volume. The clone process doesn't delete
anything.
7 Note
Storage snapshots consume storage space that's allocated to the HANA Large
Instance units. Consider the following aspects of scheduling storage snapshots and
how many storage snapshots to keep.
The specific mechanics of storage snapshots for SAP HANA on Azure (Large Instances)
include:
A specific storage snapshot at the point in time when it's taken consumes little
storage.
As data content changes and the content in SAP HANA data files change on the
storage volume, the snapshot needs to store the original block content and the
data changes.
As a result, the storage snapshot increases in size. The longer the snapshot exists,
the larger the storage snapshot becomes.
The more changes made to the SAP HANA database volume over the lifetime of a
storage snapshot, the larger the space consumption of the storage snapshot.
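The growth behavior described above can be sketched with simple arithmetic; the change rate and snapshot age below are invented numbers for illustration:

```shell
# A snapshot stores the original content of every block changed after it was
# taken, so its size grows with the change rate and the snapshot's age.
CHANGED_GB_PER_DAY=50    # example daily change rate on the data volume
SNAPSHOT_AGE_DAYS=4      # how long the snapshot has existed
echo "approximate snapshot size: $(( CHANGED_GB_PER_DAY * SNAPSHOT_AGE_DAYS )) GB"
```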
SAP HANA on Azure (Large Instances) comes with fixed volume sizes for the SAP HANA
data and log volumes. Taking snapshots of those volumes eats into your volume space,
so you need to manage how many snapshots you take and how long you keep them.
You can disable the storage snapshots when you either import masses of data or make
other significant changes to the HANA database.
The following sections provide information for taking these snapshots and include
general recommendations:
Although the hardware can sustain 255 snapshots per volume, you want to stay
well below this number. The recommendation is 250 or less.
Before you do storage snapshots, monitor and keep track of free space.
Lower the number of storage snapshots based on free space. You can lower the
number of snapshots that you keep, or you can extend the volumes. You can order
more storage in 1-terabyte units.
During activities such as moving data into SAP HANA with SAP platform migration
tools (R3load) or restoring SAP HANA databases from backups, disable storage
snapshots on the /hana/data volume.
During larger reorganizations of SAP HANA tables, avoid storage snapshots if
possible.
Storage snapshots are a prerequisite to take advantage of the DR capabilities of
SAP HANA on Azure (Large Instances).
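The recommendation above to stay at or below 250 snapshots could be enforced with a retention sketch like the following. The snapshot names are hypothetical, and actual deletion would go through your storage tooling or azacsnap:

```shell
# Keep at most MAX_KEEP snapshots per volume; list the oldest ones to delete.
MAX_KEEP=250
snapshots=()
for i in $(seq 1 253); do
  snapshots+=( "$(printf 'hana_snap_%03d' "$i")" )   # oldest first (example names)
done

total=${#snapshots[@]}
if (( total > MAX_KEEP )); then
  echo "snapshots to delete: ${snapshots[*]:0:total-MAX_KEEP}"
fi
```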
If you create a backup VM, make sure the latest HANA client is installed in that VM.
With this method, azacsnap must be able to open a remote database connection to a
HANA instance running in a different VM. You need to request an SSH key and a storage
user from the Microsoft support team to access the storage. Without this SSH key and
user, it isn't possible to create snapshots.
By default, azacsnap creates a user called azacsnap. If you prefer another name, you
can specify it during the installation. Check the documentation above for details.
Next steps
Read the article What is Azure Application Consistent Snapshot tool
Disaster Recovery principles and
preparation
Article • 02/10/2023
In this article, we'll discuss important disaster recovery (DR) principles for HANA Large
Instances (otherwise known as BareMetal Infrastructure). We'll walk through the steps
you need to take in preparation for disaster recovery. You'll also see how to achieve your
recovery time objective (RTO) and recovery point objective (RPO) in a disaster.
Most customers use the unit in the DR region to run non-production systems that use
an installed HANA instance. The HANA Large Instance needs to be of the same SKU as
the SKU used for production purposes. The following image shows what the disk
configuration between the server unit in the Azure production region and the disaster
recovery region looks like:
As shown in this overview graphic, you'll need to order a second set of disk volumes.
The target disk volumes associated with the HANA Large Instance server in the DR site
are the same size as the production volumes.
The following volumes are replicated from the production region to the DR site:
/hana/data
/hana/logbackups
/hana/shared (includes /usr/sap)
The /hana/log volume isn't replicated. The SAP HANA transaction log isn't needed when
restoring from those volumes.
The first transfer copies the complete data of the volume; after that, only the deltas
between snapshots are transferred. The volumes in the DR site then contain all of the
volume snapshots taken in the production site. Eventually, you can use that DR system
to get to an earlier status to recover lost data, without rolling back the production
system.
If there's an MCOD deployment with multiple independent SAP HANA instances on one
HANA Large Instance, all SAP HANA instances should have storage replicated to the DR
side.
When you use HANA System Replication for high-availability in your production site,
and you use storage-based replication for the DR site, the volumes of both nodes from
the primary site to the DR instance are replicated. Purchase extra storage (same size as
primary node) at the DR site to accommodate replication from both primary and
secondary nodes to the DR.
7 Note
The HANA Large Instance storage replication functionality mirrors and replicates
storage snapshots. If you don't take storage snapshots as described in Backup and
restore, there can't be any replication to the DR site. Storage snapshot execution is
a prerequisite to storage replication to the disaster recovery site.
Let's say the server instance hasn't yet been ordered with the extra storage volume set.
Then SAP HANA on Azure Service Management attaches the added volumes. They're a
target for the production replica to the HANA Large Instance on which you're running
the TST HANA instance. You'll need to provide the SID of your production HANA
instance. After SAP HANA on Azure Service Management confirms the attachment of
those volumes, you'll need to mount those volumes to the HANA Large Instance.
The next step is for you to install the second SAP HANA instance on the HANA Large
Instance in the DR Azure region where you run the TST HANA instance. The newly
installed SAP HANA instance needs to have the same SID. The users created need to
have the same UID and Group ID as the production instance. Read Backup and restore
for details. If the installation succeeds, you need to:
) Important
The /hana/log volume isn't replicated because it isn't necessary to restore the
replicated SAP HANA database to a consistent state in the disaster recovery site.
Next, set the storage snapshot backup schedule to achieve your RTO and RPO if there's
a disaster. To minimize the RPO, set the following replication intervals in the HANA
Large Instance service:
For the volumes covered by the combined snapshot (snapshot type hana), set to
replicate every 15 minutes to the equivalent storage volume targets in the disaster
recovery site.
For the transaction log backup volume (snapshot type logs), set to replicate every
3 minutes to the equivalent storage volume targets in the disaster recovery site.
Take a hana type storage snapshot every 30 minutes to 1 hour. For more
information, see Back up using Azure Application Consistent Snapshot tool.
Do SAP HANA transaction log backups every 5 minutes.
Take a logs type storage snapshot every 5-15 minutes. With this interval period,
you achieve an RPO of around 15-25 minutes.
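Under the schedule above, the worst-case RPO roughly adds the transaction log backup interval, the logs-snapshot interval, and the replication interval. This is a simplification for illustration; the exact figure depends on how the intervals align:

```shell
# Back-of-envelope worst-case RPO under the recommended schedule.
LOG_BACKUP_MIN=5        # SAP HANA transaction log backup interval
LOGS_SNAPSHOT_MIN=15    # upper end of the "logs" snapshot interval (5-15 min)
REPLICATION_MIN=3       # replication interval for the log backup volume
echo "worst-case RPO: about $(( LOG_BACKUP_MIN + LOGS_SNAPSHOT_MIN + REPLICATION_MIN )) minutes"
```

The result falls inside the 15-25 minute range stated above.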
With this setup, the sequence of transaction log backups, storage snapshots, and the
replication of the HANA transaction log backup volume and /hana/data, and
/hana/shared (includes /usr/sap) might look like the data shown in this graphic:
To achieve an even better RPO in the disaster recovery case, you can copy the HANA
transaction log backups from SAP HANA on Azure (Large Instances) to the other Azure
region. To achieve this further RPO reduction, take the following steps:
As the replication progresses, the snapshots on the PRD volumes in the DR Azure
regions aren't restored. The snapshots are only stored. If the volumes are mounted in
such a state, they represent the state in which you unmounted those volumes after the
PRD SAP HANA instance was installed on the server in the DR Azure region. They also
represent the storage backups that aren't yet restored.
If there's a failover, you can also choose to restore to an older storage snapshot instead
of to the latest storage snapshot.
Next steps
Learn about the disaster recovery failover procedure.
) Important
This article isn't a replacement for the SAP HANA administration documentation or
SAP Notes. We expect that you have a solid understanding of and expertise in SAP
HANA administration and operations, especially for backup, restore, high
availability, and disaster recovery (DR). In this article, screenshots from SAP HANA
Studio are shown. Content, structure, and the nature of the screens of SAP
administration tools and the tools themselves might change from SAP HANA
release to release.
In this article, we'll walk through the steps of failover to a DR site for SAP HANA on
Azure Large Instances (otherwise known as BareMetal Infrastructure).
You need the SAP HANA database to go back to the latest status of data. In this
case, there's a self-service script you can use to do the failover without the need to
contact Microsoft. For the failback, you need to work with Microsoft.
You want to restore to a storage snapshot that's not the latest replicated snapshot.
In this case, you need to work with Microsoft.
7 Note
The following steps must be done on the HANA Large Instance in the DR site.
To restore to the latest replicated storage snapshots, follow the steps in "Perform full DR
failover - azure_hana_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure .
If you want to have multiple SAP HANA instances failed over, run the
azure_hana_dr_failover command several times. When requested, enter the SAP HANA
SID you want to fail over and restore.
You can test the DR failover without impacting the actual replication relationship. To do
a test failover, follow the steps in "Perform a test DR failover -
azure_hana_test_dr_failover" in Microsoft snapshot tools for SAP HANA on Azure .
) Important
Do not run any production transactions on the instance that you created in the DR
site through the process of testing a failover. The command
azure_hana_test_dr_failover creates a set of volumes that have no relationship to
the primary site. As a result, synchronization back to the primary site is not possible.
If you want to test multiple SAP HANA instances, run the script several times. When
requested, enter the SAP HANA SID of the instance you want to test for failover.
1. Shut down the nonproduction instance of HANA on the DR HANA Large Instance
that you're running. A dormant HANA production instance is preinstalled.
2. Make sure that no SAP HANA processes are running. Use the following command
for this check:
The output should show you the hdbdaemon process in a stopped state and no
other HANA processes in a running or started state.
3. Determine to which snapshot name or SAP HANA backup ID you want to have the
disaster recovery site restored. In real disaster recovery cases, this snapshot is
usually the latest snapshot. If you need to recover lost data, pick an earlier
snapshot.
4. Contact Azure Support through a high-priority support request. Ask for the restore
of that snapshot with the name and date of the snapshot. You can also identify it
by the HANA backup ID on the DR site. The default is for the operations side to
restore the /hana/data volume only. If you want to have the /hana/logbackups
volumes too, you need to specifically state that. Don't restore the /hana/shared
volume. Instead, choose specific files like global.ini out of the .snapshot directory
and its subdirectories after you remount the /hana/shared volume for PRD.
b. Restore the storage snapshot name or snapshot with the backup ID you chose
on the disaster recovery volumes.
After the restore, the disaster recovery volumes are available to be mounted to the
HANA Large Instances in the DR region.
1. Mount the disaster recovery volumes to the HANA Large Instance unit in the
disaster recovery site.
2. Start the dormant SAP HANA production instance.
3. Let's say you chose to copy transaction log backup logs to reduce the recovery
point objective (RPO) time. Then merge the transaction log backups into the newly
mounted DR /hana/logbackups directory. Don't overwrite existing backups. Copy
newer backups that weren't replicated with the latest replication of a storage
snapshot.
4. You can also restore single files out of the snapshots that weren't replicated to the
/hana/shared/PRD volume in the DR Azure region.
You've been running your SAP production workload for a while in the disaster recovery
site. As the problems in the production site are resolved, you want to fail back to your
production site. Because you can't lose data, the step back into the production site
involves several steps and close cooperation with the SAP HANA on Azure operations
team. It's up to you to trigger the operations team to start synchronizing back to the
production site after the problems are resolved.
Follow these steps:
1. The SAP HANA on Azure operations team gets the trigger to synchronize the
production storage volumes from the DR storage volumes, which now represent
the production state. In this state, the HANA Large Instance in the production site
is shut down.
2. The SAP HANA on Azure operations team monitors the replication and makes sure
that it's caught up before they inform you.
3. You shut down the applications that use the production HANA Instance in the
disaster recovery site. You then do a HANA transaction log backup. Next, you stop
the HANA instance that's running on the HANA Large Instances in the disaster
recovery site.
4. Now the operations team manually synchronizes the disk volumes again.
5. The SAP HANA on Azure operations team starts the HANA Large Instance in the
production site again. They hand it over to you. You make sure the SAP HANA
instance is shut down at the startup time of the HANA Large Instance.
6. You take the same database restore steps you did when you previously failed over
to the DR site.
For more information on the command and its output, see "Get DR replication status -
azure_hana_replication_status" in Microsoft snapshot tools for SAP HANA on Azure .
Next steps
Learn about monitoring SAP HANA (Large Instances) on Azure.
7 Note
This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.
In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate
an SAP HANA database failover. You need to have a good understanding of Linux, SAP
HANA, and Pacemaker to complete the steps in this guide.
The following table includes the host names that are used throughout this article. The
code blocks in the article show the commands that need to be run, as well as the output
of those commands. Pay close attention to which node is referenced in each command.
Set SELINUX=disabled in /etc/selinux/config on both nodes:
...
SELINUX=disabled
...
4. Reboot the servers, and then use the following command to verify the status of
SELinux.
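The SELinux change above can be scripted. The following sketch edits a temporary copy of the file so it can run anywhere; on the cluster nodes, apply the same `sed` to /etc/selinux/config itself.

```shell
# Sketch: set SELINUX=disabled and verify. A temp copy of the file is used here
# so the demo runs unprivileged; on a real node, target /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"   # stand-in for /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```

After editing the real file, the change takes effect only after the reboot in step 4.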
5. Configure NTP (Network Time Protocol). The time and time zones for both cluster
nodes must match. Use the following command to open chrony.conf and verify
the contents of the file.
a. Add the following content to the config file. Change the values to match
your environment.
vi /etc/chrony.conf

b. Verify time synchronization with chronyc tracking (its output includes a line
such as Stratum : 3 ), and list the configured sources with chronyc sources .
a. First, install the latest updates on the system before you start to install the SBD
device.
b. Make sure that you have at least version 4.1.1-12.el7_6.26 of the
resource-agents-sap-hana package installed, as documented in Support
Policies for RHEL High Availability Clusters - Management of SAP HANA in a
Cluster.
subscription-manager repos
--enable=rhel-sap-hana-for-rhel-7-server-rpms
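To verify the package requirement from step b, you can compare the installed version against the documented minimum with a version-aware sort. In this sketch the "installed" value is a sample; on a real node, query it with rpm as shown in the comment.

```shell
# Compare an installed package version against the documented minimum using
# sort -V. The "installed" value is a sample; on a real node obtain it with:
#   rpm -q --qf '%{VERSION}-%{RELEASE}\n' resource-agents-sap-hana
installed="4.1.1-12.el7_6.26"
minimum="4.1.1-12.el7_6.26"
lowest=$(printf '%s\n%s\n' "$installed" "$minimum" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
  echo "version OK ($installed >= $minimum)"
else
  echo "update required ($installed < $minimum)"
fi
```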
8. Install Pacemaker, SBD, OpenIPMI, ipmitool, and the SBD fence agent on all
nodes.

yum install pcs sbd fence-agents-sbd OpenIPMI ipmitool
Configure Watchdog
In this section, you learn how to configure Watchdog. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
1. Make sure that the watchdog daemon is not running on any systems.
2. The default Linux watchdog that's installed with the OS is the iTCO
watchdog, which isn't supported on UCS and HPE SDFlex systems. Therefore,
this watchdog must be disabled.
lsmod | grep iTCO

iTCO_wdt 13480 0
c. To make sure the driver is not loaded during the next system boot, the driver
must be blocklisted. To blocklist the iTCO modules, add the following to the end
of the 50-blacklist.conf file:
sollabdsm35:~ # vi /etc/modprobe.d/50-blacklist.conf
blacklist iTCO_wdt
blacklist iTCO_vendor_support
e. Check whether the ipmi service is started. It's important that the IPMI timer
isn't running; timer management is handled by the SBD pacemaker service.
sollabdsm35:~ # ls -l /dev/watchdog
sollabdsm35:~ # vi /etc/sysconfig/ipmi
IPMI_SI=yes
DEV_IPMI=yes
IPMI_WATCHDOG=yes
IPMI_WATCHDOG_OPTIONS="timeout=20 action=reset nowayout=0
panic_wdt_timeout=15"
IPMI_POWEROFF=no
IPMI_POWERCYCLE=no
IPMI_IMB=no
Now the IPMI service is started and the device /dev/watchdog is created, but the
timer is still stopped. Later, SBD manages the watchdog reset and enables the
IPMI timer.
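To double-check the setup, confirm that the watchdog device node exists. On a real node, `ipmitool mc watchdog get` should report the timer as stopped until SBD takes it over. The following minimal check runs on any host, so the absent branch is expected outside the cluster nodes.

```shell
# Check for the watchdog device node created by the ipmi service.
# On the cluster nodes, also run: ipmitool mc watchdog get
if [ -e /dev/watchdog ]; then
  echo "watchdog device: present"
else
  echo "watchdog device: absent (expected on hosts without IPMI configured)"
fi
```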
SBD configuration
In this section, you learn how to configure SBD. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
1. Make sure the iSCSI or FC disk is visible on both nodes. This example uses an FC-
based SBD device. For more information about SBD fencing, see Design Guidance
for RHEL High Availability Clusters - SBD Considerations and Support Policies for
RHEL High Availability Clusters - sbd and fence_sbd .
multipath -ll
3600a098038304179392b4d6c6e2f4b62 dm-5 NETAPP ,LUN C-Mode
size=1.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 8:0:1:2 sdi 8:128 active ready running
| `- 10:0:1:2 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 8:0:3:2 sdj 8:144 active ready running
`- 10:0:3:2 sdl 8:176 active ready running
4. Create the SBD devices and set up the cluster fencing primitive. This step must
be executed on the first node.
vi /etc/sysconfig/sbd
SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_DELAY_START=no
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=15
SBD_TIMEOUT_ACTION=flush,reboot
SBD_MOVE_TO_ROOT_CGROUP=auto
SBD_OPTS=
UUID : ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce
SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
## Type: yesno
## Default: yes
#
# Whether to enable the pacemaker integration.
#
SBD_PACEMAKER=yes
Cluster initialization
In this section, you initialize the cluster. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
passwd hacluster
Username: hacluster
Password:
sollabdsm35.localdomain: Authorized
sollabdsm36.localdomain: Authorized
WARNINGS:
Stack: corosync
2 nodes configured
0 resources configured
No resources
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/disabled
8. If a node isn't joining the cluster, check whether the firewall is still running.
10. Stop the cluster and restart the cluster services (on all nodes).
Active: active (running) since Wed 2021-01-20 01:43:41 EST; 9min ago
13. Restart the cluster (if not automatically started from pcsd).
15. Check the new cluster status, which now shows one configured resource.
pcs status
Stack: corosync
2 nodes configured
1 resource configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
sbd: active/enabled
[root@node1 ~]#
16. The IPMI timer should now be running, and the /dev/watchdog device should be
opened by sbd.
0 sollabdsm35 clear
1 sollabdsm36 clear
19. For the rest of the SAP HANA cluster configuration, you can disable fencing by setting:
The default and supported way is to create a performance-optimized scenario in which
the database can be switched over directly. Only this scenario is described in this
document. In this case, we recommend installing one cluster for the QAS system and a
separate cluster for the PRD system. Only then is it possible to test all
components before they go into production.
The available log replication modes are:

Synchronous: Synchronous (mode=sync) means the log write is considered successful when
the log entry has been written to the log volume of both the primary and the
secondary instance. When the connection to the secondary system is lost, the
primary system continues transaction processing and writes the changes only to
the local disk. No data loss occurs in this scenario as long as the secondary
system is connected. Data loss can occur when a takeover is executed while the
secondary system is disconnected. Additionally, this replication mode can run
with a full sync option. Full sync means that a log write is successful only when the
log buffer has been written to the log file of both the primary and the secondary
instance, and that when the secondary system is disconnected (for example, because of
a network failure), the primary system suspends transaction processing until the
connection to the secondary system is reestablished. No data loss occurs in this
scenario. You can set the full sync option for system replication only with the
parameter [system_replication]/enable_full_sync. For more information on how
to enable the full sync option, see Enable Full Sync Option for System
Replication.

Asynchronous: Asynchronous (mode=async) means the primary system sends redo log buffers
to the secondary system asynchronously. The primary system commits a
transaction when the log has been written to the log file of the primary system and
sent to the secondary system through the network. It doesn't wait for
confirmation from the secondary system. This option provides better
performance because it isn't necessary to wait for log I/O on the secondary
system. Database consistency across all services on the secondary system is
guaranteed. However, it's more vulnerable to data loss. Data changes might be
lost on takeover.
* su - hr2adm
VALUE
"normal"
b. SAP HANA system replication works only after an initial backup has been
performed. The following command creates an initial backup in the /tmp/
directory. Select a proper backup file system for the database.
ls -l /tmp
total 2031784
-rw-r----- 1 hr2adm sapsys 155648 Oct 26 23:31 backup_databackup_0_1
done.
hdbnsutil -sr_state
online: true
mode: primary
site id: 1
site name: DC1
Host Mappings:
~~~~~~~~~~~~~~
Site Mappings:
~~~~~~~~~~~~~~
DC1 (primary/)
Tier of DC1: 1
done.
su - hr2adm
b. For SAP HANA 2.0 only, copy the SAP HANA system PKI files SSFS_HR2.KEY and
SSFS_HR2.DAT from the primary node to the secondary node.
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
su - hr2adm
done.
hdbnsutil -sr_state
~~~~~~~~~
System Replication State
online: true
mode: syncmem
site id: 2
Host Mappings:
Site Mappings:
DC1 (primary/primary)
|---DC2 (syncmem/logreplay)
Tier of DC1: 1
Tier of DC2: 2
done.
~~~~~~~~~~~~~~
3. It is also possible to get more information on the replication status:
~~~~~
hr2adm@node1:/usr/sap/HR2/HDB00> python
/usr/sap/HR2/HDB00/exe/python_support/systemReplicationStatus.py
mode: PRIMARY
site id: 1
For more information about log replication mode, see the official SAP documentation .
To ensure that replication traffic uses the right VLAN, it must be configured
properly in global.ini . If you skip this step, HANA uses the access VLAN for
replication, which might be undesired.
The following examples show the host name resolution configuration for system
replication to a secondary site. Three distinct networks can be identified:
Network for internal SAP HANA communication between hosts at each site:
192.168.1.*
For more information, see Network Configuration for SAP HANA System Replication .
For system replication, it's not necessary to edit the /etc/hosts file. Instead,
internal ('virtual') host names must be mapped to IP addresses in the global.ini file
to create a dedicated network for system replication. The syntax for this is as follows:
global.ini
[system_replication_hostname_resolution]
<ip-address_site>=<internal-host-name_site>
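For example, using the internal SAP HANA communication network 192.168.1.* mentioned above, the section on one site could look like this sketch (the host names and final octets are hypothetical; substitute the remote site's replication IPs and internal host names):

```ini
[system_replication_hostname_resolution]
# Map each remote site's replication IP to its internal (virtual) host name.
# Example values only - adjust to your environment.
192.168.1.101 = hana-dc2-node1
192.168.1.102 = hana-dc2-node2
```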
SAP HANA startup on boot is disabled on all cluster nodes as the start and stop
will be managed by the cluster
SAP HANA system replication and takeover using tools from SAP are working
properly between cluster nodes
SAP HANA contains a monitoring account that can be used by the cluster from both
cluster nodes
Both nodes are subscribed to the 'High-availability' and 'RHEL for SAP HANA' (RHEL
6, RHEL 7) channels
In general, execute all pcs commands from one node only, because the CIB
is automatically updated from the pcs shell.
Steps to configure
1. Configure pcs.
2. Configure corosync. For more information, see How can I configure my RHEL 7
High Availability Cluster with pacemaker and corosync .
cat /etc/corosync/corosync.conf

totem {
    version: 2
    secauth: off
    cluster_name: hana
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1.localdomain
        nodeid: 1
    }

    node {
        ring0_addr: node2.localdomain
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
SID : The SAP system identifier (SID) of the SAP HANA installation. It must be the
same for all nodes.
Resource status
Clone: SAPHanaTopology_HR2_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HR2_00 (class=ocf provider=heartbeat
type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HR2
Operations: monitor interval=60 timeout=60
(SAPHanaTopology_HR2_00-monitor-interval-60)
start interval=0s timeout=180
(SAPHanaTopology_HR2_00-start-interval-0s)
stop interval=0s timeout=60
(SAPHanaTopology_HR2_00-stop-interval-0s)
Primary: SAPHana_HR2_00-primary
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Resource: SAPHana_HR2_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200
InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HR2
Operations: demote interval=0s timeout=320 (SAPHana_HR2_00-demote-
interval-0s)
monitor interval=120 timeout=60 (SAPHana_HR2_00-monitor-
interval-120)
monitor interval=121 role=Secondary timeout=60
(SAPHana_HR2_00-monitor-
interval-121)
monitor interval=119 role=Primary timeout=60
(SAPHana_HR2_00-monitor-
interval-119)
promote interval=0s timeout=320 (SAPHana_HR2_00-promote-
interval-0s)
start interval=0s timeout=180 (SAPHana_HR2_00-start-
interval-0s)
stop interval=0s timeout=240 (SAPHana_HR2_00-stop-
interval-0s)
crm_mon -A1
....
2 nodes configured
5 resources configured
Active resources:
.....
Node Attributes:
* Node node1.localdomain:
+ hana_hr2_clone_state : PROMOTED
+ hana_hr2_remoteHost : node2
+ hana_hr2_roles : 4:P:primary1:primary:worker:primary
+ hana_hr2_site : DC1
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : PRIM
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node1
+ lpa_hr2_lpt : 1540866498
+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:
+ hana_hr2_clone_state : DEMOTED
+ hana_hr2_op_mode : logreplay
+ hana_hr2_remoteHost : node1
+ hana_hr2_roles : 4:S:primary1:primary:worker:primary
+ hana_hr2_site : DC2
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : SOK
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node2
+ lpa_hr2_lpt : 30
+ primary-SAPHana_HR2_00 : 100
6. Create the virtual IP address resource. The cluster uses a virtual IP address to
reach the primary instance of SAP HANA. The following is an example IPaddr2
resource with IP 10.7.0.84/24.
Attributes: ip=10.7.0.84
7. Create constraints.
After each pcs resource move command invocation, the cluster creates location
constraints to achieve the move of the resource. These constraints must be removed to
allow automatic failover in the future. To remove them, use the following command.
demoted host:
result:
Promoted host:
\q to quit
hdbsql HR2=>
DB is online
With the option AUTOMATED_REGISTER=false , you can't switch back and forth.
Consider setting this option to true to automate the registration of the demoted host.
References
1. Automated SAP HANA System Replication in Scale-Up in pacemaker cluster
2. Support Policies for RHEL High Availability Clusters - Management of SAP HANA in
a Cluster
3. Setting up Pacemaker on RHEL in Azure - Azure Virtual Machines
4. Azure HANA Large Instances control through Azure portal - Azure Virtual
Machines
Monitor SAP HANA (Large instances) on
Azure
Article • 02/10/2023
In this article, we'll look at monitoring SAP HANA Large Instances on Azure (otherwise
known as BareMetal Infrastructure).
SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment.
Monitoring the operating system and application is important. You'll want to know how
the applications consume the following resources:
CPU
Memory
Network bandwidth
Disk space
Monitor your SAP HANA Large Instances to see whether the above resources are
sufficient or whether they're being depleted. The following sections give more detail on
each of these resources.
Memory consumption
It's important to monitor memory consumption both within HANA and outside of HANA
on the SAP HANA Large Instance. Monitor how the data is consuming HANA-allocated
memory so you can stay within the sizing guidelines of SAP. Monitor memory
consumption on the Large Instance to make sure non-HANA software doesn't consume
too much memory. You don't want non-HANA software competing with HANA for
memory.
Network bandwidth
The bandwidth of the Azure Virtual Network (VNet) gateway is limited. Only so much
data can move into the Azure VNet. Monitor the data received by all Azure VMs within a
VNet. This way you'll know when you're nearing the limits of the Azure gateway SKU you
selected. It also makes sense to monitor incoming and outgoing network traffic on the
HANA Large Instance to track the volumes handled over time.
Disk space
Disk space consumption usually increases over time. Common causes include:
So it's important to monitor disk space usage and manage the disk space associated
with the HANA Large Instance.
Run the following command to generate the health check log file at
/var/log/health_check.
/opt/sgi/health_check/microsoft_tdi.sh
When you work with the Microsoft Support team to troubleshoot an issue, you may be
asked to provide the log files by using these diagnostic tools. You can zip the file using
this command:
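The packaging command itself isn't shown above; a typical approach is to tar the health-check output (the archive name is illustrative, and the demo falls back to a temp file so it can run on any host):

```shell
# Package the health-check output for Microsoft Support.
# /var/log/health_check is the path from the section above; the archive name
# is an example.
log=/var/log/health_check
if [ ! -e "$log" ]; then
  log=$(mktemp)                                  # stand-in when the real log is absent
  echo "sample health check output" > "$log"
fi
tar czf /tmp/health_check_logs.tar.gz -C "$(dirname "$log")" "$(basename "$log")"
ls -l /tmp/health_check_logs.tar.gz
```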
Next steps
Learn about how to monitor and troubleshoot from within SAP HANA.
In this article, we'll look at monitoring and troubleshooting your SAP HANA on Azure
(Large Instances) using resources provided by SAP HANA.
To analyze problems related to SAP HANA on Azure (Large Instances), you'll want to
narrow down the root cause of a problem. SAP has published lots of documentation to
help you. FAQs related to SAP HANA performance can be found in the following SAP
Notes:
You may notice high CPU consumption on your SAP HANA database from:
Alert 5 (Host CPU usage) is raised for current or past CPU usage
The displayed CPU usage on the overview screen
The Load graph might show high CPU consumption, or high consumption in the past:
For detailed CPU usage troubleshooting steps, see SAP HANA Troubleshooting: CPU
Related Causes and Solutions .
You can check whether Transparent Huge Pages are enabled through the following Linux
command: cat /sys/kernel/mm/transparent_hugepage/enabled
If always is enclosed in brackets, it means that the Transparent Huge Pages are
enabled: [always] madvise never
If never is enclosed in brackets, it means that the Transparent Huge Pages are
disabled: always madvise [never]
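A small script can extract the bracketed (active) value. In this sketch the state string is a sample; on a real host, read it from /sys/kernel/mm/transparent_hugepage/enabled as shown in the comment.

```shell
# Parse the active Transparent Huge Pages setting (the bracketed token).
# On a real system:
#   state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
state="always madvise [never]"   # sample value so the demo runs anywhere
active=$(printf '%s\n' "$state" | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "THP is set to: $active"    # prints: THP is set to: never
```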
The following Linux command should return nothing: rpm -qa | grep ulimit . If the
output shows that a ulimit package is installed, uninstall it immediately.
Memory
You may observe that the amount of memory allotted to the SAP HANA database is
higher than expected. The following alerts indicate issues with high memory usage:
For detailed memory troubleshooting steps, see SAP HANA Troubleshooting: Root
Causes of Memory Problems .
Network
Refer to SAP Note #2081065 – Troubleshooting SAP HANA Network and do the
network troubleshooting steps in this SAP Note.
3. Run Linux command ifconfig (the output shows whether any packet losses are
occurring).
Also, use the open-source IPERF tool (or similar) to measure real application network
performance.
For detailed network troubleshooting steps, see SAP HANA Troubleshooting: Network
Performance and Connectivity Problems .
Storage
Let's say there are issues with I/O performance. End users may then find applications, or
the system as a whole, runs sluggishly, is unresponsive, or can even stop responding. In
the Volumes tab in SAP HANA Studio, you can see the attached volumes and what
volumes are used by each service.
On the lower part of the screen (on the Volumes tab), you can see details of the
volumes, such as files and I/O statistics.
For I/O troubleshooting steps, see SAP HANA Troubleshooting: I/O Related Root Causes
and Solutions . For disk-related troubleshooting steps, see SAP HANA
Troubleshooting: Disk Related Root Causes and Solutions .
Diagnostic tools
Do an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool
returns potentially critical technical issues that should have already been raised as alerts
in SAP HANA Studio.
1. Refer to SAP Note #1969700 – SQL statement collection for SAP HANA and
download the SQL Statements.zip file attached to that note. Store this .zip file on
the local hard drive.
2. In SAP HANA Studio, on the System Information tab, right-click in the Name
column and select Import SQL Statements.
3. Select the SQL Statements.zip file stored locally; a folder with the corresponding
SQL statements will be imported. At this point, the many different diagnostic
checks can be run with these SQL statements.
For example, to test SAP HANA System Replication bandwidth requirements, right-
click the Bandwidth statement under Replication: Bandwidth and select Open in
SQL Console.
Sample outputs:
Next steps
Learn how to set up high availability on the SUSE operating system using the fencing
device.
In this article, we'll walk through validating, configuring, and installing SAP HANA Large
Instances (HLIs) on Azure (otherwise known as BareMetal Infrastructure).
Prerequisites
Before reading this article, become familiar with:
Also see:
7 Note
Per SAP policy, the installation of SAP HANA must be performed by a person who's
passed the Certified SAP Technology Associate exam, SAP HANA Installation
certification exam, or who is an SAP-certified system integrator (SI).
When you're planning to install HANA 2.0, see SAP support note #2235581 - SAP HANA:
Supported operating systems . Make sure the operating system (OS) is supported with
the SAP HANA release you're installing. The supported OS for HANA 2.0 is more
restrictive than the supported OS for HANA 1.0. Confirm that the OS release you're
interested in is supported for the particular HANA Large Instance. Use this list ; select
the HLI to see the details of the supported OS list for that unit.
1. Check in the Azure portal whether the instance(s) are showing up with the correct
SKUs and OS. For more information, see Azure HANA Large Instances control
through Azure portal.
2. Register the OS of the instance with your OS provider. This step includes
registering your SUSE Linux OS in an instance of the SUSE Subscription
Management Tool (SMT) that's deployed in a VM in Azure.
The HANA Large Instance can connect to this SMT instance. (For more information,
see How to set up SMT server for SUSE Linux). If you're using a Red Hat OS, it
needs to be registered with the Red Hat Subscription Manager that you'll connect
to. For more information, see the remarks in What is SAP HANA on Azure (Large
Instances)?.
This step is necessary for patching the OS, which is your responsibility. For SUSE,
see the documentation on installing and configuring SMT .
3. Check for new patches and fixes of the specific OS release/version. Verify that the
HANA Large Instance has the latest patches. Sometimes the latest patches aren't
included, so be sure to check.
4. Check the relevant SAP notes for installing and configuring SAP HANA on the
specific OS release/version. Microsoft won't always configure an HLI completely,
because recommendations change over time, and some SAP notes and configuration
settings depend on individual scenarios.
So be sure to read the SAP notes related to SAP HANA for your exact Linux release.
Also check the configurations of the OS release/version and apply the
configuration settings if you haven't already.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
Starting with SLES12 SP1 and Red Hat Enterprise Linux (RHEL) 7.2, these
parameters must be set in a configuration file in the /etc/sysctl.d directory. For
example, a configuration file with the name 91-NetApp-HANA.conf must be
created. For older SLES and RHEL releases, these parameters must be set
in /etc/sysctl.conf .
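Following the guidance above, the parameters can be placed in the drop-in file in one step. This sketch writes the file to a temp directory so it runs unprivileged; on a real SLES 12 SP1+/RHEL 7.2+ host, target /etc/sysctl.d and apply with `sysctl --system`.

```shell
# Create the sysctl drop-in file with the network tuning parameters listed above.
# Temp dir used here for portability; on the real host use /etc/sysctl.d.
dir=$(mktemp -d)
cat > "$dir/91-NetApp-HANA.conf" <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 65536 16777216 16777216
net.ipv4.tcp_wmem = 65536 16777216 16777216
EOF
# On the real host, load the settings with: sysctl --system
grep -c '^net\.' "$dir/91-NetApp-HANA.conf"   # prints: 7
```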
For all RHEL releases starting with RHEL 6.3, keep in mind:
5. Check the system time of your HANA Large Instance. The instances are deployed
with a system time zone. This time zone represents the location of the Azure
region in which the HANA Large Instance stamp is located. You can change the
system time or time zone of the instances you own.
If you order more instances into your tenant, you need to adapt the time zone of
the newly delivered instances. Microsoft has no insight into the system time zone
you set up with the instances after the handover. So newly deployed instances
might not be set in the same time zone as the one you changed to. It's up to you
to adapt the time zone of the instance(s) that were handed over, as needed.
6. Check /etc/hosts. As the blades get handed over, they have different IP addresses
assigned for different purposes. It's important to check the /etc/hosts file when
units are added into an existing tenant. The /etc/hosts file of the newly deployed
delivered earlier. Ensure that a newly deployed instance can resolve the names of
the units you deployed earlier in your tenant.
Operating system
The swap space of the delivered OS image is set to 2 GB according to the SAP support
note #1999997 - FAQ: SAP HANA memory . If you want a different setting, you must
set it yourself.
SUSE Linux Enterprise Server 12 SP1 for SAP applications is the distribution of Linux
that's installed for SAP HANA on Azure (Large Instances). This distribution provides SAP-
specific capabilities, including pre-set parameters for running SAP on SLES effectively.
For several useful resources related to deploying SAP HANA on SLES, see:
The following documents are SAP support notes applicable to implementing SAP HANA
on SLES 12:
SAP support note #1944799 – SAP HANA guidelines for SLES operating system
installation
SAP support note #2205917 – SAP HANA DB recommended OS settings for SLES
12 for SAP applications
SAP support note #1984787 – SUSE Linux Enterprise Server 12: installation notes
SAP support note #171356 – SAP software on Linux: General information
SAP support note #1391070 – Linux UUID solutions
Red Hat Enterprise Linux for SAP HANA is another offering for running SAP HANA on
HANA Large Instances. Releases of RHEL 7.2 and 7.3 are available and supported. For
more information on SAP on Red Hat, see the SAP HANA on Red Hat Linux site .
The following documents are SAP support notes applicable to implementing SAP HANA
on Red Hat:
SAP support note #2009879 - SAP HANA guidelines for Red Hat Enterprise Linux
(RHEL) operating system
SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL
7
SAP support note #1391070 – Linux UUID solutions
SAP support note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or
higher) on RHEL 6 or SLES 11
SAP support note #2397039 - FAQ: SAP on RHEL
SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and
upgrade
Time synchronization
SAP applications built on the SAP NetWeaver architecture are sensitive to time
differences for the components of the SAP system. SAP ABAP short dumps with the
error title of ZDATE_LARGE_TIME_DIFF are probably familiar. That's because these short
dumps appear when the system time of different servers or virtual machines (VMs) is
drifting too far apart.
For SAP HANA on Azure (Large Instances), the time synchronization done in Azure
doesn't apply to the compute units in the Large Instance stamps. This isn't an issue
for SAP applications running in native Azure VMs, because Azure ensures that a VM's
time is properly synchronized.
As a result, you need to set up a separate time server. This server will be used by SAP
application servers running on Azure VMs. It will also be used by the SAP HANA
database instances running on HANA Large Instances. The storage infrastructure in
Large Instance stamps is time-synchronized with Network Time Protocol (NTP) servers.
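A minimal chrony client fragment pointing both the Azure VMs and the HANA Large Instances at such a dedicated time server could look like this sketch (the server name is a placeholder for your own NTP source):

```ini
# /etc/chrony.conf (fragment) - point all SAP VMs and HANA Large Instances
# at the same dedicated NTP source. The host name below is an example.
server ntp1.contoso.internal iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
```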
Networking
In designing your Azure virtual networks and connecting those virtual networks to the
HANA Large Instances, be sure to follow the recommendations described in:
Here are some details worth mentioning about the networking of the single units. Every
HANA Large Instance unit comes with two or three IP addresses assigned to two or
three network interface controller (NIC) ports. Three IP addresses are used in HANA
scale-out configurations and the HANA system replication scenario. One of the IP
addresses assigned to the NIC of the unit is out of the server IP pool that's described in
SAP HANA (Large Instances) overview and architecture on Azure.
For more information about Ethernet details for your architecture, see HLI supported
scenarios.
Storage
The storage layout for SAP HANA (Large Instances) is configured by SAP HANA on
Azure Service Management using SAP recommended guidelines.
The rough sizes of the different volumes with the different HANA Large Instance SKUs
are documented in SAP HANA (Large Instances) overview and architecture on Azure.
The naming conventions of the storage volumes are listed in the following table:
hana/shared and usr/sap share the same volume. The nomenclature of the mount points includes
the system ID of the HANA instances and the mount number. In scale-up deployments,
there's only one mount, such as mnt00001. In scale-out deployments, you'll see as many
mounts as you have worker and primary nodes.
For scale-out environments, data, log, and log backup volumes are shared and attached
to each node in the scale-out configuration. For configurations with multiple SAP
instances, a different set of volumes is created and attached to the HANA Large
Instance. For storage layout details for your scenario, see HLI supported scenarios.
HANA Large Instances come with a generous disk volume for HANA/data and a
HANA/log/backup volume. We made HANA/data so large because the storage snapshots
use the same disk volume. The more storage snapshots you do, the more space is
consumed by snapshots in your assigned storage volumes.
The HANA/log/backup volume isn't supposed to be the volume for database backups.
It's sized to be used as the backup volume for the HANA transaction log backups. For
more information, see SAP HANA (Large Instances) high availability and disaster
recovery on Azure.
You can increase your storage by purchasing extra capacity in 1-TB increments. This
extra storage can be added as new volumes to a HANA Large Instance.
During onboarding with SAP HANA on Azure Service Management, you'll specify a user
ID (UID) and group ID (GID) for the sidadm user and sapsys group (for example:
1000,500). During installation of the SAP HANA system, you must use these same values.
If you deploy multiple HANA instances on a unit, you get multiple sets of
volumes (one set for each instance). So at deployment time, you need to define:
The SID of the different HANA instances (sidadm is derived from it).
The memory sizes of the different HANA instances. The memory size per instance
defines the size of the volumes in each individual volume set.
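As noted above, sidadm is derived from the SID; the mapping is simply the lowercased SID plus adm. A quick sketch:

```shell
# Derive the <sid>adm administration user name from the SAP system ID (SID).
SID=HR2
sidadm=$(echo "${SID}adm" | tr '[:upper:]' '[:lower:]')
echo "$sidadm"   # prints: hr2adm
```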
nfs rw,vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,lock
These mount points are configured in /etc/fstab as shown in the following screenshots:
The output of the command df -h on a S72m HANA Large Instance looks like:
The storage controller and nodes in the Large Instance stamps are synchronized to NTP
servers. Synchronizing the SAP HANA on Azure (Large Instances) and Azure VMs against
an NTP server is important. It eliminates significant time drift between the infrastructure
and the compute units in Azure or Large Instance stamps.
To optimize SAP HANA to the storage used underneath, set the following SAP HANA
configuration parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the
installation of the SAP HANA database, as described in SAP note #2267798 -
Configuration of the SAP HANA database .
You can also configure the parameters after the SAP HANA database installation by
using the hdbparam framework.
The storage used in HANA Large Instances has a file size limitation of 16 TB per file.
Unlike the file size limitations of EXT3 file systems, HANA isn't implicitly aware of
the limit enforced by the HANA Large Instances storage. As a result, HANA won't
automatically create a new data file when a file reaches the 16-TB limit. When HANA
attempts to grow a file beyond 16 TB, it reports errors, and eventually the index
server crashes.
) Important
To prevent HANA from trying to grow data files beyond the 16 TB file size limit of
HANA Large Instance storage, set the following parameters in the SAP HANA
global.ini configuration file:
datavolume_striping=true
datavolume_striping_size_gb = 15000
See also SAP note #2400005
Be aware of SAP note #2631285
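As a sketch, the two parameters would sit in global.ini as shown below. Placing them in the [persistence] section is an assumption based on SAP note #2400005; verify the section name against your HANA revision.

```shell
# Hypothetical global.ini fragment keeping data volume files below the 16 TB
# per-file limit of HANA Large Instance storage. The [persistence] section
# name follows SAP note #2400005 - confirm it for your HANA revision.
cat > /tmp/global.ini.fragment <<'EOF'
[persistence]
datavolume_striping = true
datavolume_striping_size_gb = 15000
EOF
grep -c 'datavolume_striping' /tmp/global.ini.fragment
```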
With SAP HANA 2.0, the hdbparam framework is deprecated, so the parameters must be
set by using SQL commands. For more information, see SAP note #2399079:
Elimination of hdbparam in HANA 2 .
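On SAP HANA 2.0, the SQL statements would look roughly like the following sketch. The [fileio] section and parameter names match the values listed earlier, but verify them against SAP note #2399079 for your revision; the hdbsql invocation in the comment is an example, not a prescribed command.

```shell
# Hypothetical SQL to set the fileio parameters on SAP HANA 2.0, where the
# hdbparam framework is deprecated. Written to a file for review; you would
# pass it to hdbsql on the HANA host, e.g.:
#   hdbsql -u SYSTEM -i 00 -I /tmp/set_fileio_params.sql
cat > /tmp/set_fileio_params.sql <<'EOF'
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'async_read_submit') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'async_write_submit_blocks') = 'all' WITH RECONFIGURE;
EOF
grep -c 'WITH RECONFIGURE' /tmp/set_fileio_params.sql
```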
Refer to HLI supported scenarios to learn more about the storage layout for your
architecture.
Next steps
Go through the steps of installing SAP HANA on Azure (Large Instances).
7 Note
For Rev 4.2, follow the instructions in the Manage BareMetal Instances through
the Azure portal topic.
This document covers how HANA Large Instances are presented in the Azure portal and
the activities you can conduct through the Azure portal on the HANA Large Instance
units that are deployed for you. Visibility of HANA Large Instances in the Azure
portal is provided through an Azure resource provider for HANA Large Instances, which
is currently in public preview
Azure CLI
For more information, see the article Azure resource providers and types
In the screenshot shown, the resource provider was already registered. If the
resource provider isn't yet registered, select "Re-register" or "Register".
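If you prefer scripting over the portal, registration can also be done with the Azure CLI. The provider namespace Microsoft.HanaOnAzure is the one used for HANA Large Instances, but confirm it in your subscription; the commands are collected in a script here so their shape can be reviewed, and you would run them in a shell that is signed in with az login.

```shell
# Hypothetical sketch: check and register the HANA Large Instances resource
# provider with the Azure CLI. Run the script on a machine that is signed in
# to the correct subscription ('az login' / 'az account set').
cat > /tmp/register-hli-provider.sh <<'EOF'
#!/bin/sh
# Show the current registration state of the provider.
az provider show --namespace Microsoft.HanaOnAzure --query registrationState -o tsv
# Register (or re-register) it if the state is not "Registered".
az provider register --namespace Microsoft.HanaOnAzure
EOF
grep -c 'Microsoft.HanaOnAzure' /tmp/register-hli-provider.sh
```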
To find the new Azure resource group, list the resource groups in your subscription
through the left navigation pane of the Azure portal. You might need to filter the
list on the subscription under which your HANA Large Instances are deployed. After
filtering to the correct subscription, you might still have a long list of resource
groups. Look for one with a postfix of -Txxx, where "xxx" is three digits, such as -T050.
When you've found the resource group, list its details. The list could look like:
Each unit listed represents a single HANA Large Instance unit deployed in your
subscription. In this case, eight different HANA Large Instance units were deployed
in the subscription.
If you deployed several HANA Large Instance tenants under the same Azure
subscription, you'll find multiple Azure resource groups
In the overview screen, after you select 'Show more', you get a presentation of the
unit that looks like:
The attributes shown look much like Azure VM attributes. On the left-hand side of
the header, you see the resource group, Azure region, subscription name, and ID, as
well as any tags that you added. By default, HANA Large Instance units have no tags
assigned. On the right-hand side of the header, the name assigned to the unit at
deployment is listed, along with the operating system and the IP address. As with
VMs, the HANA Large Instance unit type with the number of CPU threads and the memory
is shown as well. More details on the different HANA Large Instance units are shown here:
Additional data on the right lower side is the revision of the HANA Large Instance
stamp. Possible values are:
Revision 3
Revision 4
Revision 4 is the latest architecture of HANA Large Instances, with major
improvements in network latency between Azure VMs and HANA Large Instance units
deployed in Revision 4 stamps or rows. Another important piece of information is in
the lower right corner of the overview: the name of the Azure proximity placement
group that's automatically created for each deployed HANA Large Instance unit.
Reference this proximity placement group when you deploy the Azure VMs that host the
SAP application layer. By using the proximity placement group associated with the
HANA Large Instance unit, you make sure that the Azure VMs are deployed in close
proximity to the unit. How to use proximity placement groups to locate the SAP
application layer in the same Azure datacenter as Revision 4 hosted HANA Large
Instance units is described in Azure Proximity Placement Groups for optimal network
latency with SAP applications.
An additional field in the right column of the header shows the power state of the
HANA Large Instance unit.
7 Note
The power state describes whether the hardware unit is powered on or off. It
doesn't indicate whether the operating system is up and running. When you restart
a HANA Large Instance unit, the state of the unit briefly changes to Starting and
then moves to Started. Started means only that the OS is starting up or has
finished starting up. As a result, after a restart of the unit, you can't expect
to log in to the unit as soon as the state switches to Started.
If you select 'See more', additional information is shown, including the revision of
the HANA Large Instance stamp in which the unit was deployed. See the article What
is SAP HANA on Azure (Large Instances) for the different revisions of HANA Large
Instance stamps
One of the main activities recorded is restarts of a unit. The data listed includes
the status of the activity, the time stamp when the activity was triggered, the
subscription ID from which it was triggered, and the Azure user who triggered it.
Another activity that's recorded is changes to the unit in the Azure metadata.
Besides the restart, you can see the activity Write HANAInstances. This type of
activity performs no changes on the HANA Large Instance unit itself but documents
changes to the unit's metadata in Azure. In the case listed, we added and deleted
a tag (see the next section).
Deleting tags works the same way as with VMs. Both applying and deleting a tag are
listed in the activity log of the particular HANA Large Instance unit.
You already saw the first few data items in the overview screen. An important
additional item is the ExpressRoute circuit ID, which you received when the first
deployed units were handed over. In some support cases, you might be asked for this
data. Another important entry is shown at the bottom of the screenshot: the IP
address of the NFS storage head that isolates your storage to your tenant in the
HANA Large Instance stack. This IP address is also needed when you edit the
Configure Azure Application Consistent Snapshot tool.
As you scroll down in the property pane, you get additional data, such as a unique
resource ID for your HANA Large Instance unit and the subscription ID assigned to
the deployment.
When you press the restart button, you're asked to confirm the restart of the unit.
When you confirm by pressing "Yes", the unit restarts.
7 Note
During the restart process, the state of the unit briefly changes to Starting and
then moves to Started. Started means only that the OS is starting up or has
finished starting up. As a result, after a restart of the unit, you can't expect
to log in to the unit as soon as the state switches to Started.
) Important
Depending on the amount of memory in your HANA Large Instance unit, a restart and
reboot of the hardware and the operating system can take up to one hour
To get the service SAP HANA Large Instances listed in the next screen, you might
need to select 'All services', as shown below
In the list of services, find the service SAP HANA Large Instance. When you choose
that service, you can select specific problem types as shown:
Under each problem type, you're offered a selection of problem subtypes to
characterize your problem further. After selecting the subtype, you can name the
subject. Once you're done with the selection process, you can move to the next step
of the creation. In the Solutions section, you're pointed to documentation around
HANA Large Instances, which might offer a solution to your problem. If you can't
find a solution in the suggested documentation, go to the next step. There, you're
asked whether the issue is with VMs or with HANA Large Instance units. This
information helps direct the support request to the correct specialists.
After you've answered the questions and provided additional details, you can go to
the next step to review the support request and then submit it.
Next steps
How to monitor SAP HANA (large instances) on Azure
Monitoring and troubleshooting from HANA side
High availability setup in SUSE using the
fencing device
Article • 02/10/2023
In this article, we'll go through the steps to set up high availability (HA) in HANA Large
Instances on the SUSE operating system by using the fencing device.
7 Note
This guide is derived from successfully testing the setup in the Microsoft HANA
Large Instances environment. The Microsoft Service Management team for HANA
Large Instances doesn't support the operating system. For troubleshooting or
clarification on the operating system layer, contact SUSE.
The Microsoft Service Management team does set up and fully support the fencing
device. It can help troubleshoot fencing device problems.
Prerequisites
To set up high availability by using SUSE clustering, you need to:
Setup details
This guide uses the following setup:
If you're an existing customer with HANA Large Instances already provisioned, you can
still get the fencing device set up. Provide the following information to the Microsoft
Service Management team in the service request form (SRF). You can get the SRF
through the Technical Account Manager or your Microsoft contact for HANA Large
Instance onboarding.
Server name and server IP address (for example, myhanaserver1 and 10.35.0.1)
Location (for example, US East)
Customer name (for example, Microsoft)
HANA system identifier (SID) (for example, H11)
After the fencing device is configured, the Microsoft Service Management team will
provide you with the SBD name and IP address of the iSCSI storage. You can use this
information to configure fencing setup.
Follow the steps in the following sections to set up HA by using the fencing device.
7 Note
This section applies only to existing customers. If you're a new customer, the
Microsoft Service Management team will give you the SBD device name, so skip
this section.
iqn.1996-04.de.suse:01:<Tenant><Location><SID><NodeNumber>
Microsoft Service Management provides this string. Modify the file on both nodes.
However, the node number is different on each node.
2. Modify /etc/iscsi/iscsid.conf by setting node.session.timeo.replacement_timeout=5
and node.startup = automatic . Modify the file on both nodes.
4. Run the following command on both nodes to sign in to the iSCSI device.
iscsiadm -m node -l
rescan-scsi-bus.sh
The results should show a LUN number greater than zero (for example: 1, 2, and so
on).
6. To get the device name, run the following command on both nodes.
fdisk -l
In the results, choose the device with the size of 178 MiB.
2. Set up the cluster by using either the ha-cluster-init command or the yast2
wizard. In this example, we're using the yast2 wizard. Do this step only on the
primary node.
d. The expected value is the number of nodes deployed (in this case, 2). Select
Next.
Manually copy the file key_hagroup to all members of the cluster after it's
created. Be sure to copy the file from node1 to node2. Then select Next.
j. In the default option, Booting was Off. Change it to On, so the pacemaker
service is started on boot. You can make the choice based on your setup
requirements.
k. Select Next, and the cluster configuration is complete.
modprobe softdog
2. Use the following command to update the file /etc/sysconfig/sbd on both nodes.
4. Use the following command to ensure that softdog is running on both nodes.
5. Use the following command to start the SBD device on both nodes.
/usr/share/sbd/sbd.sh start
6. Use the following command to test the SBD daemon on both nodes.
8. On the second node (node2), use the following command to check the message
status.
10. Use the following command to start the pacemaker service on the primary node
(node1).
ha-cluster-join
If you receive an error during joining of the cluster, see the section Scenario 6: Node2
can't join the cluster later in this article.
crm_mon
You can also sign in to hawk to check the cluster status: https://<node IP>:7630 .
The default user is hacluster and the password is linux. If needed, you can change
the password by using the passwd command.
Cluster bootstrap
Fencing device
Virtual IP address
1. Create the cluster bootstrap file and configure it by adding the following text.
sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
3. Configure the fencing device by adding the resource, creating the file, and adding
text as follows.
# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15"
4. Add the virtual IP address for the resource by creating the file and adding the
following text.
# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"
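Once the three files exist, they would be applied to the cluster configuration with crmsh on the primary node. The following is a sketch: `crm configure load update` is the standard crmsh way to load such files, but verify the exact invocation against your SUSE release.

```shell
# Hypothetical sketch: load the three configuration files into the cluster
# with crmsh. Collected in a script for review; execute on the primary node.
cat > /tmp/load-crm-config.sh <<'EOF'
#!/bin/sh
crm configure load update crm-bs.txt    # cluster bootstrap defaults
crm configure load update crm-sbd.txt   # SBD fencing primitive
crm configure load update crm-vip.txt   # virtual IP resource
EOF
grep -c 'crm configure load update' /tmp/load-crm-config.sh
```

Afterwards, `crm_mon` should show the stonith-sbd and virtual IP resources running.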
2. Stop the pacemaker service on node2, and resources fail over to node1.
Here's the status before failover:
Troubleshooting
This section describes failure scenarios that you might encounter during setup.
2. Go to yast > Software > Software Management > Dependencies, and then select
Install recommended packages.
7 Note
Perform the steps on both nodes, so that you can access the yast2 graphical
view from both nodes.
5. Select Next.
7. Use the following commands to install the libqt4 and libyui-qt packages.
zypper -n install libqt4
1. Go to Yast2 > Software > Software Management. Then select Software > Online
Update.
To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer:
Persistent=true
Scenario 6: Node2 can't join the cluster
The following error appears if there's a problem with joining node2 to the existing
cluster through the ha-cluster-join command.
To fix it:
This article walks through the steps to do an operating system (OS) file-level backup and
restore. The procedure differs depending on parameters like Type I or Type II, Revision 3
or above, location, and so on. Check with Microsoft operations to get the values for
these parameters for your resources.
This review will prepare you to run backup regularly via crontab as described in Back up
using Azure Application Consistent Snapshot tool.
Restore a backup
The restore operation cannot be done from the OS itself. You'll need to raise a support
ticket with Microsoft operations. The restore operation requires the HANA Large
Instance (HLI) to be in powered off state, so schedule accordingly.
Managed OS snapshots
Azure can automatically take OS backups for your HLI resources. These backups are
taken once daily, and Azure keeps up to the latest three such backups. These backups
are enabled by default for all customers in the following regions:
West US
Australia East
Australia Southeast
South Central US
East US 2
East US
North Europe
West Europe
The frequency or retention period of the backups taken by this facility can't be altered. If
a different OS backup strategy is needed for your HLI resources, you may opt out of this
facility by raising a support ticket with Microsoft operations. Then configure Microsoft
Snapshot Tools for SAP HANA to take OS backups by using the instructions provided
earlier in the section, Take a manual backup.
Next steps
Learn how to enable kdump for HANA Large Instances.
This document describes the steps to perform an operating system file level backup and
restore for the Type II SKUs of the HANA Large Instances of Revision 3.
) Important
This article does not apply to Type II SKU deployments in Revision 4 HANA Large
Instance stamps. Boot LUNs of Type II HANA Large Instance units that are deployed
in Revision 4 HANA Large Instance stamps can be backed up with storage snapshots,
as is already the case with Type I SKUs in Revision 3 stamps
7 Note
zypper in xfsdump
xfsdump -l 0 -f /data1/xfs_dump /
3. Important: Save a copy of the backup in NFS volumes as well, in case the data1
partition also gets corrupted.
cp /data1/xfs_dump /osbackup/
4. To exclude regular directories and files from the dump, tag the files with chattr.
chattr -R +d directory
chattr +d file
Run xfsdump with the "-e" option.
Note: It is not possible to exclude NFS file systems (nfs).
How to restore a backup?
7 Note
3. Mount data1 (or nfs volume, wherever the dump is stored) partition in read/write
mode.
5. Restore Filesystem.
reboot
If any post checks fail, please engage the OS vendor and Microsoft for console access.
Network is up.
NFS volumes are mounted.
mdadm -D /dev/md126
3. Ensure that the RAID disks are synced and the configuration is in a clean state.
RAID disks take some time to sync; the sync may continue for a few minutes
before it's 100% complete.
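A quick way to follow the sync state is /proc/mdstat. The content below is a fabricated sample written to a temp file so the check can be demonstrated anywhere; on the real unit you would read /proc/mdstat directly.

```shell
# Hypothetical /proc/mdstat content: '[UU]' means both RAID members are up;
# '[U_]' would indicate a member that is still syncing or has failed.
cat > /tmp/mdstat.sample <<'EOF'
md126 : active raid1 sda[0] sdb[1]
      52395008 blocks super external:/md127/0 [2/2] [UU]
EOF
grep -o '\[UU\]' /tmp/mdstat.sample
```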
hdbinfo
6. If any post checks fail, please engage OS vendor and Microsoft for console access.
Azure Large Instances high availability
for SAP on RHEL
Article • 02/10/2023
7 Note
This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.
In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate
an SAP HANA database failover. You need to have a good understanding of Linux, SAP
HANA, and Pacemaker to complete the steps in this guide.
The following table includes the host names that are used throughout this article. The
code blocks in the article show the commands that need to be run, as well as the output
of those commands. Pay close attention to which node is referenced in each command.
...
SELINUX=disabled
...
SELINUX=disabled
4. Reboot the servers and then use the following command to verify the status of
selinux.
5. Configure NTP (Network Time Protocol). The time and time zones for both cluster
nodes must match. Use the following command to open chrony.conf and verify
the contents of the file.
a. The following contents should be added to the config file. Change the actual
values per your environment.
vi /etc/chrony.conf
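The exact chrony.conf contents were not preserved in this copy; the following is a minimal sample, assuming the public RHEL NTP pool — replace the server lines with the NTP sources used in your environment so both cluster nodes stay in sync.

```shell
# Hypothetical minimal chrony.conf; substitute your own NTP servers.
cat > /tmp/chrony.conf.sample <<'EOF'
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
grep -c '^server' /tmp/chrony.conf.sample
```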
chronyc tracking
Stratum : 3
chronyc sources
====================================================================
===========
a. First, install the latest updates on the system before you start to install the SBD
device.
b. Customers must make sure that they have at least version 4.1.1-12.el7_6.26 of
the resource-agents-sap-hana package installed, as documented in Support
Policies for RHEL High Availability Clusters - Management of SAP HANA in a
Cluster
subscription-manager repos
--enable=rhel-sap-hana-for-rhel-7-server-rpms
8. Install the Pacemaker, SBD, OpenIPMI, ipmitool, and fencing_sbd tools on all
nodes.
yum install pcs sbd fence-agent-sbd.x86_64 OpenIPMI
ipmitool
Configure Watchdog
In this section, you learn how to configure Watchdog. This section uses the same two
hosts, sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
1. Make sure that the watchdog daemon is not running on any systems.
2. The default Linux watchdog that is installed during installation is the iTCO
watchdog, which is not supported by UCS and HPE SDFlex systems. Therefore, this
watchdog must be disabled.
iTCO_wdt 13480 0
c. To make sure the driver is not loaded during the next system boot, the driver
must be blocklisted. To blocklist the iTCO modules, add the following to the end
of the 50-blacklist.conf file:
sollabdsm35:~ # vi /etc/modprobe.d/50-blacklist.conf
blacklist iTCO_wdt
blacklist iTCO_vendor_support
e. Test if the ipmi service is started. It is important that the IPMI timer is not
running. The timer management will be done from the SBD pacemaker service.
sollabdsm35:~ # ls -l /dev/watchdog
sollabdsm35:~ # vi /etc/sysconfig/ipmi
IPMI_SI=yes
DEV_IPMI=yes
IPMI_WATCHDOG=yes
IPMI_WATCHDOG_OPTIONS="timeout=20 action=reset nowayout=0
panic_wdt_timeout=15"
IPMI_POWEROFF=no
IPMI_POWERCYCLE=no
IPMI_IMB=no
Now the IPMI service is started and the device /dev/watchdog is created, but the
timer is still stopped. Later, SBD manages the watchdog reset and enables the
IPMI timer.
SBD configuration
In this section, you learn how to configure SBD. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
1. Make sure the iSCSI or FC disk is visible on both nodes. This example uses an FC-
based SBD device. For more information about SBD fencing, see Design Guidance
for RHEL High Availability Clusters - SBD Considerations and Support Policies for
RHEL High Availability Clusters - sbd and fence_sbd
multipath -ll
3600a098038304179392b4d6c6e2f4b62 dm-5 NETAPP ,LUN C-Mode
size=1.0G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 8:0:1:2 sdi 8:128 active ready running
| `- 10:0:1:2 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 8:0:3:2 sdj 8:144 active ready running
`- 10:0:3:2 sdl 8:176 active ready running
4. Create the SBD discs and set up the cluster fencing primitive. This step must be
executed on the first node.
vi /etc/sysconfig/sbd
SBD_DEVICE="/dev/mapper/3600a09803830417934d6c6e2f4b62"
SBD_PACEMAKER=yes
SBD_STARTMODE=always
SBD_DELAY_START=no
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=15
SBD_TIMEOUT_ACTION=flush,reboot
SBD_MOVE_TO_ROOT_CGROUP=auto
SBD_OPTS=
UUID : ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce
SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
## Type: yesno
Default: yes
# Whether to enable the pacemaker integration.
SBD_PACEMAKER=yes
Cluster initialization
In this section, you initialize the cluster. This section uses the same two hosts,
sollabdsm35 and sollabdsm36 , referenced at the beginning of this article.
passwd hacluster
Username: hacluster
Password:
sollabdsm35.localdomain: Authorized
sollabdsm36.localdomain: Authorized
WARNINGS:
Stack: corosync
2 nodes configured
0 resources configured
No resources
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/disabled
8. If one node is not joining the cluster, check whether the firewall is still running.
10. Stop the cluster, and restart the cluster services (on all nodes).
Active: active (running) since Wed 2021-01-20 01:43:41 EST; 9min ago
13. Restart the cluster (if not automatically started from pcsd).
15. Check the new cluster status with now one resource.
pcs status
Stack: corosync
2 nodes configured
1 resource configured
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
sbd: active/enabled
[root@node1 ~]#
16. Now the IPMI timer must run and the /dev/watchdog device must be opened by
sbd.
0 sollabdsm35 clear
1 sollabdsm36 clear
19. For the rest of the SAP HANA clustering you can disable fencing by setting:
The default and supported way is to create a performance-optimized scenario where the
database can be switched over directly. Only this scenario is described in this
document. In this case, we recommend installing one cluster for the QAS system and a
separate cluster for the PRD system. Only then is it possible to test all
components before they go into production.
Log Replication Mode: Synchronous
Synchronous (mode=sync) means the log write is considered successful when
the log entry has been written to the log volume of the primary and the
secondary instance. When the connection to the secondary system is lost, the
primary system continues transaction processing and writes the changes only to
the local disk. No data loss occurs in this scenario as long as the secondary
system is connected. Data loss can occur when a takeover is executed while the
secondary system is disconnected. Additionally, this replication mode can run
with a full sync option. This means that log write is successful when the log
buffer has been written to the log file of the primary and the secondary instance.
In addition, when the secondary system is disconnected (for example, because of
network failure) the primary systems suspends transaction processing until the
connection to the secondary system is reestablished. No data loss occurs in this
scenario. You can set the full sync option for system replication only with the
parameter [system_replication]/enable_full_sync. For more information on how
to enable the full sync option, see Enable Full Sync Option for System
Replication.
Log Replication Mode: Asynchronous
Asynchronous (mode=async) means the primary system sends redo log buffers
to the secondary system asynchronously. The primary system commits a
transaction when it has been written to the log file of the primary system and
sent to the secondary system through the network. It does not wait for
confirmation from the secondary system. This option provides better
performance because it is not necessary to wait for log I/O on the secondary
system. Database consistency across all services on the secondary system is
guaranteed. However, it is more vulnerable to data loss. Data changes may be
lost on takeover.
* su - hr2adm
VALUE
"normal"
b. SAP HANA system replication will only work after an initial backup has been
performed. The following command creates an initial backup in the /tmp/
directory. Select a proper backup file system for the database.
ls -l /tmp
total 2031784
-rw-r----- 1 hr2adm sapsys 155648 Oct 26 23:31 backup_databackup_0_1
done.
hdbnsutil -sr_state
online: true
mode: primary
site id: 1
site name: DC1
Host Mappings:
~~~~~~~~~~~~~~
Site Mappings:
~~~~~~~~~~~~~~
DC1 (primary/)
Tier of DC1: 1
done.
su - hr2adm
b. For SAP HANA2.0 only, copy the SAP HANA system PKI SSFS_HR2.KEY and
SSFS_HR2.DAT files from primary node to secondary node.
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
scp
root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
su - hr2adm
done.
hdbnsutil -sr_state
~~~~~~~~~
System Replication State
online: true
mode: syncmem
site id: 2
Host Mappings:
Site Mappings:
DC1 (primary/primary)
|---DC2 (syncmem/logreplay)
Tier of DC1: 1
Tier of DC2: 2
done.
~~~~~~~~~~~~~~
3. It is also possible to get more information on the replication status:
~~~~~
hr2adm@node1:/usr/sap/HR2/HDB00> python
/usr/sap/HR2/HDB00/exe/python_support/systemReplicationStatus.py
mode: PRIMARY
site id: 1
For more information about log replication mode, see the official SAP documentation .
To ensure that the replication traffic uses the right VLAN, configure it properly in
the global.ini . If you skip this step, HANA uses the access VLAN for the
replication, which might be undesired.
The following examples show the host name resolution configuration for system
replication to a secondary site. Three distinct networks can be identified:
Network for internal SAP HANA communication between hosts at each site:
192.168.1.*
For more information, see Network Configuration for SAP HANA System Replication .
For system replication, it is not necessary to edit the /etc/hosts file. Instead,
internal ('virtual') host names must be mapped to IP addresses in the global.ini
file to create a dedicated network for system replication. The syntax for this is as follows:
global.ini
[system_replication_hostname_resolution]
<ip-address_site>=<internal-host-name_site>
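A concrete instance of that syntax may help; the 10.5.1.* addresses below are placeholders for a dedicated replication VLAN, and node1/node2 reuse the host names from this article — substitute your own addresses and internal host names.

```shell
# Hypothetical [system_replication_hostname_resolution] section mapping the
# remote site's replication-network IPs (10.5.1.* is an example VLAN) to
# internal host names. Configure the matching section on each site.
cat > /tmp/global.ini.sr <<'EOF'
[system_replication_hostname_resolution]
10.5.1.1 = node1
10.5.1.2 = node2
EOF
grep -c '^10\.5\.1\.' /tmp/global.ini.sr
```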
SAP HANA startup on boot is disabled on all cluster nodes as the start and stop
will be managed by the cluster
SAP HANA system replication and takeover using tools from SAP are working
properly between cluster nodes
SAP HANA contains monitoring account that can be used by the cluster from both
cluster nodes
Both nodes are subscribed to 'High-availability' and 'RHEL for SAP HANA' (RHEL
6,RHEL 7) channels
In general, execute all pcs commands only from one node, because the CIB
will be automatically updated from the pcs shell.
Steps to configure
1. Configure pcs.
2. Configure corosync. For more information, see How can I configure my RHEL 7
High Availability Cluster with pacemaker and corosync .
cat /etc/corosync/corosync.conf
totem {
version: 2
secauth: off
cluster_name: hana
transport: udpu
nodelist {
node {
ring0_addr: node1.localdomain
nodeid: 1
node {
ring0_addr: node2.localdomain
nodeid: 2
quorum {
provider: corosync_votequorum
two_node: 1
logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
SID — the SAP system identifier (SID) of the SAP HANA installation. Must be the same
for all nodes.
Resource status
Clone: SAPHanaTopology_HR2_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HR2_00 (class=ocf provider=heartbeat
type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HR2
Operations: monitor interval=60 timeout=60
(SAPHanaTopology_HR2_00-monitor-interval-60)
start interval=0s timeout=180
(SAPHanaTopology_HR2_00-start-interval-0s)
stop interval=0s timeout=60
(SAPHanaTopology_HR2_00-stop-interval-0s)
Primary: SAPHana_HR2_00-primary
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Resource: SAPHana_HR2_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200
InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HR2
Operations: demote interval=0s timeout=320 (SAPHana_HR2_00-demote-
interval-0s)
monitor interval=120 timeout=60 (SAPHana_HR2_00-monitor-
interval-120)
monitor interval=121 role=Secondary timeout=60
(SAPHana_HR2_00-monitor-
interval-121)
monitor interval=119 role=Primary timeout=60
(SAPHana_HR2_00-monitor-
interval-119)
promote interval=0s timeout=320 (SAPHana_HR2_00-promote-
interval-0s)
start interval=0s timeout=180 (SAPHana_HR2_00-start-
interval-0s)
stop interval=0s timeout=240 (SAPHana_HR2_00-stop-
interval-0s)
crm_mon -A1
....
2 nodes configured
5 resources configured
Active resources:
.....
Node Attributes:
* Node node1.localdomain:
+ hana_hr2_clone_state : PROMOTED
+ hana_hr2_remoteHost : node2
+ hana_hr2_roles : 4:P:primary1:primary:worker:primary
+ hana_hr2_site : DC1
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : PRIM
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node1
+ lpa_hr2_lpt : 1540866498
+ primary-SAPHana_HR2_00 : 150
* Node node2.localdomain:
+ hana_hr2_clone_state : DEMOTED
+ hana_hr2_op_mode : logreplay
+ hana_hr2_remoteHost : node1
+ hana_hr2_roles : 4:S:primary1:primary:worker:primary
+ hana_hr2_site : DC2
+ hana_hr2_srmode : syncmem
+ hana_hr2_sync_state : SOK
+ hana_hr2_version : 2.00.033.00.1535711040
+ hana_hr2_vhost : node2
+ lpa_hr2_lpt : 30
+ primary-SAPHana_HR2_00 : 100
6. Create the virtual IP address resource. The cluster will contain a virtual IP
address so that the primary instance of SAP HANA can be reached. Below is an
example of the attributes of an IPaddr2 resource with IP 10.7.0.84/24.
Attributes: ip=10.7.0.84
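The full command behind those attributes was not preserved here. As a sketch, the standard pcs syntax would look like the following; the resource name rsc_ip_HR2_00 is an assumption, and the IP matches the example above.

```shell
# Hypothetical pcs command for the virtual IP resource shown above.
# Resource name and operation timings are assumptions - adjust to your setup.
cat > /tmp/create-vip.sh <<'EOF'
#!/bin/sh
pcs resource create rsc_ip_HR2_00 ocf:heartbeat:IPaddr2 \
    ip=10.7.0.84 cidr_netmask=24 op monitor interval=10s
EOF
grep -c 'IPaddr2' /tmp/create-vip.sh
```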
7. Create constraints.
After each pcs resource move invocation, the cluster creates location constraints to
carry out the move of the resource. These constraints must be removed to allow
automatic failover in the future. You can remove them with the following command.
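A removal sequence might look like this; the constraint ID shown is an assumption, so use the IDs reported on your cluster.

```shell
# List all constraints with their IDs; constraints generated by
# "pcs resource move" typically have IDs starting with "cli-".
pcs constraint list --full
# Remove a constraint by ID (the ID shown here is an assumption):
pcs constraint remove cli-prefer-SAPHana_HR2_00-clone
# Recent pcs versions can also clear move/ban constraints directly:
pcs resource clear SAPHana_HR2_00-clone
```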
To verify the result, connect to the database with hdbsql on each node. On the
promoted host, the hdbsql HR2=> prompt confirms the database is online (enter \q to
quit); on the demoted host, the database isn't reachable until the node is registered
as a secondary.
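A minimal connectivity check with hdbsql might look like the following; the SYSTEM user, instance number, and query are assumptions for illustration.

```shell
# Sketch: verify the database responds on the promoted host. The SYSTEM
# user, instance number, and query are assumptions; hdbsql prompts for
# the password when -p isn't supplied.
hdbsql -i 00 -d HR2 -u SYSTEM "SELECT * FROM DUMMY"
```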
With the option AUTOMATED_REGISTER=false, you can't switch back and forth automatically.
Consider setting this option to true to automate the registration of the demoted host.
References
1. Automated SAP HANA System Replication in Scale-Up in pacemaker cluster
2. Support Policies for RHEL High Availability Clusters - Management of SAP HANA in
a Cluster
3. Setting up Pacemaker on RHEL in Azure - Azure Virtual Machines
4. Azure HANA Large Instances control through Azure portal - Azure Virtual
Machines
kdump for SAP HANA on Azure Large
Instances
Article • 02/10/2023
In this article, we'll walk through enabling the kdump service on Azure HANA Large
Instances (HLI) Type I and Type II.
Configuring and enabling kdump is needed to troubleshoot system crashes that don't
have a clear cause. Sometimes a system crash cannot be explained by a hardware or
infrastructure problem. In such cases, an operating system or application may have
caused the problem. kdump will allow SUSE to determine the reason for the system
crash.
Supported SKUs
HANA Large Instance type OS vendor OS package version SKU
Prerequisites
The kdump service uses the /var/crash directory to write dumps. Make sure the
partition corresponding to this directory has sufficient space to accommodate
dumps.
Setup details
The script to enable kdump can be found in the Azure sap-hana-tools on GitHub
7 Note
This script is based on our lab setup. You'll need to contact your OS vendor
for any further tuning. A separate logical unit number (LUN) will be provisioned for
new and existing servers for saving the dumps. A script will take care of configuring
the file system out of the LUN. Microsoft won't be responsible for analyzing the
dump. You will need to open a ticket with your OS vendor to have it analyzed.
Run this script on your HANA Large Instance by using the following command:
Bash
If the command's output shows kdump is successfully enabled, reboot the system
to apply the changes.
If the command's output shows that an operation failed, the kdump service isn't
enabled. Refer to the Support issues section that follows.
Test kdump
7 Note
The following operation will trigger a kernel crash and system reboot.
Bash
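The crash is commonly triggered through the kernel's magic SysRq interface; run this only on a system where a crash and reboot are acceptable.

```shell
# Enable the magic SysRq interface, then trigger a kernel crash.
# The system panics and, with kdump working, writes a dump and reboots.
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
```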
After the system reboots successfully, check the /var/crash directory for kernel
crash logs.
If /var/crash contains a directory with the current date, kdump is successfully
enabled.
Support issues
If the script fails with an error, or kdump isn't enabled, raise a service request with the
Microsoft support team. Include the following details:
HLI subscription ID
Server name
OS vendor
OS version
Kernel version
Next steps
Learn about operating system upgrades on HANA Large Instances.
This article describes the details of operating system (OS) upgrades on HANA Large
Instances (HLI), otherwise known as BareMetal Infrastructure.
7 Note
This article contains references to the terms blacklist and slave, terms that Microsoft
no longer uses. When the term is removed from the software, we’ll remove it from
this article.
7 Note
During HLI provisioning, the Microsoft operations team installs the operating system.
You're required to maintain the operating system. For example, you need to do the
patching, tuning, upgrading, and so on, on the HLI. Before you make major changes to
the operating system, for example, upgrade SP1 to SP2, contact the Microsoft
Operations team by opening a support ticket. They will consult with you. We
recommend opening this ticket at least one week before the upgrade.
For the support matrix of the different SAP HANA versions with the different Linux
versions, see SAP Note #2235581 .
Known issues
There are a couple of known issues with the upgrade:
On Type II class SKUs, the software foundation software (SFS) is removed
during the OS upgrade. You'll need to reinstall the compatible SFS after the OS
upgrade is complete.
Ethernet card drivers (ENIC and FNIC) are rolled back to an older version. You'll
need to reinstall the compatible version of the drivers after the upgrade.
rpm -e <old-rpm-package>
modinfo enic
modinfo fnic
Steps for eNIC/fNIC drivers installation during OS upgrade
Upgrade OS version
Remove old rpm packages
Install compatible eNIC/fNIC drivers as per installed OS version
Reboot system
After reboot, check the eNIC/fNIC version
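The steps above can be sketched as a command sequence; the package names below are placeholders, as the actual rpm names depend on your installed OS version.

```shell
# Sketch of the driver refresh after an OS upgrade; the package names
# are placeholders and vary by installed OS version.
rpm -qa | grep -i -e enic -e fnic                     # find the old driver packages
rpm -e <old-enic-rpm-package> <old-fnic-rpm-package>  # remove them
rpm -ivh <compatible-enic-rpm> <compatible-fnic-rpm>  # install matching drivers
reboot
# After the reboot, confirm the loaded driver versions:
modinfo enic | grep -i version
modinfo fnic | grep -i version
```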
Execution Steps
7 Note
Execution Steps
Check whether the EDAC modules are enabled. If an output is returned from the
following command, the modules are enabled.
blacklist sb_edac
blacklist edac_core
A reboot is required for the changes to take place. After reboot, execute the lsmod
command again and verify the modules aren't enabled.
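A check-and-disable sequence might look like the following; the blacklist file name is an assumption, as any .conf file under /etc/modprobe.d/ works.

```shell
# If this prints anything, the EDAC modules are loaded:
lsmod | grep -i edac
# Disable them by blacklisting (the file name is an assumption):
cat <<'EOF' >> /etc/modprobe.d/blacklist-edac.conf
blacklist sb_edac
blacklist edac_core
EOF
# Reboot, then run lsmod again to verify the modules are no longer loaded.
```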
Kernel parameters
Make sure the correct settings for transparent_hugepage , numa_balancing ,
processor.max_cstate , ignore_ce , and intel_idle.max_cstate are applied.
intel_idle.max_cstate=1
processor.max_cstate=1
transparent_hugepage=never
numa_balancing=disable
mce=ignore_ce
Execution Steps
grub2-mkconfig -o /boot/grub2/grub.cfg
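On SLES with GRUB2, these parameters are typically appended to GRUB_CMDLINE_LINUX in /etc/default/grub before regenerating the configuration. The sed invocation below is a sketch; review the file by hand before rebooting, because existing options vary per system.

```shell
# Sketch: append the required parameters to the kernel command line in
# /etc/default/grub, then regenerate the GRUB configuration.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce"/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
```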
Next steps
Learn to set up an SMT server for SUSE Linux.
Set up SMT server for SUSE Linux
Set up SMT server for SUSE Linux
Article • 02/10/2023
In this article, we'll walk through setting up an SMT server for SAP HANA on
Azure Large Instances, otherwise known as BareMetal Infrastructure.
Large Instances of SAP HANA don't have direct connectivity to the internet. As a result,
it isn't straightforward to register such a unit with the operating system provider and to
download and apply updates. A solution for SUSE Linux is to set up an SMT server in an
Azure virtual machine (VM). You'll host the virtual machine in an Azure virtual network
connected to the HANA Large Instance (HLI). With the SMT server in place, the HANA
Large Instance can register and download updates.
For more information on SUSE, see their Subscription Management Tool for SLES 12
SP2 .
Prerequisites
To install an SMT server for HANA Large Instances, you'll first need:
2. Install a SUSE Linux VM in the Azure virtual network. To deploy the virtual machine,
take an SLES 12 SP2 gallery image of Azure (select BYOS SUSE image). In the
deployment process, don't define a DNS name, and don't use static IP addresses.
The deployed virtual machine has the internal IP address in the Azure virtual
network of 10.34.1.4. The name of the virtual machine is smtserver. After the
installation, check connectivity to the HANA Large Instances. Depending on how
you organized name resolution, you might need to configure resolution of the
HANA Large Instances in etc/hosts of the Azure virtual machine.
3. Add a disk to the virtual machine. You'll use this disk to hold the updates; the boot
disk itself could be too small. Here, the disk is mounted to /srv/www/htdocs, as
shown in the following screenshot. A 100-GB disk should suffice.
4. Sign in to the HANA Large Instances; maintain /etc/hosts. Check whether you can
reach the Azure virtual machine that will run the SMT server over the network.
5. Sign in to the Azure virtual machine that will run the SMT server. If you're using
putty to sign in to the virtual machine, run this sequence of commands in your
bash window:
cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc
8. After the virtual machine is connected to the SUSE site, install the SMT packages.
Use the following putty command to install the SMT packages.
You can also use the YAST tool to install the SMT packages. In YAST, go to
Software Maintenance, and search for smt. Select smt, which switches
automatically to yast2-smt.
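From the shell, the SMT packages can typically be installed with zypper:

```shell
# Install the SMT server package and the YaST SMT module on SLES 12:
zypper in smt yast2-smt
```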
9. After the installation completes, go to the SMT server configuration. Enter the
organizational credentials from the SUSE Customer Center you retrieved earlier.
Also enter your Azure virtual machine hostname as the SMT Server URL. In this
example, it's https://smtserver.
10. Now test whether the connection to the SUSE Customer Center works. As you see
in the following screenshot, in this example, it did work.
11. After the SMT setup starts, provide a database password. Because it's a new
installation, you should define that password as shown in the following screenshot.
At the end of the configuration, it might take a few minutes to run the
synchronization check. After the installation and configuration of the SMT server,
you should find the directory repo under the mount point /srv/www/htdocs/. There
are also some subdirectories under the repo.
13. Restart the SMT server and its related services with these commands.
rcsmt restart
systemctl restart smt.service
systemctl restart apache2
2. Start the initial copy of the select packages to the SMT server you set up. This copy
is triggered in the shell by using the command, smt-mirror.
The packages should be copied into the directories created under the mount point
/srv/www/htdocs. This process can take an hour or more, depending on how many
packages you select. As this process finishes, move to the SMT client setup.
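As described above, the copy is triggered in the shell with a single command:

```shell
# Start the initial mirror of the selected repositories; this can take
# an hour or more depending on how many repositories were enabled.
smt-mirror
```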
The client might load the certificate from the server successfully. In this
example, however, the registration fails, as shown in the following screenshot.
If the registration fails, see the SUSE support document and run the steps described
there.
) Important
For the server name, provide the name of the virtual machine (in this case,
smtserver), without the fully qualified domain name.
After running these steps, run the following command on the HANA Large Instance:
SUSEConnect --cleanup
7 Note
Wait a few minutes after that step. If you run clientSetup4SMT.sh immediately, you
might get an error.
If you find a problem you need to fix based on the steps of the SUSE article, restart
clientSetup4SMT.sh on the HANA Large Instance. Now it should finish successfully.
You configured the SMT client of the HLI to connect to the SMT server installed on the
Azure VM. Now use zypper up or zypper in to install OS updates or other packages on
the HANA Large Instances. You can only get updates that you previously downloaded on
the SMT server.
Next steps
Learn about migrating SAP HANA on Azure Large Instance to Azure Virtual Machines.
This article describes possible Azure Large Instance deployment scenarios and offers
a planning and migration approach that minimizes transition downtime.
Overview
Azure Large Instances for SAP HANA (HLI) were first announced in September 2016.
Since then, many have adopted this hardware as a service for their in-memory compute
platform. Yet in recent years, the Azure virtual machine (VM) size extension and support
of HANA scale-out deployment has exceeded most enterprise customers’ ERP database
capacity demand. Many are expressing an interest in migrating their SAP HANA
workload from physical servers to Azure VMs.
Assumptions
This article makes the following assumptions:
Deployment scenarios
You can migrate to Azure VMs for all HLI scenarios. Common deployment models for
HLI are summarized in the following table. To benefit from complementary Azure
services, you may have to make minor architectural changes.
Scenario 5, HSR with fencing for high availability: yes, can migrate. There's no
preconfigured SBD for target VMs; select and deploy a fencing solution. Possible
options: Azure Fencing Agent (supported for both RHEL and SLES) and SBD.
Scenario 7, Host auto failover (1+1): yes, can migrate. Use Azure NetApp Files (ANF)
for shared storage with Azure VMs.
Is the new VM migration target placed in the existing virtual network with IP
address ranges already permitted to connect to the HLI? Then no further
connectivity update is required.
Is the new Azure VM placed in a new Microsoft Azure Virtual Network, perhaps in
another region, and peered with the existing virtual network? Then you can use the
ExpressRoute service key and Resource ID from the original HLI provisioning to
allow access for this new virtual network IP range. Coordinate with Microsoft
Service Management to enable the virtual network to HLI connectivity.
7 Note
Backing up the HLI content is critical. It's also prudent to have full backups of the SAP
landscape readily accessible in case a rollback is needed.
Destination planning
Careful planning is essential in deploying a new infrastructure to take the place of an
existing one. Ensure the new addition will fulfill your needs in the larger scheme of
things. Here are some key points to consider.
Virtual network
Do you want to run the new HANA database in an existing virtual network or create a
new one? The primary deciding factor is the current networking layout for the SAP
landscape. Also, when the infrastructure goes from one-zone to two-zones deployment
and uses PPG, it imposes architectural change. For more information, see the article
Azure PPG for optimal network latency with SAP application.
Security
Whether the new SAP HANA VM runs on a new or existing virtual network/subnet, it's a
new service critical to your business, and it deserves safeguarding. Ensure that
access control complies with your company's security policy.
VM sizing recommendation
This migration is also an opportunity to right size your HANA compute engine. You can
use HANA system views with HANA Studio to understand the system resource
consumption, which allows for right sizing to drive spending efficiency.
Storage
Storage performance is one of the factors that will affect your SAP application user
experience. There are minimum storage layouts published for given VM SKUs. For more
information, see SAP HANA Azure virtual machine storage configurations. We
recommend reviewing these specs and comparing against your existing HLI system
statistics to ensure adequate IO capacity and performance for your new HANA VM.
Will you configure a PPG for the new HANA VM and its associated servers? Then submit a
support ticket to inspect and ensure the co-location of the storage and the VM.
Because your backup solution may need to change, also revisit the storage cost to
avoid operational spending surprises.
If members of your HANA system are deployed in more than one Azure Zone, you
should be aware of the latency profile of the chosen zones. Place SAP system
components to lessen distance between the SAP application and the database. The
public domain Availability zone latency test tool helps make the measurement easier.
Backup strategy
Many of our customers already use third-party backup solutions for SAP HANA on
HLI. If you do, then only the newly protected VM and HANA databases need to be
configured. Ongoing HLI backup jobs can be unscheduled if the machine is being
decommissioned after the migration.
Azure backup for SAP HANA on VM is now generally available. For more information on
SAP HANA backup in Azure VMs, see Backup, Restore, and Manage.
DR strategy
If your service level goals accommodate a longer recovery time, backup can be a simple
DR strategy. A backup to blob storage, followed by a restore in place or a restore to
a new VM, is the simplest and least expensive approach.
On the large instance platform, HANA DR is typically done with HSR. On an Azure VM,
HSR is also the most natural and native SAP HANA DR solution. Whether the source
deployment is single-instance or clustered, a replica of the source infrastructure is
required in the DR region. This DR replica will be configured after the primary HLI to VM
migration is complete. The DR HANA DB will register to the primary SAP HANA on VM
instance as a secondary replication site.
Migration strategy
In this document, we cover only the HANA System Replication approach for the
migration from HLI to Azure VM. Depending on the target storage solution deployed, the
process differs slightly. The high-level steps are described below.
) Important
Copying and data transfer can take hours depending on the HANA database size
and network bandwidth. The bulk of the copy process should be done in advance
of the primary HANA database downtime.
Locate existing application servers and the new HANA VM optimally. Then you won't
need to build new application servers, unless you want greater capacity.
When you build a new infrastructure to enhance service availability, your existing
application servers may become unnecessary. They can be shut down and deleted. If the
target VM hostname changed, and differs from the HLI hostname, adjust SAP
application server profiles to point to the new host. If only the HANA database IP
address has changed, update the DNS record to lead incoming connections to the new
HANA VM.
Acceptance test
Migration from HLI to VM makes no material change to the database content as
compared to a heterogeneous migration. Still, we recommend checking the
performance of the new setup.
Cutover plan
Although this migration is straightforward, it does involve the decommissioning of an
existing database. Careful planning to preserve the source system, with its content
and backup images, is critical in case a fallback is necessary. Good planning also
offers a speedier reversal.
Post migration
The migration job isn't done until we've safely decoupled any HLI-dependent services
and connectivity to ensure data integrity. Also, we recommend shutting down
unnecessary services. This section calls out a few of the more important items.
Next steps
Plan your SAP deployment.