20465C Designing A Data Solution With Microsoft® SQL Server®
20465C
Official Microsoft Learning Product
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations or warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not
responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2014 Microsoft Corporation. All rights reserved.
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
affiliates) and you. Please read them. They apply to your use of the content accompanying this agreement which
includes the media on which you received it, if any. These license terms also apply to Trainer Content and any
updates and supplements for the Licensed Content unless other terms accompany those items. If so, those terms
apply.
BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.
If you comply with these license terms, you have the rights below for each license you acquire.
1. DEFINITIONS.
a. Authorized Learning Center means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.
b. Authorized Training Session means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.
c. Classroom Device means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center's training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.
d. End User means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of a MPN Member, or (iii) a Microsoft full-time employee.
e. Licensed Content means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.
f. Microsoft Certified Trainer or MCT means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.
g. Microsoft Instructor-Led Courseware means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.
h. Microsoft IT Academy Program Member means an active member of the Microsoft IT Academy
Program.
i. Microsoft Learning Competency Member means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.
j. MOC means the Official Microsoft Learning Product instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.
k. MPN Member means an active Microsoft Partner Network program member in good standing.
l. Personal Device means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.
m. Private Training Session means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.
n. Trainer means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) an MCT.
o. Trainer Content means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers' use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train-the-trainer materials, Microsoft OneNote packs, classroom setup guide and
pre-release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.
2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
vii. you will only use qualified Trainers who have in-depth knowledge of and experience with the
Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware being taught for
all your Authorized Training Sessions,
viii. you will only deliver a maximum of 15 hours of training per week for each Authorized Training
Session that uses a MOC title, and
ix. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer resources
for the Microsoft Instructor-Led Courseware.
c.
ii. You may customize the written portions of the Trainer Content that are logically associated with
instruction of a training session in accordance with the most recent version of the MCT agreement.
If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private Training
Sessions, and (ii) all customizations will comply with this agreement. For clarity, any use of
"customize" refers only to changing the order of slides and content, and/or not using all the slides or
content; it does not mean changing or modifying any slide or content.
2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.
2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.
2.4 Third Party Notices. The Licensed Content may include third party content that Microsoft, not the
third party, licenses to you under this agreement. Notices, if any, for the third party content are included
for your information only.
2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplement the terms described in this agreement.
3.
a. Pre-Release Licensed Content. This Licensed Content's subject matter is based on the Pre-release version
of the Microsoft technology. The technology may not work the way a final version of the technology will
and we may change the technology for the final version. We also may not release a final version.
Licensed Content based on the final version of the technology may not contain the same information as
the Licensed Content based on the Pre-release version. Microsoft is under no obligation to provide you
with any further content, including any Licensed Content based on the final version of the technology.
b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or
through its third party designee, you give to Microsoft without charge, the right to use, share and
commercialize your feedback in any way and for any purpose. You also give to third parties, without
charge, any patent rights needed for their products, technologies and services to use or interface with
any specific parts of a Microsoft technology, Microsoft product, or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its technology,
technologies, or products to third parties because we include your feedback in them. These rights
survive this agreement.
c. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earliest ("Pre-release term").
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
• alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
• publicly display, or make the Licensed Content available for others to access or use,
• copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is "as is", we may not provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
a. United States. If you acquired the Licensed Content in the United States, Washington state law governs
the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws
principles. The laws of the state where you live govern all other claims, including claims under state
consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the Licensed Content in any other country, the laws of that
country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.
13.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n'accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des
consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties
implicites de qualité marchande, d'adéquation à un usage particulier et d'absence de contrefaçon sont exclues.
Elle s'applique également, même si Microsoft connaissait ou devrait connaître l'éventualité d'un tel dommage. Si
votre pays n'autorise pas l'exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires
ou de quelque nature que ce soit, il se peut que la limitation ou l'exclusion ci-dessus ne s'appliquera pas à votre
égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d'autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.
Revised July 2013
Acknowledgments
Microsoft Learning wants to acknowledge and thank the following for their contribution toward
developing this title. Their effort at various stages in the development has ensured that you have a good
classroom experience.
Martin Ellis is a Microsoft SQL Server subject matter expert and professional content developer at Content
Master, a division of CM Group Ltd. Martin is an MCSE and worked for many years as a Microsoft Certified
Trainer (MCT). He has been working with SQL Server since version 7.0, as a DBA, consultant, and trainer,
and has developed a wide range of technical collateral for Microsoft Corp., including several SQL Server
training courses for Microsoft Learning.
Graeme Malcolm is a Microsoft SQL Server subject matter expert and professional content developer at
Content Master, a division of CM Group Ltd. As a Microsoft Certified Trainer, Graeme has delivered
training courses on SQL Server since version 4.2; as an author, Graeme has written numerous books,
articles, and training courses on SQL Server; and as a consultant, Graeme has designed and implemented
business solutions based on SQL Server for customers all over the world.
Christian Bolton is the Technical Director for Coeo Ltd., a leading provider of SQL Server consulting and
managed support services in the UK and Europe. Christian is a Microsoft Certified Architect, Master and
MVP for SQL Server, the lead author of Wrox Professional SQL Server 2008 Internals and Troubleshooting
and contributor to Wrox Professional SQL Server 2005 Performance Tuning.
Contents
Module 1: Introduction to Enterprise Data Architecture
Module 2: Multi-Server Configuration Management
Module 3: Monitoring SQL Server 2014 Health
Module 4: Consolidating Database Workloads with SQL Server 2014
Module 5: Introduction to Cloud Data Solutions
Module 6: Introduction to Microsoft Azure
Module 7: Microsoft Azure SQL Database
Module 8: SQL Server in Microsoft Azure Virtual Machines
Module 9: Introduction to High Availability in SQL Server 2014
Module 10: Clustering with Windows Server and SQL Server 2014
Module 11: AlwaysOn Availability Groups
Module 12: Planning High Availability and Disaster Recovery
Lesson 1: High Availability and Disaster Recovery with SQL Server 2014
Module 13: Replicating Data
Lab Answer Keys
This section provides a brief description of the course, audience, suggested prerequisites, and course
objectives.
Course Description
The focus of this five-day instructor-led course is on planning and implementing database solutions by
using SQL Server 2014. It describes how to consolidate SQL Server workloads, how to work with both
on-premises and Microsoft Azure cloud-based solutions, and how to plan and implement high availability
and disaster recovery solutions.
Audience
This course is intended for database professionals who need to plan, implement, and manage database
solutions. Primary responsibilities include:
Student Prerequisites
In addition to their professional experience, students who attend this training should already have the
following technical knowledge:
Managing databases
Some basic knowledge of Azure technologies and concepts around cloud computing
Course Objectives
After completing this course, students will be able to:
Describe the considerations for consolidating workloads with SQL Server 2014.
Describe high availability technologies in SQL Server 2014 and implement log shipping.
Describe Windows Server Failover Clustering and implement an AlwaysOn Failover Cluster
Instance.
Course Outline
The course outline is as follows:
Module 1, Introduction to Enterprise Data Architecture
Module 2, Multi-Server Configuration Management
Module 3, Monitoring SQL Server 2014 Health
Module 4, Consolidating Database Workloads with SQL Server 2014
Module 5, Introduction to Cloud Data Solutions
Module 6, Introduction to Microsoft Azure
Module 7, Microsoft Azure SQL Database
Module 8, SQL Server in Microsoft Azure Virtual Machines
Module 9, Introduction to High Availability in SQL Server 2014
Module 10, Clustering with Windows Server and SQL Server 2014
Module 11, AlwaysOn Availability Groups
Module 12, Planning High Availability and Disaster Recovery
Module 13, Replicating Data
Course Materials
Course Handbook: a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly-focused format, which is essential for an effective in-class learning
experience.
Lessons: guide you through the learning objectives and provide the key points that are critical to
the success of the in-class learning experience.
Labs: provide a real-world, hands-on platform for you to apply the knowledge and skills learned
in the module.
Module Reviews and Takeaways: provide on-the-job reference material to boost knowledge
and skills retention.
Course Companion Content: searchable, easy-to-browse digital content with integrated premium
online resources that supplement the Course Handbook. You can download this content from the
http://www.microsoft.com/learning/companionmoc site.
Modules: include companion content, such as questions and answers, detailed demo steps and
additional reading links, for each lesson. Additionally, they include Lab Review questions and
answers and Module Reviews and Takeaways sections, which contain the review questions and
answers, best practices, common issues and troubleshooting tips with answers, and real-world
issues and scenarios with answers.
Resources: include well-categorized additional resources that give you immediate access to the
most current premium content on TechNet, MSDN, or Microsoft Press.
Student Course files: includes the Allfiles.exe, a self-extracting executable file that contains all
required files for the labs and demonstrations. You can download these files from the
http://www.microsoft.com/learning/companionmoc site.
Course evaluation: at the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.
This section provides the information for setting up the classroom environment to support the business
scenario of the course.
Role
20465C-MIA-DC
20465C-MIA-SQL
Virtual machine
Role
20465C-MIA-FCI-CLUST1
20465C-MIA-FCI-CLUST2
20465C-MIA-FCI-CLUST3
20465C-MIA-AG-CLUST1
20465C-MIA-AG-CLUST2
20465C-MIA-AG-CLUST3
MSL-TMG1
Software Configuration
The following software is installed:
Course Files
The files associated with the labs in this course are located in the D:\Labfiles folder on the 20465C-MIA-SQL, 20465C-MIA-FCI-CLUST1, and 20465C-MIA-AG-CLUST1 virtual machines.
Classroom Setup
Each classroom computer will have the same virtual machines configured in the same way.
Processor: 64-bit Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) processor
(2.8 GHz dual core or better recommended)
Hard Disk: Dual 500 GB hard disks 7200 RPM SATA or faster (striped)
RAM: 16 GB or higher
Network Adapter
In addition, the instructor computer must be connected to a projection display device that supports SVGA
1024 x 768 pixels, 16-bit color.
Module 1
Introduction to Enterprise Data Architecture
Contents:
Module Overview
Module Overview
As organizations grow to enterprise scale, their IT infrastructure requirements become more complex and
the network environment often includes an increasing number of servers, client computers, network
segments, and other components. Because data is fundamental to most IT operations, careful thought
must be given to the provisioning and management of databases across the enterprise.
Objectives
After completing this module, you will be able to:
Describe key considerations for data storage and management in an enterprise infrastructure.
Use the Microsoft Assessment and Planning (MAP) Toolkit to assess an existing data infrastructure.
Lesson 1
When planning or assessing an enterprise infrastructure architecture, you must consider data storage and
management for all the applications and services that support the enterprise.
Lesson Objectives
After completing this lesson, you will be able to:
Dedicated data centers. In a very small organization, it is not uncommon for IT infrastructure, such
as servers or network switches, to be located in the general office environment, often under the desk
of the person responsible for IT administration. As organizations grow to a medium size and require
more IT services, server hardware is usually stored in a server room (often a closet). In large
enterprises, these server rooms are often replaced by dedicated data centers with redundant power
supply and specialist cooling capabilities to keep multiple racks of servers at optimal operating
temperature.
Greater requirements for compliance and standardization. Typically, large organizations have
greater requirements for compliance to internal, industry, and legal policies. This includes the need to
manage and audit data access, data retention, and data storage location. Additionally, the large
number of servers and computers in use generally leads organizations to standardize client and server
configuration to improve manageability and simplify provisioning of new computers.
High availability and disaster recovery. Application availability is a priority for all organizations, but
at the enterprise level an application may support thousands of employees or customers performing
core business operations. In some cases, each second of downtime might have a significant financial
impact on the business. Ensuring that business-critical services are available is a major concern for IT,
and particularly for database system architects and administrators. Additionally, in the event of a
failure, the database must be recoverable as completely and quickly as possible within the agreed
business SLA.
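The financial impact of downtime is usually framed as an availability target in the SLA. As an illustrative sketch (the SLA percentages below are example figures, not values from this course), an availability percentage can be converted into a yearly downtime budget:

```python
# Illustrative only: converting an availability SLA into a yearly downtime budget.
# The SLA percentages used here are example figures, not values from this course.

HOURS_PER_YEAR = 365.25 * 24

def downtime_hours_per_year(availability_pct: float) -> float:
    """Return the maximum downtime per year allowed by an availability SLA."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {downtime_hours_per_year(sla):.2f} hours/year")
```

This kind of calculation helps translate an abstract "number of nines" into a concrete maintenance and recovery window that architects and administrators can plan against.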
Security and compliance. All organizations should take data security seriously, but in large
enterprises there are often tightly-defined policies for physical and virtual access authentication and
authorization. Additionally, large organizations often need to adhere to compliance policies that
require data access auditing and encryption of sensitive data.
Common Challenges
Managing database services in a large enterprise
presents significant challenges, many of which are
not applicable in smaller organizations. These
challenges include:
Inconsistent database software and versions. The unmanaged proliferation of database servers
often leads to an environment that includes multiple database management systems, with multiple
versions of each system installed and varying levels of updates applied. A great aid to ensuring
manageability across the enterprise is to enforce consistency in terms of database management
software, version, and configurationand to ensure that a managed regime for applying server
updates is in place to maintain this consistency.
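Enforcing consistency starts with an accurate picture of what is deployed. As a sketch (this query is not part of the course labs), the built-in SERVERPROPERTY function can be run against each instance to record the version, service-pack level, and edition that an inventory needs:

```sql
-- Illustrative audit query (not from this course's labs): capture the
-- version, update level, and edition of the instance it runs against.
SELECT
    SERVERPROPERTY('MachineName')    AS MachineName,
    SERVERPROPERTY('InstanceName')   AS InstanceName,
    SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 12.0.x for SQL Server 2014
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- RTM or SPn
    SERVERPROPERTY('Edition')        AS Edition;
```

Collecting these values centrally makes it straightforward to spot instances that have drifted from the standard build.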
Geographic distribution of server resources. Some large organizations span the globe, and servers
and other hardware resources may be deployed where the business operation that uses them is
located. This can make remote management of server resources challenging, and lead to difficulties in
planning maintenance periods in "out of hours" times because of the international time zones in
which different sites are located.
Application ownership. In larger organizations, it can be easy to lose track of who is responsible for
departmental applications that were developed outside of IT control. For example, if an employee
develops an application and subsequently leaves the organization or moves to a different department
or role, it is important to ensure that ownership of the application is formally passed on to someone
else.
Application security. Unfortunately, it is not uncommon for applications that are deployed outside
of IT control to have poor security configurations. This can include applications that use the sa SQL
Server login to connect to databases, particularly if the password is left blank or is not sufficiently
complex.
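One way to find such weak configurations on instances you administer is SQL Server's built-in PWDCOMPARE function. The query below is an illustrative sketch, not a check taken from this course, and requires appropriate server-level permissions to read password hashes:

```sql
-- Illustrative check (not from this course's labs): list SQL Server logins
-- whose password is blank or matches the login name.
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE('', password_hash) = 1      -- blank password
   OR PWDCOMPARE(name, password_hash) = 1;   -- password equals login name
```

Any logins returned, particularly sa, should have their passwords changed immediately and the owning applications reconfigured.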
Lesson 2
The starting point for planning changes to an enterprise infrastructure is to discover what servers and
applications already exist, and evaluate how best to adapt the infrastructure to support the required
business operations while standardizing and consolidating existing servers and network resources.
Lesson Objectives
After completing this lesson, you will be able to:
Use the MAP Toolkit to gather information about existing database servers.
Plan server and client upgrades and migrations to virtualized or public cloud environments.
Reference Links: You can download the MAP Toolkit from www.microsoft.com/map.
Details of any Windows or Linux computers on which Oracle databases are running.
Usage statistics for all instances of SQL Server that require user licensing.
Collecting Data
To perform the data collection process, the MAP Toolkit provides a wizard in which you must select the
specific information to be collected and provide details of the environment to be searched and credentials
to be used when interrogating the servers that are discovered.
Server Discovery
The MAP Toolkit can use the following techniques to discover servers:
To use Active Directory Domain Services, you must specify domain credentials that can be used to browse
the directory.
Computer Interrogation
After the MAP Toolkit has discovered one or more servers, it interrogates each one to obtain details about
the applications installed on it. When searching for details in SQL Server and Oracle database servers, you
must specify credentials that can be used to connect to the database server. This includes Windows
credentials for SQL Server instances that use Integrated Windows authentication, and native credentials
for Oracle and SQL Server instances that use SQL Server authentication. You can specify multiple
credentials and define the order in which they should be tried when interrogating a discovered server.
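Conceptually, this ordered-credential approach can be sketched as follows. This is an illustration of the general technique only; the connect() callback and the credential names are hypothetical and not part of the MAP Toolkit's actual API:

```python
# Conceptual sketch only: trying an ordered list of credentials against each
# discovered server. The connect() callback and credential names here are
# hypothetical, not part of the MAP Toolkit's actual API.

def interrogate(server, credentials, connect):
    """Try each credential in order; return details from the first that works."""
    for cred in credentials:
        try:
            return connect(server, cred)
        except PermissionError:
            continue  # this credential was rejected; try the next one in order
    return None  # no credential succeeded; the server is reported as inaccessible

# Example: the second credential in the list succeeds for this server.
def fake_connect(server, cred):
    if cred != "sql-native":
        raise PermissionError
    return f"{server}: inventory collected using {cred}"

result = interrogate("MIA-SQL", ["windows-integrated", "sql-native"], fake_connect)
print(result)  # MIA-SQL: inventory collected using sql-native
```

The order you define matters: placing the most widely valid credential first reduces the number of failed authentication attempts logged on each server.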
SQL Server Usage Tracker: This includes details of all instances of SQL Server for which software
licenses are required, together with license information obtained from the SQL Server instance
configuration. The report also lists the number of unique users and client devices that have
opened connections to the instances.
Detailed SQL Server reports. This option includes the standard SQL Server reports, and an additional
report that shows details of the individual databases that are hosted in each instance of SQL Server.
For each database, the report includes details of its size, the number of tables it contains, as well as its
owner, compatibility level, files, and filegroups.
Azure VM readiness report. You can use this option to generate a report that indicates the
readiness of each database server to be migrated to a virtual machine hosted in Windows Azure.
Oracle report. You can use this option to generate a report that shows details of each Oracle
instance discovered on Windows and Linux servers in your environment.
Generate reports.
Demonstration Steps
Install the MAP Toolkit
1.
Ensure that the MSL-TMG, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. On MIA-SQL, start Internet Explorer and browse to www.microsoft.com/map.
3. Click the link to download the latest version of the MAP Toolkit.
4. Follow the instructions to download and install the MAP Toolkit on the MIA-SQL server.
5. On the Start screen, type MAP and then click Microsoft Assessment and Planning Toolkit.
6. In the Microsoft Assessment and Planning Toolkit dialog box, in the Create or select a database area, in the Name field, type MAPData, and then click OK.
2. In the Inventory and Assessment Wizard dialog box, on the Inventory Scenarios page, select SQL Server with Database Details. Then click Next.
3. On the Discovery Methods page, ensure that only Use Active Directory Domain Services (AD DS) is selected, and click Next.
4. On the Active Directory Credentials page, enter the following details and click Next:
o Domain: adventureworks.msft
o Password: Pa$$w0rd
5. On the Active Directory Options page, ensure that Find all computers in all domains, containers, and organizational units is selected, and click Next.
6. On the All Computers Credentials page, click Create. Add the following account for WMI and SQL Windows technologies and click Save. Then click Next:
o Password: Pa$$w0rd
7. On the Summary page, click Finish and wait for data collection to complete. Then click Close.
Generate Reports
1. In the MAP Toolkit, on the Database tab, view the information in the SQL Server Products tile.
2. Click the SQL Server Products tile, and view the summary details that are displayed.
3. In the Options area, click Generate SQL Server Database Details Reports.
4. When the reports have been generated, click Close and view the contents of the reports folder.
5. Open each of the reports in Microsoft Excel and view the details they contain.
You have been asked to review the existing database server infrastructure of Adventure Works Cycles. You
plan to do this using the MAP Toolkit.
Objectives
After completing this lab, you will be able to:
In this exercise, you will use the MAP Toolkit to discover database servers in the adventureworks.msft
domain.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Install the MAP Toolkit
3. Collect Inventory Data
4. View the Results
1. Ensure that the MSL-TMG, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. On MIA-SQL, start Internet Explorer, browse to www.microsoft.com/map, and find the latest version of the MAP Toolkit.
2. Start the Microsoft Assessment and Planning Toolkit. If you are prompted to create a database, create an inventory database named MAPData.
2. Use the Active Directory Domain Services (AD DS) discovery method, specifying the following credentials:
o Domain: adventureworks.msft
o Password: Pa$$w0rd
Configure the inventory and ensure that Find all computers in all domains, containers, and
organizational units is selected.
Use the following information to create computer credentials for WMI and SQL Windows technologies:
o Password: Pa$$w0rd
In the MAP Toolkit, on the Database tab, review the information in the SQL Server Products tile.
On the SQL Server Products tile, view the summary details that are displayed.
Results: At the end of this exercise, you will have used the MAP Toolkit to discover details of SQL Server
instances in the domain.
You have used the MAP Toolkit to collect inventory information about the database servers in your
environment. Now you must examine the reports generated by the MAP Toolkit to determine the various
versions, editions, and components of SQL Server that your environment contains.
The main tasks for this exercise are as follows:
1. Review the SQL Server Assessment Report
2. Review the SQL Server Database Details Report
3. Review the SQL Server Usage Tracker Report
2. View the information on the Summary worksheet, noting the number of instances of each major SQL Server component that was found in the Adventure Works infrastructure.
3. View the information on the DatabaseInstances worksheet, noting the various versions, service pack levels, and editions of the database engine that were found.
4. View the information on the Components worksheet, noting the various versions, service pack levels, and editions of other SQL Server components that were found.
2. View the information on the Overview worksheet, noting the number of instances and databases that were found in the Adventure Works infrastructure.
3. View the information on the SQLServerSummary worksheet, noting the number of instances and databases that were found on each server.
4. View the information on the DatabaseSummary worksheet, noting the details of the individual databases that were found.
5. View the information on the DBInstanceSummary worksheet, noting the details of the database engine instances that were found.
2. View the information on the Overview worksheet, noting the number of instances of each licensed SQL Server product that were found in the Adventure Works infrastructure.
3. View the information on the SQL Server License Tracking worksheet, noting the license details that were found for each server.
4. View the information on the SQL Server Summary worksheet, noting the details for each SQL Server product that was found.
5. View the information on the SQL Server Instance Details worksheet, noting the details of the SQL Server instances that were found.
6. View the information on the Client Access Summary worksheet, noting the details of SQL Server access by the users and devices that were found.
Results: At the end of this exercise, you will have examined MAP Toolkit SQL Server reports.
Question: How might the information in the reports generated by the MAP Toolkit be useful
to an enterprise data architect?
In this module, you have learned about some of the key characteristics that differentiate enterprise
infrastructure environments from those of small to medium organizations. All organizations vary, and as a
database professional, you must understand the specific environment in which you work and the data
infrastructure services that you are responsible for providing to the business.
Review Question(s)
Question: Based on your own experience, what challenges face a database administrator or
data architect in a large enterprise?
Module 2
Multi-Server Configuration Management
Module Overview
By using the MAP Toolkit, you can discover which database servers are present in the enterprise, and how
they are configured. To ensure compliance and manageability across all these database servers, it is
important to standardize and enforce configuration settings. SQL Server 2014 includes the Policy-Based
Management tool to enable you to achieve this. This module describes Policy-Based Management, and
explains how you can use it, together with enterprise configuration management tools in Microsoft
System Center, to aid enterprise database server management.
Objectives
After completing this module, you will be able to:
Describe how Microsoft System Center can be used to monitor and manage SQL Server.
Lesson 1
Policy-Based Management
This lesson describes the considerations for planning and implementing Policy-Based Management.
Lesson Objectives
After completing this lesson, you will be able to:
Create policies on a single server and use them to evaluate local and remote servers for compliance.
Centrally manage and monitor servers to easily identify servers that are not in compliance with
policies.
Use pre-defined best-practice policies to accelerate and simplify the process of policy implementation
and take the guesswork out of policy creation.
Facets
A facet is an object that contains a group of related
properties. The properties in a facet describe the
characteristics of SQL Server components, such as
databases and filegroups, and their configuration.
Some facets refer to groupings of features or
functionality, such as Server Configuration and
Surface Area Configuration, rather than to
specific SQL Server components.
When you create and configure policies, you use facets to specify the settings and behaviors that you
want to manage. For example, to create a policy that governs server security, you could use the Server
Security facet, and specify facet properties such as LoginMode and XPCmdShellEnabled. All facets are
system-defined; you cannot create user-defined facets.
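You can list the available facets by querying the Policy-Based Management views in the msdb database. The following query is a sketch; it assumes the syspolicy_management_facets view described in SQL Server Books Online:

```sql
-- List all available Policy-Based Management facets by name.
-- Assumes the msdb view dbo.syspolicy_management_facets (see Books Online).
USE msdb;

SELECT name
FROM dbo.syspolicy_management_facets
ORDER BY name;
```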
Conditions
You create conditions to define the settings that you want to manage. Each property in a facet has its own
set of configurable values. For example, for the LoginMode property of the Server Security facet, the
possible values are Normal, Integrated, Mixed, and Unknown. When you create a condition, you select
a facet and create an expression that defines the value of a property of that facet. You can include
multiple properties of the facet in each condition.
Policies
A policy contains a list of conditions and targets against which the conditions will be evaluated. It also
defines a mode of evaluation. There are several pre-defined system policies, and you can also create your
own as required.
Targets
Targets are the entities against which policies are evaluated. Targets include SQL Server Database Engine
instances, databases, tables, and indexes. When you create a policy, you define the targets as a
hierarchical list called a target set. For example, a target set could include all the application roles in every
database.
Policy categories
You can group policies together into categories that have two functions:
They can be used to define the scope of policies that relate to databases. When you create a policy
category, you can use the Mandate option to apply the policy category (and the policies it contains)
to all databases. When you do not use the Mandate option, you can subscribe individual databases
to a policy category.
Evaluation
You can evaluate targets against policies by using four different evaluation modes, which you define as
part of the policy configuration. These modes are:
On change: prevent. This mode rolls back any changes that would cause the target to be in violation
of the policy conditions.
On change: log only. This mode records changes that violate policy conditions, but it does not roll
back the changes.
On schedule. This mode evaluates policies according to a schedule defined in a SQL Server Agent job.
On demand. This mode evaluates policies when a user manually starts the evaluation process.
Note: Not all facets support On change: prevent and On change: log only evaluation
modes.
You can create alerts to inform designated operators when policy violations occur. Each evaluation mode
has a message number associated with it. You can create SQL Server Agent alerts that monitor the event
log for these message numbers and forward a message to the appropriate operator.
Reference Links: For more information about configuring alerts for policy violations, see
the topic Configure Alerts to Notify Policy Administrators of Policy Failures in SQL Server Books
Online.
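As a sketch, the following Transact-SQL creates such an alert with the SQL Server Agent stored procedures. The message ID used here and the operator name are assumptions; verify the message numbers for each evaluation mode in Books Online before using them:

```sql
USE msdb;

-- Alert on automated policy evaluation failures.
-- 34052 is assumed here to be the message ID for scheduled policy failures;
-- confirm the ID for each evaluation mode in SQL Server Books Online.
EXEC dbo.sp_add_alert
    @name = N'Policy failure (on schedule)',
    @message_id = 34052,
    @severity = 0,
    @enabled = 1;

-- Notify a hypothetical operator named 'DBA Team' by e-mail.
EXEC dbo.sp_add_notification
    @alert_name = N'Policy failure (on schedule)',
    @operator_name = N'DBA Team',
    @notification_method = 1;  -- 1 = e-mail
```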
Evaluation modes
When choosing an evaluation mode, try to avoid
placing unnecessary extra demands on your system.
Note: Not all evaluation modes will be
available for every policy. The available evaluation
modes will vary, depending on which conditions
and targets you have selected.
In particular, scheduled automatic evaluation can adversely affect system performance. You should ensure
that you configure the schedule so that it minimizes the impact on the rest of the system. If your servers
are locked down and very few administrators can make high-level changes, there is little point in
scheduling frequent policy validation, unless this is required by your organization's compliance rules.
Best-practice policies
SQL Server includes a set of pre-configured best-practice policies that you can use to quickly and
efficiently ensure compliance with a wide range of common requirements. You can view these best-practice policies in the C:\Program Files (x86)\Microsoft SQL
Server\120\Tools\Policies\DatabaseEngine\1033 folder, assuming that you used the default folder
locations during installation of the SQL Server Database Engine. The best-practice policies are not
configured by default, but by using SQL Server Management Studio (SSMS), you can select the policies
that you require, and import and configure them according to your needs.
The built-in best-practice policies for the SQL Server Database Engine include:
Guest Permissions.
Trustworthy Database.
When you create a policy, SQL Server stores it in the msdb system database. Consequently, when you
make changes to policies, you should ensure that you back up msdb. You can view the policy data in the
msdb database by using the built-in system views, including: syspolicy_conditions, syspolicy_policies,
and syspolicy_policy_execution_history.
Reference Links: For more information about Policy-Based Management system views, see
the Policy-Based Management Views (Transact-SQL) topic in SQL Server Books Online.
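For example, the following query (a sketch) joins two of these views to list each policy together with the condition it checks:

```sql
USE msdb;

-- List policies with their check conditions, enabled state, and evaluation mode.
SELECT p.name AS policy_name,
       c.name AS condition_name,
       p.is_enabled,
       p.execution_mode
FROM dbo.syspolicy_policies AS p
JOIN dbo.syspolicy_conditions AS c
    ON p.condition_id = c.condition_id
ORDER BY p.name;
```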
Demonstration Steps
View Best-Practice Policies
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running, and then log on
to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Start SQL Server Management Studio.
3. In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click
Connect.
4. In Object Explorer, expand Management, expand Policy Management, right-click Policies, and then
click Import Policy.
5. In the Import dialog box, next to the Files to import box, click the ellipsis button.
6. In the Select Policy dialog box, double-click SQL Server Best Practices, double-click
DatabaseEngine, and then double-click 1033.
Creating conditions
You can create conditions by using SSMS. To create a new condition, perform the following actions:
1. In Object Explorer, expand the Management node, expand Policy Management, right-click
Conditions, and then click New Condition.
2. In the New Condition dialog box, in the Name field, enter a name for the condition.
3. In the Facet list, select the facet that you want the condition to reference.
4. In the Expression box, construct the expression that specifies the required values of the facet
properties. You can add multiple expressions, and use the AND and OR operators to combine them.
5. Click OK.
You can also use the sp_syspolicy_add_condition stored procedure in the msdb database to create
conditions.
Creating policies
You can create policies by using SSMS. Policies are not enabled by default, so when you create a policy,
you must enable it before you can use it. To create a new policy, perform the following actions:
1. In Object Explorer, expand the Management node, expand Policy Management, right-click
Policies, and then click New Policy.
2. In the New Policy dialog box, in the Name field, enter a name for the policy.
3. Check the Enabled check box if you want to enable the policy immediately.
4. In the Check condition list, select the condition that you want the policy to evaluate.
5. In the Against targets box, select the target types for the policy.
6. In the Evaluation Mode box, select the required evaluation mode for this policy. If you select the On
schedule evaluation mode, you can choose an existing schedule to use or create a new one.
7. In the Server restriction field, select a condition that limits the servers to which the policy will apply,
or leave the configured value as None if no restriction is required.
8. Click OK.
You can also use the sp_syspolicy_add_policy stored procedure in the msdb database to create policies.
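A minimal sketch of creating a policy this way is shown below. The policy and condition names are placeholders, and depending on the policy's targets, additional parameters (such as an object set) may be required; check the sp_syspolicy_add_policy topic in Books Online for the full parameter list:

```sql
USE msdb;

DECLARE @policy_id INT;

-- Create an on-demand policy from an existing condition.
-- 'Stored Procedure Name' is assumed to be a condition that already exists.
EXEC dbo.sp_syspolicy_add_policy
    @name = N'Check Stored Procedure Names',   -- placeholder policy name
    @condition_name = N'Stored Procedure Name',
    @execution_mode = 0,                       -- 0 = On demand
    @policy_id = @policy_id OUTPUT;

SELECT @policy_id AS new_policy_id;
```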
1. In Object Explorer, expand the Management node, expand Policy Management, right-click
Policies, and then click Import Policy.
2. In the Import dialog box, next to the Files to import box, click the ellipsis button.
3. In the Select Policy dialog box, double-click SQL Server Best Practices, double-click
DatabaseEngine, and then double-click 1033.
4. Click the policy that you want to import, and then click Open.
5. In the Import dialog box, in the Policy State list, click the state that you want to use for the policy.
6. Click OK.
You can also initiate evaluation for an individual target, such as a database, by right-clicking the target
in Object Explorer, clicking Policies, and then clicking Evaluate. You can then select one or more of the
policies that apply to the server to evaluate, and analyze the results as described above.
After the central management server is defined, you can register individual servers to be managed with it.
You can also define server groups under the central management server, and use them to group together
servers with similar management requirements.
You can evaluate multiple policies against multiple servers by using the Registered Servers window in
SSMS. Evaluating policies against the central management server applies the policy to all registered
servers under that server. Alternatively, you can evaluate policies for server groups and individual servers
under the central management server.
Note: When a server contains multiple instances of SQL Server, a local server group is
created automatically, even if you have not designated a central management server. You can
apply policies to this local group by using the same technique as for server groups under a central
management server.
Lesson 2
Microsoft System Center is a suite of technologies that enables enterprise IT administrators to manage
multiple, distributed IT services using automated provisioning, configuration, and monitoring capabilities.
System Center is increasingly integrated with Microsoft Windows Server, and together the two products
form the foundation of the Microsoft platform for IT infrastructure both on-premises and in hybrid
environments that combine virtualized private and public cloud services.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the key features and capabilities of Microsoft System Center 2012 R2.
Virtual Machine Manager: a management tool for virtualized data centers and private clouds that
contain virtual machines and networks.
App Controller: a virtual machine provisioning and management tool for virtual machines in private
and public clouds.
Data Protection Manager: a centralized tool for managing data backup and recovery, including
data on the file system, in SQL Server databases, and in Exchange Server mailboxes.
Service Manager: a platform for automating IT incident and problem resolution processes, as well
as managing change control.
System Center combines these components in different ways to provide the following capabilities:
Infrastructure provisioning. You can use System Center to provision physical, virtual, and cloud-based infrastructure.
Infrastructure monitoring. System Center provides a centralized solution for monitoring servers
across the enterprise to ensure reliable operations.
Application performance monitoring. You can use System Center to monitor applications and
provide diagnostic information to help troubleshoot and resolve operational and performance issues.
IT service management. With System Center, IT departments can centralize IT services and manage
processes to ensure a consistent service to the business that meets SLA requirements and ensures
efficient management of service requests.
To manage SQL Server instances, System Center requires the installation of the System Center
Management Pack for SQL Server, which you can download from the Microsoft website. The management
pack provides support for monitoring SQL Server 2005, 2008, 2008 R2, and 2012 instances, and includes
dashboard and diagram views for performance data collection reports and health monitoring.
Managers at Adventure Works want to streamline the management and standardize the configuration of
SQL Server instances. You have been tasked with implementing this for the instance that hosts the
HumanResources database. To achieve this, you plan to use policy-based management, taking
advantage of the best-practice policies where possible to achieve your goals.
You will plan which policies to use, implement policy-based management, and then test the SQL Server
instances for compliance.
Objectives
After completing this lab, you will be able to:
You have gathered a list of configuration requirements that you want to implement by using policy-based
management. To implement these requirements, you plan to use the built-in best-practice policies
wherever possible, so you will browse them to identify the appropriate ones to configure. Where there is
no best-practice policy that meets your needs, you will identify which policies you should create
manually.
The list below outlines the configuration requirements:
Minimization of downtime
o All database data files and log files should be on separate drives.
o SQL Server should not automatically reclaim unused space from database files.
Security
Standardization
o All stored procedures should be named with a usp prefix. If the supplied stored procedure name does
not comply with this requirement, SQL Server should prevent the creation of the stored procedure.
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Determine the best-practice policies that you should import, and identify any requirements that you
cannot satisfy by using the best-practice policies.
3. Start SQL Server Management Studio, connect to the Database Engine on the MIA-SQL instance by
using Windows Authentication, and then import the required best-practice policies.
Create a condition with the following settings:
o Name: HR Database
o Facet: Database
o Field: @Name
o Operator: =
o Value: HumanResources
Create a second condition with the following settings:
o Name: Stored Procedure Name
o Facet: Stored Procedure
o Field: @Name
o Operator: LIKE
o Value: usp%
Create a new policy called Stored Procedure Names in Human Resources DB that uses the Stored
Procedure Name condition to evaluate the HR Database target. Use the On Demand evaluation
mode.
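Before evaluating the policy, you can preview which existing stored procedures would violate the naming condition. This query is a sketch that assumes a database named HumanResources, as in the lab scenario:

```sql
USE HumanResources;  -- lab database; substitute your own

-- List user stored procedures whose names do not start with the usp prefix.
SELECT SCHEMA_NAME(schema_id) AS schema_name,
       name
FROM sys.procedures
WHERE name NOT LIKE N'usp%';
```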
Having configured the policies, you will now test the MIA-SQL and MIA-SQL2 instances for compliance.
The main tasks for this exercise are as follows:
1. Test a Policy
2. Change the Policy to Prevent Non-compliance
3. Test Policy Compliance Across Multiple Instances
4. Remove Policies
View the detailed results, noting the name of the stored procedure that does not comply with the
policy.
Change the Stored Procedure Names in Human Resources DB policy to force any attempts to
create stored procedures with non-compliant names to fail.
In the Results pane, review the message that explains why the CREATE PROCEDURE Transact-SQL
statement failed.
Create a new server group named Adventure Works DB Servers that includes the MIA-SQL and
MIA-SQL\SQL2 database engine instances.
Evaluate the following policies for the Adventure Works DB Servers server group:
Guest Permissions
Trustworthy Database
Guest Permissions
Trustworthy Database
Review Question(s)
Question: What configuration requirements do you have for SQL Server instances and
databases in your organization?
Question: Do you use the built-in best-practice policies in your workplace? If so, which ones
have you imported?
Module 3
Monitoring SQL Server 2014 Health
Module Overview
The Microsoft SQL Server database engine is capable of running for very long periods without the
need for administrator intervention. However, it is important to monitor SQL Server instances to identify
potential problems early, and to help with ongoing planning. For a single instance of SQL Server,
monitoring is a relatively straightforward process. However, in larger organizations which may have many
SQL Server instances distributed across a large and varied infrastructure, monitoring can be a much more
complex affair. SQL Server includes built-in features that aid administrators in monitoring large numbers
of instances; additionally, System Center includes features and add-ons that you can use to monitor SQL
Server instances. This module describes Data Collector and the SQL Server Utility Control Point (UCP), two
features of SQL Server 2014 that enable you to perform in-depth health monitoring across the enterprise.
Objectives
After completing this module, you will be able to:
Describe the options for multi-server health monitoring in SQL Server 2014.
Lesson 1
To ensure the ongoing health of your servers and the performance of the applications that depend on
them, it is important to implement a robust monitoring solution. This lesson describes the value of health
monitoring, and the options for implementing a SQL Server-based monitoring system in a multi-server
infrastructure.
Lesson Objectives
After completing this lesson, you will be able to:
The number of SQL Server instances that require monitoring, and the distribution of these instances
and databases across many servers, both virtual and physical. It can be extremely time-consuming to
have to connect to individual servers to assess resource utilization, so there is great value in being
able to centrally monitor server health.
Isolating the cause of a problem beyond the level of the server or instance. While it is useful to know
that server CPU utilization is too high, it is more helpful to know which instances or applications are
causing the increase in CPU usage.
Identifying potential problems before they become serious. It is much better to be able to spot that a
database log file is close to running out of space than it is to respond to the problems that will arise
when it has already reached its limits.
Identifying areas where you are making inefficient use of resources, so that you can reallocate them
accordingly.
The Data Collector engine runs on the server instances for which you want to collect performance data.
Data Collector captures data in one or more data collection sets, and then sends it to the server that hosts
the MDW. A collection set is a defined set of collection items, such as performance counters, SQL Server
Profiler traces, and Transact-SQL statements. Collection sets use SQL Server Agent jobs to gather data.
Each collection set has three schedules that you can define to control the following events:
Frequency of data capture. Collection sets take snapshots of data and cache it locally until they
upload it to the MDW. This schedule determines the frequency of snapshots.
Upload frequency. This schedule determines when data is uploaded from the local cache to the
MDW.
Data retention. This schedule defines how long the data should be retained in the MDW before being
deleted.
You can use the built-in reports in SSMS to view and assess the collected data. The Management Data
Warehouse Overview report is the main one, providing links to detailed reports for each monitored
instance.
SQL Server Utility uses the same underlying mechanisms to gather performance data as the Data
Collector. However, SQL Server Utility stores the collected data in a database called the Utility
Management Data Warehouse (UMDW), which has a different schema to that of the MDW used by Data
Collector. The data that SQL Server Utility collects is less detailed than that in Data Collector, which
reflects the way that you would typically use the two features. Data Collector is useful for looking deep
into statistics to isolate the causes of performance issues. In the MDW, for example, you can examine
query plans for individual queries. SQL Server Utility, on the other hand, is more useful for everyday
monitoring of resource usage at a high level. The dashboard view makes it easy for administrators to
quickly identify potential problems before they become serious, enabling a more proactive approach to health monitoring.
Microsoft System Center 2012 - Operations Manager is a monitoring tool that enables you to centrally
monitor large numbers of servers, devices, services, and applications across your physical and virtual
infrastructure from a single console.
Operations Manager uses management groups to administer monitored entities and store data. A
management group consists of one or more management servers, an operational database, which
contains the monitoring data on a short-term basis, and a data warehouse, which stores data for historical
analysis.
You can configure Operations Manager to send alerts and also use the Reporting Server component to
create reports about the data that you collect.
System Center Monitoring Pack for SQL Server provides specialist discovery and monitoring functionality
for SQL Server, enabling it to monitor the health of SQL Server instances, databases, services such as SQL
Server Agent, and SQL Server Agent jobs. The monitoring pack supports SQL Server 2014 features such as
Policy-Based Management, database replication and AlwaysOn Availability Groups.
Reference Links: You can download the System Center Monitoring Pack for SQL Server
from the Microsoft download website.
System Center Advisor is a cloud-based service, available through the Software Assurance program, which
is focused on proactive monitoring. The service collects performance and configuration data from your
on-site infrastructure, including SQL Server instances, and uploads it to the cloud. The data is then
analyzed to identify potential problems, and you are provided with recommendations about configuration
changes you can make to improve server health and comply with best practices. System Center Advisor
also enables customer support teams to better handle individual support cases because they have access
to the data and can use it to troubleshoot the problem.
Reference Links: For more information, see the System Center Advisor website.
Lesson 2
Data Collector
Data Collector enables you to simplify health monitoring by centralizing the storage and analysis of
performance data. It includes a central data warehouse for holding performance data, jobs for collecting
and uploading the data to the data warehouse, and a set of built-in reports that you can use to analyze
the data. You can also write your own custom reports by using SQL Server Reporting Services, create
custom reports in SSMS, or run Transact-SQL queries. This lesson describes how to set up and configure
Data Collector, and the built-in reports that you can use to analyze the data it collects.
Lesson Objectives
After completing this lesson, you will be able to:
Collection sets. Collection sets define the type of data to collect, the frequency of collection, and the
data retention period.
Collection mode. You can configure Data Collector to use either cached or non-cached mode. For
continuous collection, you should use cached mode. For periodic collection, you can use non-cached
mode, which uploads all collected data directly to the MDW.
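You can inspect the configured collection sets and their collection mode by querying the Data Collector views in msdb. The following query is a sketch based on the syscollector_collection_sets view documented in Books Online; verify the column names in your environment:

```sql
USE msdb;

-- Show each collection set, its mode (0 = cached, 1 = non-cached),
-- and whether it is currently running.
SELECT name,
       collection_mode,
       is_running
FROM dbo.syscollector_collection_sets;
```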
SQL Server includes pre-configured system data collection sets that you can use to cover most
common data collection requirements. These sets define the data that needs to be collected, how often it
should be uploaded to a central repository, and how long it should be retained there. The system data
collection sets include:
Disk Usage. This collection set gathers data about disk usage by database data and log files. It has
two collection items, called Disk Usage Data Files and Disk Usage Log Files. SQL Server uploads
the Disk Usage collection set to the MDW every six hours, and the MDW retains the data for 730
days by default.
Server Activity. This collection set gathers data about the resource usage by SQL Server instances
and the host server. It has two collection items, called Server Activity DMV Snapshots and Server
Activity Performance Counters. SQL Server uploads the cached Server Activity collection set to
the MDW every 15 minutes, and the MDW retains the data for 14 days by default.
Query Statistics. This collection set gathers data about queries, including statistics, query plans, and
individual queries. It has a single collection item called Query Statistics Query Activity. SQL Server
uploads the cached Query Statistics collection set to the MDW every 15 minutes, and the MDW
retains the data for 14 days by default.
Transaction Performance Collection Sets. This collection set is intended for use with the Analyze,
Migrate, and Report (AMR) tool. You can use the AMR tool to help you assess which tables would be
best suited for migration to the new In-Memory online transaction processing (OLTP) feature.
Reference Links: For more information about the AMR tool, see New AMR Tool:
Simplifying the Migration to In-Memory OLTP in the SQL Server Blog on the TechNet website.
In addition to the System Data Collection Sets, you can extend the SQL Server Data Collector by creating
user-defined Data Collection Sets. This functionality enables you to specify the data you wish to collect
and to use the infrastructure provided by the SQL Server Data Collector to collect and centralize the data.
The Data Collector can collect information from several locations:
It can query dynamic management views (DMVs) and dynamic management functions (DMFs) to
retrieve detailed information about the operation of the system.
It can retrieve performance counters that provide metrics about the performance of both SQL Server
and the entire server.
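For example, the following query (an illustrative sketch, not part of the Data Collector itself) shows the kind of wait-statistics data that the Server Activity collection set gathers from a DMV:

```sql
-- Top waits on the instance, similar to the DMV snapshot data
-- that the Server Activity collection set gathers.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```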
The data that the SQL Server Data Collector collects is stored in the MDW database. The process of
configuring Data Collector includes creating the MDW. The Data Collector provides three standard reports
and a rich set of sub-reports. However, you can also write your own reports based on either the data that
the System Data Collection Sets collect or data that user-defined Data Collection Sets collect.
1. Create the MDW. You should create the MDW on a separate server from the ones you plan to
monitor. This has several benefits, including:
You can access reports that combine information for all server instances in your enterprise.
You can offload the storage of collected data, and the reporting on it, from the production servers.
The MDW can grow quite quickly, although the speed of growth will depend on the frequency of
data sampling and the amount of activity on the monitored servers. You should use a test system to
obtain a realistic estimate of data growth for the MDW, and ensure that the SQL Server instance that
hosts the MDW in the production environment has adequate storage.
Configure data upload and retention schedules to suit your requirements to ensure that you do not
collect too much or too little data. If you need to retain data beyond the retention period, you can
extract it to another location.
2. Configure Data Collector. You can use the Configure Data Collection wizard to configure each
server instance to collect and upload the required data. The only processes that are run on the local
instances are the jobs used to collect and upload the data to the MDW. Some data is collected very
regularly, cached locally and later uploaded using SSIS. Other data is captured infrequently and
uploaded immediately.
The setup process creates the system data collection sets, which you can enable and disable as required.
Using a data collection set, you can specify which data to gather, the frequency of collection and upload,
and the retention period.
SQL Server 2014 includes database roles that enable you to grant access to both these areas, as described
in the following tables:

Management Data Warehouse Roles

Role          Description
mdw_admin     Full access to the MDW. Members can read, write, update, and delete data, and change the warehouse configuration.
mdw_writer    Members can upload and write data to the MDW. Any job that uploads collected data must run in this role.
mdw_reader    Members have read-only access to the MDW, for example to view the built-in reports.

Data Collector Roles

Role          Description
dc_admin      Full administrative access to the Data Collector configuration on the local instance.
dc_operator   Members can read and update the Data Collector configuration, and start and stop collection sets.
dc_proxy      Members have read access to the Data Collector configuration. Proxy accounts that run collection packages use this role.
You can use the Configure Management Data Warehouse wizard to modify membership of these roles. In
addition to being members of the mdw_writer role so that they can upload data, the jobs that collect the
data need the relevant permissions to access the data they are collecting.
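For example, to give an account read-only access to the MDW for reporting, you might run statements such as the following (a sketch; the MDW database name matches this module's demonstrations, and the login name is hypothetical):

```sql
USE MDW;  -- the MDW database created in this module's demonstrations
GO

-- Create a database user for an existing login (hypothetical name),
-- and add it to the mdw_reader role so it can read collected data.
CREATE USER [ADVENTUREWORKS\ReportReader]
    FOR LOGIN [ADVENTUREWORKS\ReportReader];

ALTER ROLE mdw_reader ADD MEMBER [ADVENTUREWORKS\ReportReader];
```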
Information in msdb
The Data Collector also logs configuration and other information to tables in the msdb database. The
Data Collector calls stored procedures to add the log information and also uses the SSIS logging features
for the SSIS packages that it executes. The data it logs to the msdb database uses the same retention
period settings as the Data Collection Sets to which it relates.
You can access this information by using either the log file viewer or by querying the following functions
and views:
fn_syscollector_get_execution_details()
fn_syscollector_get_execution_stats()
syscollector_execution_log
syscollector_execution_log_full
syscollector_execution_stats
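For example, the following query (a sketch; the exact columns you select may vary by version) lists recent Data Collector activity on a monitored instance, most recent first:

```sql
USE msdb;
GO

-- Recent Data Collector executions, including any failure messages.
SELECT TOP (20)
    start_time,
    finish_time,
    runtime_execution_mode,   -- collection or upload activity
    failure_message
FROM dbo.syscollector_execution_log
ORDER BY start_time DESC;
```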
You can configure which of the three levels of logging to use by calling the
sp_syscollector_update_collection_set system stored procedure. The lowest level of logging records
starts and stops of collector activity, the next level adds execution statistics and progress reports, and the
highest level adds detailed SSIS package logging.
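For example, the following call (a sketch; the collection set ID is illustrative, and you can look up IDs in msdb.dbo.syscollector_collection_sets) enables the most detailed logging level for a collection set:

```sql
USE msdb;
GO

-- Logging levels: 0 = starts and stops only, 1 = adds execution
-- statistics and progress, 2 = adds detailed SSIS package logging.
EXEC dbo.sp_syscollector_update_collection_set
    @collection_set_id = 1,   -- illustrative ID
    @logging_level = 2;
```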
Server Activity, which details CPU, memory, disk, network I/O, SQL Server waits, and SQL Server
activity.
Query Statistics, which lists the most expensive queries ranked by CPU, duration, reads, and writes.
Disk Usage, which shows trends and details of disk and file usage.
You can view the reports directly from this page, and then print or export them for further analysis.
Built-in Reports
You can view the Disk Usage, Server Activity, and Query Statistics reports by using SSMS.
The Server Activity Report is based on the Server Activity System Data Collection Set. The collection set
gathers a lot of SQL Server-related statistics such as waits, locks, latches, and memory statistics that are
accessed using DMVs. In addition, the collection set gathers Windows and SQL Server Performance
counters to retrieve information such as CPU and memory usage from the system and from the processes
that are running on it. The collection runs every 60 seconds and is uploaded every 15 minutes by default.
The history is retained for 14 days by default.
This report has a large number of linked sub-reports that provide much deeper information than is given
on the initial summary. The initial report is a dashboard that provides an overview. If you investigate this,
you will find that almost every item displayed is a possible drill-through point. For example, you can click
on a trend line in a graph to find out the values that make up the trend.
The Query Statistics report ranks the most expensive queries by measures that include:
Elapsed time
Worker time
Logical reads
Logical writes
Physical reads
Execution count
This report also includes a large number of linked sub-reports that can be used to drill through to further
levels of detail. For example, you can retrieve query plans from the expensive queries that were in
memory at the time the capture was performed.
Demonstration Steps
Configure the MDW
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running, and then log on
to MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. In the D:\Demofiles\Mod03 folder, run Setup.cmd as Administrator.
3. Start SQL Server Management Studio and connect to the MIA-SQL database engine using Windows
authentication.
4. In Object Explorer, expand Management, right-click Data Collection, click Tasks, and then click
Configure Management Data Warehouse.
5. In the Configure Management Data Warehouse Wizard, on the welcome page, click Next.
6. On the Configure Management Data Warehouse Storage page, in the Server name field, ensure
that MIA-SQL is displayed, click New, in the New Database dialog box, in the Database name field,
type MDW, click OK, and then click Next.
7. On the Map Logins and Users page, click Next, and then on the Complete the Wizard page, click
Finish.
8. When the configuration operations are complete, click Close.
9. In Object Explorer, expand Databases and note that a database named MDW has been created.
Configure Data Collection
1. In Object Explorer, expand Management, right-click Data Collection, click Tasks, and then click
Configure Data Collection.
2. In the Configure Data Collection Wizard, on the welcome page, click Next.
3. On the Setup Data Collection Sets page, to the right of the Server name field, click the ellipsis
button, in the Connect to Server dialog box, type MIA-SQL, click Connect, and then in the
Database name field, click MDW.
4. In the Select data collector sets you want to enable field, click the System Data Collection Sets
check box, and then click Next.
5. On the Complete the wizard page, click Finish, and then when configuration is complete, click
Close.
Collect and Upload Data
1. In Object Explorer, under MIA-SQL, under Management, expand Data Collection, expand System
Data Collection Sets, and view the available collection sets.
2. Right-click Disk Usage, click Collect and Upload Now, and then when the Collect and Upload
Data Set job completes, click Close.
3. Repeat the previous step for the Query Statistics and Server Activity collection sets.
View Management Data Warehouse Reports
1. In Object Explorer, right-click the MDW database, click Reports, point to Management Data
Warehouse, and then click Management Data Warehouse Overview.
2. On the Management Data Warehouse Overview: MDW page, in the MIA-SQL row, click the link in
the Disk Usage column.
3. On the Disk Usage Collection Set page, click AdventureWorks, and then on the Disk Usage for
database: AdventureWorks page, review the disk usage statistics.
4.
5.
Lesson 3
SQL Server Utility is a straightforward and easily configurable monitoring solution that enables you to
proactively monitor server health for your SQL Server instances. This lesson describes the key components
of SQL Server Utility, and how to configure and use it.
Lesson Objectives
After completing this lesson, you will be able to:
To configure SQL Server Utility, you must first create a Utility Control Point (UCP), which is at the heart of
the SQL Server Utility. The UCP hosts the Utility Management Data Warehouse (UMDW) database, which
stores the collected data, and hosts the policies that you can use to
specify server health criteria. When you enroll SQL Server instances to the SQL Server Utility, or view the
contents of the UMDW, you do so through the UCP.
Server enrollment
After you create the UCP, you can enroll servers to it. Enrolled servers are referred to as managed
instances. Each managed instance uses SQL Server Agent jobs to collect data into a Utility Collection Set,
which it uploads to the UMDW on the UCP. These jobs are very similar to the ones that the Data
Collector uses for the same purpose. Data collection occurs every 15 minutes, and you cannot configure
this. Utility Collection Sets include data about CPU usage and database file storage by instances and
data-tier applications, as well as CPU usage by the host computer.
Health policies
Health policies define the thresholds above which a resource is regarded as over-utilized or
under-utilized. You can use the default settings or modify them to specify your own thresholds.
The processes of creating a UCP and enrolling instances automatically create a set of SQL Server Agent
jobs to carry out the tasks associated with a UCP or a managed instance. You can view these jobs in
Object Explorer under the SQL Server Agent node. You can also configure schedules for these jobs, and
run them manually.
Utility Explorer
Utility Explorer is the tool you use to manage a UCP, including enrolling instances, configuring health
policies, and viewing server health data. It includes a dashboard view that provides an overview of server
health, and also serves as a starting point for drilling down to further investigate potential issues.
The minimum supported SQL Server version is SQL Server 2008 R2. If the instance is running SQL
Server 2012, this must be SQL Server 2012 Enterprise Edition.
The SQL Server Agent account on the instance must have read permission on Active Directory User
objects.
It is recommended that the SQL Server instance that will host the UCP is configured to be case-sensitive.
If it is configured to be case-insensitive, all managed instances should also be configured in that way.
If the selected instance has ever previously functioned as a UCP, you should remove all managed
instances and all UCP components before creating the new UCP.
Storage considerations
The UMDW database is created automatically, at the same time as the UCP. You can view it by using
Object Explorer in SSMS; the database name is sysutility_mdw. When planning storage for the UMDW,
consider the following points:
Data collection for the enrolled instances occurs every 15 minutes, and the default retention period
for the collected data is one year. You can change the retention period to one month, three months,
six months, or two years.
The average annual growth for a UMDW database is 2 GB. You should test the UCP in your own
environment to obtain a more accurate figure.
If you enroll more managed instances, this will increase the amount of storage required. On average,
each managed instance requires 20 MB of storage. Again, you should test this in your own
environment.
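As a rough illustration only, you can combine the averages quoted above into a simple estimate. The figures below are the ones from this lesson, not guarantees, and you should validate them against your own test measurements:

```sql
-- Rough UMDW sizing sketch: ~2 GB base annual growth plus
-- ~20 MB per managed instance (averages quoted in this lesson).
DECLARE @managed_instances int = 10,
        @years             int = 1;

SELECT (2048 * @years) + (20 * @managed_instances) AS estimated_size_mb;
-- 10 instances for one year: approximately 2,248 MB.
```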
Additional considerations
Additional factors to consider include:
If the UCP will monitor servers across multiple Windows domains, the domains must have two-way
trust relationships.
SQL Server Utility does not support data collection for FILESTREAM data.
You create a UCP by running the Create Utility Control Point wizard from the Getting Started tab in
Utility Explorer in SSMS. The wizard requires you to perform the following actions:
1. Specify the instance of SQL Server that will host the UCP, and a name for the UCP.
2. Specify a Windows domain account to run the utility collection set. You can use a specially-created
account, which is the recommended configuration, or use the SQL Server Agent service account.
3. The wizard then validates the SQL Server instance, and you can view the results and make any
required changes. The validation tests include the version of SQL Server in use, the presence of a
database called sysutility_mdw, and whether or not the server is already part of a SQL Server Utility
configuration, either as a UCP or a managed instance.
4. The wizard provides a summary of the specified configuration, and you can then create the UCP.
When planning to enroll a SQL Server instance to a UCP, the same basic requirements about the server
version, the SQL Server Agent account, and case sensitivity apply. You can enroll a server to a UCP by
opening the Utility Explorer in SSMS, and running the Enroll instance wizard from the Getting Started
tab. The wizard requires you to perform the following actions:
1. Specify the instance of SQL Server that you want to enroll.
2. Specify a Windows domain account to run the utility collection set. You can use a specially-created
account, which is the recommended configuration, or use the SQL Server Agent service account.
3. The wizard then validates the SQL Server instance, and you can view the results to make any required
changes. The validation tests include the version of SQL Server in use, and whether or not the server is
already enrolled with a UCP.
4. The wizard provides a summary of the specified configuration, and you can then enroll the instance
to the UCP. Because the wizard runs in the context of a specific UCP, you do not need to specify the
name of the UCP when enrolling an instance.
Health Policies
Health policies specify the thresholds that determine when a resource is marked as over-utilized or
under-utilized. You can define global health policies, and policies for individual instances and data-tier
applications (DACs).
Note: A DAC is a pre-packaged object
including all the database and instance objects that
a given application uses. DACs simplify application
development, deployment, and management. For
more information about DACs, see the
Understanding Data-Tier Applications topic in SQL
Server Books Online.
You can configure global health policies by using the Utility Explorer in SSMS. Global health policies
enable you to define utilization threshold values for CPU and data storage resources by all managed
instances and all DACs. You can specify threshold values for over-utilization and for under-utilization.
When a managed instance or a DAC reaches a certain threshold, it is recorded as over-utilized or
under-utilized, as appropriate.
For a managed instance, you can configure the following policies:
Database data file over-utilization for each SQL Server managed instance.
Database data file under-utilization for each SQL Server managed instance.
Database log file over-utilization for each SQL Server managed instance.
Database log file under-utilization for each SQL Server managed instance.
Disk space of storage volumes over-utilization for SQL Server managed instances.
Disk space of storage volumes under-utilization for SQL Server managed instances.
Disk space utilization is cumulative; when a threshold value is reached, disk space usage will not shrink
without administrative intervention. On the other hand, CPU utilization is volatile; it will drop and increase
depending on current conditions and workloads. To avoid situations where a single spike in CPU
utilization causes a CPU resource to be labeled as under-utilized or over-utilized, you can define
thresholds for two additional policies.
The first policy defines how frequently CPU over-utilization must reach the threshold before the
resource is marked as over-utilized. You can specify two settings for this policy: a time window that
defines the evaluation period, and a percentage figure that defines the proportion of policy evaluations
that are allowed to exceed the threshold during the evaluation period before the resource is marked as
over-utilized.
The second policy works in the same way, but defines under-utilization instead of over-utilization.
Reducing noise
If the settings for volatile resource policy evaluation are not well-planned, you can end up recording
useless and misleading policy violations, sometimes referred to as noise. To reduce unwanted noise,
consider the following options when configuring how frequently processor utilization should be in
violation before it is reported as over-utilized:
Make the evaluation period longer. The default evaluation period is one hour. Data collection occurs
every 15 minutes, so by default there are four data collection events per evaluation period. The
default percentage of evaluations that are allowed to be in violation before the resource is marked as
over-utilized is 20 percent, so a single violation causes the resource to be marked as over-utilized. If
you increase the evaluation period to six hours, there will be 24 data collection events per evaluation
period, and exceeding the 20 percent threshold would require five violations.
Increase the percentage of evaluations that are allowed to be in violation before the resource is
marked as over-utilized. If this threshold is higher, the system will allow more violations, which will
reduce the number of times a resource is marked as over-utilized.
Do not use policy thresholds for CPU utilization that are too low. If you select 50 percent as the
threshold figure, this will cause more violations than 80 percent.
Consider the following options when configuring how frequently processor utilization should be in
violation before it is reported as under-utilized:
If you do not change the default threshold value, the system will never record under-utilization.
The higher you make the threshold, the more under-utilization events the system will record.
Setting the percentage of evaluations that are allowed to be in violation before the resource is
marked as under-utilized to a high value reduces the number of times under-utilization is recorded.
For example, a value of 80 percent would require that 80 percent of all evaluation events reported
under-utilization.
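The threshold arithmetic described above can be sketched as follows. This is illustrative only, using the 15-minute collection interval and integer rounding:

```sql
-- How many in-violation samples are needed to exceed a policy
-- threshold, with one data collection every 15 minutes.
DECLARE @evaluation_hours int = 6,
        @threshold_pct    int = 20;

DECLARE @samples int = @evaluation_hours * 60 / 15;

SELECT @samples AS samples_per_period,
       (@samples * @threshold_pct / 100) + 1 AS violations_to_exceed;
-- For a six-hour period at 20 percent: 24 samples, 5 violations.
```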
Each of these values is indicated as over-utilized, well-utilized, under-utilized, or no data available. The
latter indicator is present if no managed instance (or DAC) is enrolled or if the first data collection event
has not yet occurred. This value will also be displayed if the uploading of data to the UMDW has failed.
You can also view storage utilization history, with options of one day, one week, one month, and one year.
To investigate reported utilization figures in more depth, you can click the specific item that you want to
investigate, such as Overutilized Database Files. This opens the managed instance or DAC and displays
the CPU Utilization, Storage Utilization, Policy Details, and Property Details tabs, which you can use to
investigate the issue further.
Create a UCP.
Demonstration Steps
Create a UCP
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running and then log on
to MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. If you did not complete the previous demonstration in this module, in the D:\Demofiles\Mod03
folder, run Setup.cmd as Administrator.
3. Start SQL Server Management Studio and connect to the MIA-SQL\SQL2 database engine instance
using Windows authentication.
4. In SQL Server Management Studio, on the View menu, click Utility Explorer.
5. In Utility Explorer, on the Getting Started tab, click Create a Utility Control Point (UCP).
6. In the Create a Utility Control Point wizard, on the Introduction page, review the information, and
then click Next.
7. On the Specify the Instance of SQL Server page, click Connect.
8. In the Connect to Server dialog box, in the Server name box, type MIA-SQL\SQL2, and then click
Connect.
9. On the Specify the Instance of SQL Server page, in the Utility Control Point Name box, note that
the default name is Utility, and then click Next.
10. On the Utility Collection Set Account page, select Use the SQL Server Agent service account and
click Next.
11. On the SQL Server Instance Validation page, wait for the validation operations to complete, and
then click Next.
12. On the Summary of UCP Creation page, review the information, and then click Next.
13. On the Utility Control Point Creation page, wait for the creation operations to complete, and then
click Finish.
14. In the Utility Explorer Content tab, note that there is one managed instance, but no data has been
collected yet.
15. On the Getting Started tab, note the Enroll instance of SQL Server with a UCP link; you can use
this to enroll additional servers in the UCP.
Run Jobs to Collect and Upload Data
1. In Object Explorer, under MIA-SQL\SQL2, expand SQL Server Agent, and then expand Jobs.
2. Right-click the sysutility_mi_collect_performance job, and then click Properties.
3. In the Job Properties - sysutility_mi_collect_performance dialog box, on the Schedules page, note
that this job is configured to run every 15 seconds. Then click Cancel.
Note: This job, and several others, is created when a server instance is enrolled in a UCP. The jobs run at
scheduled times to collect and upload health metrics to the UCP server.
4.
5. In the Start Job MIA-SQL\SQL2 dialog box, wait until the job completes, and then click Close.
6.
7. In the Start Job on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
8.
9. In the Start Job on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
1. In the Utility Explorer pane, right-click Utility (MIA-SQL\SQL2), and then click Refresh.
2. In the Utility Explorer pane, note that the Managed Instance Health chart now shows a single
instance that is well utilized.
You want to manage multiple instances of SQL Server so that you can easily identify over-utilization and
under-utilization of resources.
Objectives
After completing this lab, you will have:
You want to collect and monitor performance data from servers in your organization. To accomplish this
goal, you plan to use the Data Collector feature of SQL Server.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Configure a Management Data Warehouse
3. Configure Data Collection
4. Upload Data Collection Sets
5. View Management Data Warehouse Reports
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
Run Setup.cmd in the D:\Labfiles\Lab03\Starter folder as Administrator.
Configure a management data warehouse in the MIA-SQL instance of SQL Server. Name the
management data warehouse database MDW.
Configure data collection on the MIA-SQL instance. The server should upload management data to
the MIA-SQL management server, and you should enable the system data collector sets.
Disk Usage
Query Statistics
Server Activity
1. View the Management Data Warehouse Overview report for the MDW database.
2. View the detailed report for the Disk Usage collection set.
Results: At the end of this lab, you will have configured data collection on the MIA-SQL instance of SQL
Server.
You want to monitor server health on SQL Server instances in your organization. To accomplish this goal,
you plan to use the UCP feature of SQL Server.
The main tasks for this exercise are as follows:
1. Create a UCP
2. Collect and Upload Data
3. View the UCP Dashboard
4. Configure a Health Policy
5. Troubleshoot a Health Issue
1. Create a UCP on the MIA-SQL\SQL2 instance of SQL Server. The UCP should use the SQL Server
Agent service account as the utility collection account.
2. After the UCP has been created, view the default dashboard and verify that one database instance is
enrolled (MIA-SQL\SQL2 itself is automatically enrolled).
Run the following SQL Server Agent jobs on MIA-SQL\SQL2 to collect and upload health data:
sysutility_mi_collect_performance
sysutility_mi_collect_and_upload
sysutility_get_views_data_into_cache_tables
sysutility_get_cache_tables_data_into_aggregate_tables_hourly
sysutility_get_cache_tables_data_into_aggregate_tables_daily
Refresh the UCP Dashboard and verify that the managed instance is currently well utilized.
1. Configure the filespace utilization policy for all managed instances of SQL Server so that the data
space of a data file is considered overutilized when it is greater than 50%.
2. Run the following SQL Server Agent jobs on MIA-SQL\SQL2 to collect and upload health data:
sysutility_mi_collect_performance
sysutility_mi_collect_and_upload
sysutility_get_views_data_into_cache_tables
sysutility_get_cache_tables_data_into_aggregate_tables_hourly
sysutility_get_cache_tables_data_into_aggregate_tables_daily
3. Refresh the UCP Dashboard and verify that the managed instance is currently overutilized.
4. View the overutilized database files for managed instances, and identify the specific database file that
has exceeded the threshold set by the health policy.
Results: After completing this exercise, you will have created a UCP on the MIA-SQL\SQL2 instance of SQL
Server.
Question: If the UCP reports CPU as consistently over-utilized, what steps would you take next to
diagnose and resolve the issue?
The options for monitoring resource usage and server health for SQL Server.
Data Collector.
Review Question(s)
Question: With Data Collector, why is it better to have a central management data
warehouse for data collection than local installations?
Question: What challenges have you faced when planning to monitor server health and performing
health monitoring? How have you attempted to overcome these problems?
Module 4
Consolidating Database Workloads with SQL Server 2014
Contents:
Module Overview
Module Overview
In previous modules, you have learned about using the Microsoft Assessment and Planning (MAP) toolkit
to discover database server instances across the enterprise. You have seen how to use policy-based
management to apply consistent configuration settings and rules across database servers, and monitoring
server health across the enterprise, including identifying servers where resources are underutilized or
overutilized. However, in many cases, even with tools such as policy-based management and Microsoft
System Center, managing multiple database servers can be time-consuming and problematic.
A common scenario in an enterprise is to counter the effects of database server proliferation by
consolidating database workloads onto fewer servers. Consolidation can be the first step towards creating
a standardized IT service environment, often implemented as a private cloud containing virtualized
servers.
This module explains the options for implementing a consolidated database server infrastructure for SQL
Server 2014. The module also describes the different ways that you can manage a consolidated
infrastructure.
Objectives
After completing this module, you will be able to:
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Why Consolidate?
Consolidation typically involves streamlining IT
infrastructure by using fewer, more powerful servers
to host an organization's applications and services.
In the context of SQL Server, this usually means
grouping databases onto fewer instances, grouping
instances onto fewer Windows servers, and using
virtualization to host multiple Windows servers on a
single physical server.
A well-planned, consolidated infrastructure offers
many advantages over a non-consolidated setup.
These advantages include:
More efficient use of hardware. It is not unusual for many of an organization's servers to have spare
resources (such as CPU and memory) that are not utilized by the database applications that the
servers host. By consolidating, you can reclaim these unused or underused resources, which helps
ensure that servers are running more efficiently and closer to their maximum capacity.
Centralized, standardized, and simplified management. Databases that are distributed across a
variety of different servers can be difficult to manage because of inconsistent configuration, security
requirements, and hardware. Consolidation brings greater standardization, reduces inconsistencies,
and results in more efficient, simplified management and monitoring.
Reduced power requirements. Because a consolidated infrastructure requires fewer physical servers,
your organization can make substantial savings on the amount of power it consumes.
Reduced amount of space required for physical servers. Many organizations' data centers suffer
from a lack of space. Consolidating reduces server numbers and helps to free up space.
Favorable licensing conditions. Consolidating can help to reduce licensing costs. For example, you
can purchase a SQL Server Enterprise Edition license that covers all the physical CPU cores for a
machine. This will allow you to run SQL Server instances in an unlimited number of virtual machines
on that server.
Reduced costs due to the factors listed earlier. The combined effect of the advantages outlined
earlier is to offer improved operational efficiencies at a reduced cost to the organization.
Note: For more information about SQL Server licensing, view the Microsoft SQL Server 2014
Licensing Guide from the Microsoft download website.
Database-level consolidation
Possibly the simplest way to consolidate is to gather databases onto a single instance of SQL Server. This
scenario provides various benefits, including:
Simplified maintenance. For example, tasks such as patching are easier because, instead of multiple
SQL Server versions or instances that have different patch levels, there is only a single version of SQL
Server to consider.
Resource management through Resource Governor. Because this strategy uses only a single
instance of SQL Server, you can use Resource Governor to optimize performance and minimize
contention.
Database resource monitoring through Utility Control Point. If you register databases as data-tier
applications, you can monitor resource utilization for each database, and easily identify overutilization
and underutilization of resources.
Simplified security. You only need to create and manage a single set of server logins and a single
SQL Server Service Master Key for Transparent Data Encryption.
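As an illustration of the Resource Governor approach, the following sketch (run in the master database; the pool, group, and login names are hypothetical, as is the 30 percent limit) caps CPU for one workload and routes a login to it:

```sql
-- Create a resource pool and workload group that cap CPU for a
-- reporting workload (names and limits are illustrative).
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- Classifier function: route the hypothetical ReportingUser login
-- to the reporting group; everything else uses the default group.
CREATE FUNCTION dbo.fn_rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE WHEN SUSER_SNAME() = N'ReportingUser'
                 THEN N'ReportingGroup'
                 ELSE N'default' END);
END;
GO

ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```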
Instance-level consolidation
Consolidating multiple instances on a single Windows server enables you to better isolate database
applications. This can be useful in addressing various requirements that you cannot achieve by
consolidating on a single instance, including:
The need to isolate security for databases. By hosting databases on separate instances, you can
ensure that each database has its own dedicated security components, such as server logins and
service master keys. This can help to minimize security lapses, such as users wrongly being able to
access a database because of incorrect security configuration, and helps prevent all databases from
becoming compromised in the event of a security failure in one database.
The need to maintain servers at different patch levels. Every instance has its own set of SQL Server
binaries, so you can apply updates separately for each one. This can be useful if, for example, you
need to support client applications that are not compatible with certain updates.
When you consolidate at the instance level, the instances still have a common dependency on the
Windows operating system that hosts them.
Virtualization
When you consolidate by using virtualization, each database has its own dedicated instance, and each
instance has its own dedicated virtual machine with an individual operating system. This approach
provides the greatest degree of isolation for consolidated databases because each virtual machine has its
own local security accounts, and you can configure file system permissions independently on each virtual
machine. In a virtualized infrastructure, you can use the Hyper-V management tool to allocate CPU and
memory resources to each virtual machine.
Virtualization also enables you to develop a more flexible approach to resource and database
management, making it possible to respond to changes more quickly. For example, you can use Hyper-V
Live Migration to migrate virtual machines between physical servers and disks in dedicated storage
solutions without interrupting user access to databases. You can use the SQL Server Sysprep tool to build
a pre-prepared library of virtual machines, complete with SQL Server databases you can provision on
demand. To centrally manage your virtual infrastructure, you can use System Center Virtual Machine
Manager.
Hybrid strategies
You can combine the approaches outlined earlier to suit your specific requirements. For example, you can
implement instance-level consolidation, but host multiple databases on some instances; or you can deploy
multiple virtual machines, some of which have a single SQL Server instance and others with multiple
instances.
A hybrid approach is the most common strategy, and many enterprises have already made the move to
predominantly virtualized server infrastructure, in which virtual machines host one or more instances of
SQL Server.
To help avoid bottlenecks caused by insufficient hardware resources, a typical consolidation server should
have multiple CPU cores, large amounts of RAM and fast storage, and multiple network cards. You should
plan your server hardware based on the requirements of the unconsolidated database applications. For
example, if an application currently uses 4 GB of RAM, then, in general, that would be the minimum
amount you would allocate to the application on the consolidation server. You should also assess CPU
usage carefully. It is not unusual to find that applications underuse CPU resources. If this is the case with
your applications, you may be able to use fewer CPUs on the consolidation server than you previously did
on the unconsolidated servers combined.
As discussed in the previous topic, you can consolidate at the database or instance level, or you can use
virtualization. You can also create a hybrid consolidation infrastructure that utilizes elements of each of
these. The factors that will influence your choice of consolidation strategy include:
Security
Resource management
Tempdb
Application density
Maintenance schedules
Compatibility issues
Security
The three consolidation strategies differ in the degree of security isolation that they provide for SQL
Server databases. Virtualization provides the greatest degree of isolation. With this strategy, each
database and instance has its own dedicated operating system; databases do not share SQL Server
binaries, system databases, SQL logins, local Windows accounts, or Windows operating system files.
Instance-level consolidation, where a single operating system hosts multiple SQL Server instances, enables
isolation only at the SQL Server instance level, so databases do not use common SQL Server logins or
share SQL Server binaries. With database-level consolidation, all databases use the same SQL Server
binaries and logins to control security. Additionally, if you use Transparent Database Encryption (TDE), all
databases share the same service master key and all database encryption certificates are stored in the
master database of the SQL Server instance. You can take advantage of standard SQL Server security
features, such as SQL Server permissions and SQL Server Audit, with all the consolidation strategies. You
should ensure that the consolidation strategy you choose meets the security compliance requirements for
your organization.
Resource management
You need to ensure that databases have adequate resources available to them to support periods of peak
utilization and to enable growth. The way that you manage resource allocation and contention depends
on the consolidation strategy you use. With virtualization, you can allocate memory and CPU resources for
each virtual machine; with instance-level consolidation, you can allocate resources by using SQL Server
CPU affinity and SQL Server maximum memory settings; with database-level consolidation, you can
allocate resources by using Resource Governor. You will learn more about resource management in
consolidated infrastructures in the next topic.
Tempdb
With database-level consolidation, applications that have a high degree of dependency on tempdb can
be in direct competition with each other, and this can create a bottleneck that will affect performance. If
you have applications that use tempdb heavily, consider instance-level consolidation or virtualization.
Application density
Density refers to the number of database applications that a single machine can support. When you use
database-level consolidation, with just a single SQL Server instance, you do not have the overheads
associated with maintaining multiple instances or virtual machines. Consequently, you can potentially host
more applications on a specific server than you could if you used multiple instances or virtual machines.
You can use SQL Server replication, log shipping, and AlwaysOn Availability Groups in all three
consolidation infrastructures to provide high availability at the database level. You can also use SQL Server
AlwaysOn Failover Cluster Instances (FCIs) to provide instance-level high availability.
Note: SQL Server 2014 supports database mirroring, but because this feature is now
deprecated, you should avoid using it in new implementations.
Live Migration enables you to protect the entire virtual machine in a virtualized infrastructure. By using
Live Migration, you can move running virtual machines between physical server hosts without loss of
connectivity for clients. Your choice of consolidation strategy might be limited by the high availability
solution that will be used. For example, if you plan to use AlwaysOn FCIs, overall instance health will be
the trigger for failover. Consequently, database-level consolidation might not be appropriate, because
failure of an individual database application might not trigger the required failover.
Maintenance schedules
Databases and servers with conflicting maintenance requirements might not be good candidates for
consolidation. For example, a database that must be backed up during a specific window might cause
performance issues for other databases that are particularly busy at this time if all these databases were
hosted on the same server. When assessing candidates for consolidation, you should consider all aspects
of workloads and maintenance schedules to avoid potential conflicts of this type.
Compatibility
It is fairly common that some database applications must maintain a specific patch level so that they
remain compatible with third-party applications. Similarly, a database application might depend on a
particular feature that is not supported in all versions of SQL Server. For example, an application that runs
on SQL Server 2005 and is built around Notification Services would not function on SQL Server 2008 and
later SQL Server versions. This is because Notification Services is not supported in these later versions. You
should carefully assess issues of compatibility when identifying candidates for consolidation, and include
Windows versions and patch levels in your considerations, as well as versions and patch levels of SQL
Server.
Lesson 2
To help ensure that your consolidated database server infrastructure delivers optimal performance, you
need to balance resource allocation and usage. The way you do this depends upon the method of
consolidation that you implement. This lesson explains the options for managing server resources in three
common consolidation scenarios.
Lesson Objectives
After completing this lesson, you will be able to:
Explain how to manage resources for a single instance of SQL Server by using Resource Governor.
Resource pools
You create resource pools to define specific limits for resource usage. You can use the CREATE RESOURCE
POOL statement with the MAX and MIN options to define these limits. For example, you can create a
resource pool that has a maximum of 20 percent of the available memory, 25 percent of the available
CPU resources, and a maximum of 100 I/O operations per second (IOPS) per volume.
Note: The ability to manage I/O by using Resource Governor is a new feature in SQL Server
2014. You can use the MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME settings when you
create a resource pool to control disk I/O on a per volume basis.
In addition to maximum values, you can also define minimum values for memory, CPU, and IOPS.
Maximum values do not represent hard limits because the limit applies only when there is contention for
resources. When there is no contention, the pool can use as much of the memory or CPU resources as
there is available. For CPU resources, you can use the CAP option to specify a hard limit that will apply at
all times, regardless of whether contention for CPU resources exists. Although the CAP option is more
limiting than using the MAX, it has the advantage of making query response times more predictable
because the same resources are available at all times.
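The pool described above, combining minimum and maximum limits with a CPU cap, might be created as in the following sketch. The pool name ReportingPool and the specific limit values are illustrative assumptions, not part of the course files; adjust them to your own workloads.

```sql
-- Sketch: create a pool limited to 20 percent of memory, 25 percent of CPU
-- (hard-capped at 40 percent), and 100 IOPS per volume.
-- ReportingPool and all values here are examples only.
CREATE RESOURCE POOL ReportingPool
WITH
(
    MIN_CPU_PERCENT = 5,
    MAX_CPU_PERCENT = 25,      -- soft limit; applies only under contention
    CAP_CPU_PERCENT = 40,      -- hard limit; applies at all times
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 20,
    MAX_IOPS_PER_VOLUME = 100
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```

Note that the configuration does not take effect until you run ALTER RESOURCE GOVERNOR RECONFIGURE.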
An instance of SQL Server includes two built-in resource pools:
Default. The Default resource pool defines the resources that are available to client connections
belonging to the Default workload group. The next section describes workload groups.
Internal. The internal pool defines the resources available to SQL Server for its own internal
processes.
Note: The memory values that Resource Governor manages relate to the working memory
for queries only; Resource Governor does not manage the in-memory buffer pool. This is because
the data in the buffer pool isn't owned by a connection, so you can't control how much memory
SQL Server will use to service queries from any particular connection.
Workload groups
A workload group is a logical container for client connections. Each workload group has an associated
resource pool that defines the resources that are available to the workload group, and by extension, to
the client connections in that group. You can associate multiple workload groups with a single resource
pool, or map workload groups to resource pools on a one-to-one basis. When you plan to allocate
multiple workload groups to a common resource pool, you can use the IMPORTANCE keyword in the
CREATE WORKLOAD GROUP statement to prioritize each workload group. You can set IMPORTANCE to
LOW, MEDIUM, or HIGH.
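As a minimal sketch, the following statement creates a workload group with LOW importance and binds it to a resource pool. The names ReportsGroup and ReportingPool are hypothetical examples, not objects defined by the course files.

```sql
-- Sketch: create a low-priority workload group and associate it with a pool.
-- ReportsGroup and ReportingPool are assumed example names.
CREATE WORKLOAD GROUP ReportsGroup
WITH (IMPORTANCE = LOW)
USING ReportingPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```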
An instance of SQL Server includes two built-in workload groups:
Default. The Default workload group is where SQL Server allocates incoming connections that are not
allocated to a different pool by the classifier function.
Internal. The internal workload group is a logical container for SQL Server's internal processes.
Classifier function
The classifier function is a user-defined function that allocates incoming client connections to workload
groups. You can use the following system functions to classify connections: HOST_NAME, APP_NAME,
SUSER_NAME, SUSER_SNAME, IS_SRVROLEMEMBER, IS_MEMBER, LOGINPROPERTY, and
ORIGINAL_DB_NAME. You can only associate one classifier function with Resource Governor, so you need
to account for all the different types of connections in one function.
The following code example creates a classifier function that uses the ORIGINAL_DB_NAME system
function to classify incoming connections based on the name of the database in the connection string.
Creating a classifier function
CREATE FUNCTION dbo.ClassifyWorkloads()
RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
DECLARE @Group SYSNAME
IF ORIGINAL_DB_NAME() = 'ResellerSales'
SET @Group = 'ResellersGroup'
ELSE IF ORIGINAL_DB_NAME() = 'Products'
SET @Group = 'ProductsGroup'
ELSE SET @Group = 'Default'
RETURN @Group
END;
GO
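A classifier function has no effect until you register it with Resource Governor and reconfigure. Using the dbo.ClassifyWorkloads function created above, the registration looks like this:

```sql
-- Register the classifier function and apply the configuration.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.ClassifyWorkloads);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```

Only one classifier function can be registered at a time, so registering a new function replaces any previous one for all future connections.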
Resource Governor works best with workloads that have similar characteristics. When you need to
handle mixed workloads, consider using other methods of resource management, such as CPU
affinity.
When you reconfigure CPUs and memory, for example by adding or removing CPUs, modify resource
pools to take account of the new configuration.
Ensure that you create a classifier function that is efficient. For example, avoid using lookup tables in
the classifier function because this can lead to time-outs. Always test the classifier function thoroughly
before using it in production.
Resource Governor is only available with SQL Server 2014 Enterprise edition.
You can only use Resource Governor to manage the resources used by the database engine for a
single SQL Server instance. You cannot balance resource usage across instances of SQL Server by
using Resource Governor. Furthermore, you cannot use Resource Governor to manage Reporting
Services or Analysis Services.
Reference Links: For more information about Resource Governor and best practices for
configuring it, download the white paper Using the Resource Governor from the MSDN library.
IOPS Governance
Competition for disk resources can be a significant
limiting factor for workload performance, and it has
historically been difficult for administrators to
balance the needs of competing workloads for
physical disk resources on a SQL Server instance.
SQL Server 2014 Resource Governor provides
administrators with the ability to manage the
read/write demands of competing workloads by
limiting the allowed IOPS for individual resource
pools.
Resource Governor allocates minimum and maximum IOPS values to resource pools for each disk volume.
A disk volume is a portion of a disk to which Windows has assigned a file system; each disk volume is
typically identified by a drive letter. You can use the new MIN_IOPS_PER_VOLUME and
MAX_IOPS_PER_VOLUME settings with the CREATE RESOURCE POOL and ALTER RESOURCE POOL
Transact-SQL statements to configure IOPS management:
MIN_IOPS_PER_VOLUME. This setting specifies the minimum IOPS to be allocated to a given resource
pool per volume. You can specify values between 0 and 2,147,483,647 for this setting. A value of 0
means that there is no configured minimum value for the resource pool. The default value is 0.
MAX_IOPS_PER_VOLUME. This setting specifies the maximum IOPS that a given resource pool can consume per volume. You can specify values between 0 and 2,147,483,647 for this setting. A value of 0 means that the pool's IOPS consumption is not managed. The default value is 0.
The following code example configures a maximum IOPS value for the IOPS_Pool resource pool:
Configuring MAX_IOPS_PER_VOLUME
ALTER RESOURCE POOL IOPS_Pool
WITH (MAX_IOPS_PER_VOLUME = 300);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
You should configure a MAX_IOPS_PER_VOLUME value greater than 0 on each resource pool for which
you want Resource Governor to manage IOPS. Any resource pool that does not have a
MAX_IOPS_PER_VOLUME defined does not have its IOPS consumption managed by Resource Governor,
and is able to consume IOPS without restriction, which can lead to unexpected results. For example, imagine
that you have two resource pools, and you define MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME
values for one of them but do not define a MAX_IOPS_PER_VOLUME for the other pool. In this case, you
might expect Resource Governor to honor the minimum IOPS value for the first resource pool when there
is competition for IOPS with the second pool. However, this will not happen because the second resource
pool is not restricted by Resource Governor and can consume IOPS regardless of the minimum IOPS
setting for the first resource pool. To avoid this kind of scenario, you could set the value for the second
pool to a suitable maximum value. Alternatively, you could set the MAX_IOPS_PER_VOLUME for the
second pool to the maximum value (2,147,483,647). This ensures that the resource pool is managed and
restricted by Resource Governor, but enables the pool to consume unlimited IOPS when it is not
competing with other resource pools.
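The workaround described above might be sketched as follows. SecondPool is a hypothetical pool name used only for illustration.

```sql
-- Sketch: give the otherwise-unmanaged pool the maximum possible IOPS ceiling,
-- so that Resource Governor manages it while effectively leaving it unlimited.
ALTER RESOURCE POOL SecondPool
WITH (MAX_IOPS_PER_VOLUME = 2147483647);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```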
In this demonstration, you will see how to use Resource Governor to manage resource allocation between
two databases on an instance of SQL Server.
Demonstration Steps
Create Resource Pools
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, that you have logged on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd, and that you have run Setup.cmd in the D:\Demofiles\Mod04 folder as Administrator.
2. Start SQL Server Management Studio, and connect to the MIA-SQL database engine using Windows authentication.
3. In the query window, select the code under the comment Create resource pools and click Execute. This code creates two resource pools, named Low Priority and High Priority.
4. In the query window, select the code under the comment Create workload groups and click Execute. This code creates a workload group named DemoDB1WG that uses the Low Priority resource pool, and a workload group named DemoDB2WG that uses the High Priority resource pool.
5. Select the code under the comment Reconfigure resource governor and click Execute. This reconfigures Resource Governor, enabling the resource pools and workload groups you have created.
6. In the query window, select the code under the comment Create classifier function and click Execute. This code creates a function that returns the appropriate workload group name for the current session, based on the name of the database to which the connection has been made.
7. Select the code under the comment Add classifier function to resource governor and click Execute. This reconfigures Resource Governor so that the function you created is used as the classifier function for all future connections.
2. In Computer Management, in the pane on the left, expand Performance, expand Monitoring Tools, and then click Performance Monitor.
3. If any counters are listed under the chart, select them and press Delete so that the chart is blank.
4. On the toolbar, click the Add button.
5. In the Add Counters dialog box, in the list of objects, expand the SQLServer: Resource Pool Stats object, and then click CPU control effect %. Hold the CTRL key and click the following counters:
6. In the Instances of selected object list click High Priority, hold the Ctrl key and click Low Priority, and then click Add. This adds the counters you selected for both resource pool instances.
7. In the Add Counters dialog box, in the list of objects, expand the SQLServer:Workload Group Stats object, and then click CPU usage %.
8. If the Instances of selected object list is empty, click CPU usage % again. In the Instances of selected object list, click DemoDB1WG, hold the Ctrl key and click DemoDB2WG, and then click Add.
9. In the Add Counters dialog box, click OK. Note that Performance Monitor displays the counter values. Click any of the counters under the chart and press Ctrl+H, and note that this highlights the currently selected counter in the graph.
10. Wait for the red line (which indicates the current time) to return to the beginning of the chart, and
then in the D:\Demofiles\Mod04 folder, double-click DemoDB1Query.cmd to start a user query
workload.
11. Observe the values of the counters in Performance Monitor until the red bar is approximately a third
of the way across the chart.
12. With the DemoDB1Query.cmd command still running, in the D:\Demofiles\Mod04 folder, double-click DemoDB2Query.cmd to start the help desk workload. Observe the values of the counters in
Performance Monitor until the red bar is approximately two thirds of the way across the chart.
13. Close the console window for the DemoDB2 query and observe the values of the counters in
Performance Monitor until the red bar is almost all the way across the chart. Then in Performance
Monitor, click the Freeze Display button before the red line reaches the end of the chart.
14. Close the console window for the DemoDB1Query query.
15. View the counters in the chart, and note the following:
The CPU control effect % counter shows the extent to which Resource Governor influenced CPU
utilization.
The CPU target and actual usage for the DemoDB1 query was noticeably reduced during the
period when the DemoDB2 query was running.
Neither resource group required its full allocation of memory; the workloads were CPU-intensive, but they were not memory-intensive.
The decimal value is a representation of a binary bit mask. On a server with up to eight CPU cores, you
use a one-byte (eight-bit) mask. For systems with up to 16 cores, you use a two-byte mask; for systems
with up to 24 cores, you use a three-byte mask; and systems with up to 32 cores, you use a four-byte
mask. For systems with 33-64 cores, you should additionally configure the affinity64 mask server
configuration option. For each position in an affinity mask, a value of 1 indicates that the instance can use
a specific processor, and a value of 0 indicates that it cannot. By converting the resulting binary value, you
can obtain the decimal value to use with sp_configure. For example, the binary bit mask 01100011 indicates that an instance can use four specific CPU cores. Converting 01100011 to decimal gives 99, and you can use this value with sp_configure to configure the mask.
The following code example configures a bit mask of 01100011.
Configuring a CPU affinity mask by using sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'affinity mask', 99;
RECONFIGURE;
GO
Note: After running sp_configure, you should run the RECONFIGURE command to ensure
that SQL Server implements the changes.
Best Practice: By default, the CPU affinity mask is not enabled, and a SQL Server instance
can use all the available CPUs on a server as required. In most cases, this configuration is optimal,
and you will not need to change affinity settings. To avoid negatively affecting performance, you
should only configure CPU affinity and memory settings after very thorough consideration and
testing.
An affinity I/O mask creates a binding between SQL Server disk I/O and one or more CPUs, enabling SQL
Server to offload I/O completion activities to a dedicated CPU. The affinity I/O mask was created to
improve performance for 32-bit servers with limited memory resources, and you should not enable it for
64-bit servers.
You can configure affinity I/O mask from the Processors page of the properties of a SQL Server instance
or by using sp_configure.
You can define the memory available to a SQL Server instance by using the Server Memory Configuration
options. You can use the min server memory and max server memory settings to configure the
minimum and maximum memory that an instance can use. By default, SQL Server dynamically configures
this, requesting memory from the operating system and releasing it again as required. Setting a minimum
amount ensures that SQL Server has a certain amount of memory available to it because it will not release
memory below the configured amount. The total of the minimum memory settings for all the instances on a server should be at least 1 to 2 GB less than the server's total memory. Setting a maximum amount
prevents SQL Server from taking too much memory, which preserves memory for use by competing
applications and instances. You can configure Server Memory Configuration options from the Memory
page of the properties of a SQL Server instance or by using sp_configure.
Note: Establishing the maximum server memory setting too low can result in poor
performance.
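The memory settings described above can also be applied with sp_configure, as in the following sketch. The values shown (1024 MB minimum, 4096 MB maximum) are illustrative assumptions only; size them to your server and to any competing instances.

```sql
-- The server memory settings are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Example values only (in MB); adjust for your environment.
EXEC sp_configure 'min server memory', 1024;
EXEC sp_configure 'max server memory', 4096;
RECONFIGURE;
GO
```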
Demonstration Steps
Configure Server Settings
1. Ensure that you have completed the previous demonstration in this module.
2. In SQL Server Management Studio, in Object Explorer, on the Connect drop-down list, click Database Engine. When prompted, use Windows authentication to connect to the MIA-SQL\SQL2 instance of the database engine.
3. In Object Explorer, right-click the MIA-SQL\SQL2 instance, and then click Properties.
4. In the Server Properties - MIA-SQL\SQL2 dialog box, on the Memory page, change the Maximum server memory (in MB) value to 768.
5. On the Processors page, clear the Automatically set processor affinity mask for all processors check box, and then expand All, expand NumaNode0, and select the Processor Affinity check box for CPU0. Then click OK.
6. In Object Explorer, on the Connect drop-down list, click Database Engine. When prompted, use Windows authentication to connect to the MIA-SQL\SQL3 instance of the database engine.
7. In Object Explorer, right-click the MIA-SQL\SQL3 instance, and then click Properties.
8. In the Server Properties - MIA-SQL\SQL3 dialog box, on the Memory page, change the Maximum server memory (in MB) value to 512.
9. On the Processors page, clear the Automatically set processor affinity mask for all processors check box, and then expand All, expand NumaNode0, and select the Processor Affinity check box for CPU1. Then click OK.
10. Close SQL Server Management Studio without saving any files.
Monitor Instance Resource Utilization
1. Maximize Computer Management, and on the Performance Monitor node, under the chart, click the first counter, hold SHIFT and click the last counter, and press Delete.
2. On the toolbar, click the Add button.
3. In the Add Counters dialog box, in the list of objects, expand the Processor object, and then click % Processor Time. If the Instances of selected object list is empty, click % Processor Time again, and then click the 0 instance, hold the CTRL key and click the 1 instance, and then click Add.
4. In the Add Counters dialog box, in the list of objects, expand the MSSQL$SQL2: Memory Manager object, and then click Total Server Memory (KB). Then click Add.
5. In the Add Counters dialog box, in the list of objects, expand the MSSQL$SQL2: Resource Pool Stats object, and then click CPU usage %. If the Instances of selected object list is empty, click CPU usage % again, and then select the default instance and click Add.
6. Repeat the previous two steps to add the same counters for the equivalent MSSQL$SQL3 objects.
7. In the Add Counters dialog box, click OK.
8. In Performance Monitor, click the Unfreeze Display button. Note that Performance Monitor displays the counter values.
9. In the D:\Demofiles\Mod04 folder, double-click QueryMIA-SQL2.cmd, and then view the counters in
Performance Monitor. Note the following:
The Total Server Memory (KB) counter for the MSSQL$SQL2 instance rises to the 768 MB limit
you set previously.
The % Processor Time counter for instance 0 rises in correlation with the CPU usage % counter
for the MSSQL$SQL2 instance.
11. In the D:\Demofiles\Mod04 folder, double-click QueryMIA-SQL3.cmd, and then view the counters in
Performance Monitor. Note the following:
The Total Server Memory (KB) counter for the MSSQL$SQL3 instance rises to the 512 MB limit
you set previously.
The % Processor Time counter for instance 1 rises in correlation with the CPU usage % counter
for the MSSQL$SQL3 instance.
12. Close the command window for the MIA-SQL\SQL3 query and close Computer Management.
13. In the D:\Demofiles\Mod04 folder, run CleanUp.cmd as Administrator.
Hyper-V Manager
You can use Hyper-V Manager to define memory and processor settings for virtual machines. Options for configuring memory include:
Memory buffer. This percentage value defines the preferred amount of total memory that the virtual
machine will try to reserve. However, this value does not guarantee a specific percentage; the exact
amount depends on the current demand from other virtual machines.
Memory weight. You can specify a weighting for each virtual machine that prioritizes access to
memory relative to other virtual machines.
Number of logical processors. This specifies the number of the physical host server's CPU cores that
the virtual machine can use.
Resource control. You can use these to balance CPU resources between virtual machines. You can
set reserve percentage (effectively a dynamic minimum value) and maximum percentage values, and
specify a relative weight to prioritize access to CPU resources relative to other virtual machines.
System Center Virtual Machine Manager is a tool for managing the entire virtual infrastructure. You can
use it to perform a wide variety of tasks, including:
Rapidly provisioning new servers by deploying virtual machines from a library of pre-configured
templates.
Migrating virtual machines between host servers without losing client connectivity by using Live
Migration.
Managing and monitoring resource usage for each virtual machine in the infrastructure.
Reference Links: For more information about System Center 2012 R2 Virtual Machine
Manager, visit the Virtual Machine Manager page on the TechNet website.
Objectives
After completing this lab, you will have:
The InternetSales database experiences a large amount of write activity, and you want to use Resource
Governor to manage disk contention on the MIA-SQL instance between connections to the InternetSales
database and the ResellerSales database. You will create two resource pools, one for each database, and
apply resource utilization limits to ensure that the InternetSales workload can continue to run without
performance being compromised when the ResellerSales workload runs. You will create two workload
groups, one for each resource pool, and create a classifier function that uses the ORIGINAL_DB_NAME function to allocate user sessions to the workload groups based on the database name in the connection string. You will then use sample workloads to test Resource Governor to ensure that the classifier
function works correctly.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Configure Resource Governor
3. Observe Workload Performance
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then log on to MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Run Setup.cmd in the D:\Labfiles\Lab04\Starter folder as Administrator. This can take a few minutes to complete.
1. In the MIA-SQL database engine instance, create a resource pool named Low Priority with the following settings:
2. Create a second resource pool named High Priority with the following settings:
3. Create a workload group named ResellerSalesWG that uses the Low Priority resource pool. The workload group should have the following settings:
Maximum requests: 10
Importance: Low
4. Create a second workload group named InternetSalesWG that uses the High Priority resource pool. The workload group should have the following settings:
Importance: High
5. Reconfigure resource governor to enable the resource pools and workload groups.
6. Create a classifier function named dbo.fn_clasify_dbs that classifies connections into workloads based on the ORIGINAL_DB_NAME function. The function should classify connections as follows:
7. Reconfigure resource governor to use the dbo.fn_clasify_dbs classifier function you created.
1. Use the Performance Monitor interface in the Computer Management administrative tool to monitor the following SQLServer: Resource Pool Stats counters for the High Priority and Low Priority SQL Server resource pool instances:
CPU usage %
3. With the reseller workload still running, run InternetSales.cmd in the D:\Labfiles\Lab04\Starter folder to simulate the Internet Sales workload and observe the effects on resource pool utilization.
5. Note the following:
The CPU target and actual usage for the Reseller workload was noticeably reduced during the period when the Internet workload was running.
Disk write IO was throttled for the low priority workload group, but not for the high priority resource group.
Neither resource group required its full allocation of memory; the workloads were CPU-intensive and required considerable IO to write data, but they were not memory-intensive.
The InternetSales database is hosted on the MIA-SQL instance. To ensure that the InternetSales
database has the resources it requires, you will configure processor affinity and memory settings. You will
first configure the MIA-SQL\SQL2 and MIA-SQL\SQL3 instances to use only one CPU each, and configure
MIA-SQL to use two CPU cores. You will then configure the MIA-SQL\SQL2 and MIA-SQL\SQL3 instances
with a maximum memory setting of 1024 MB.
The main tasks for this exercise are as follows:
1. Configure CPU Affinity
2. Configure Maximum Server Memory
3. Test CPU Affinity and Memory Settings
Configure CPU affinity for the MIA-SQL, MIA-SQL\SQL2, and MI-SQL\SQL3 SQL Server instances as
shown in the table below:
Instance        Use
MIA-SQL         CPU0, CPU1
MIA-SQL\SQL2    CPU0
MIA-SQL\SQL3    CPU1
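One way to apply these affinity settings is the ALTER SERVER CONFIGURATION statement, run against each instance in turn. The sketch below assumes the two-core host described in the exercise scenario:

```sql
-- Run on MIA-SQL, which should use both cores:
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 1;

-- Run on MIA-SQL\SQL2, which should use only CPU0:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0;

-- Run on MIA-SQL\SQL3, which should use only CPU1:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 1;
```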
Use the sp_configure stored procedure to configure the max server memory setting to 1024 MB
for the MIA-SQL\SQL2 and MIA-SQL\SQL3 instances.
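The memory step can be sketched as follows. Because max server memory (MB) is an advanced option, it must be exposed with show advanced options first; run the batch against each of the two named instances:

```sql
-- Expose advanced configuration options.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the instance's buffer pool at 1024 MB.
EXEC sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;
```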
In the Performance Monitor tool in Computer Management, remove the existing counters and add
the appropriate counters to monitor the % Processor Time for each CPU core, and to monitor the
Total Server Memory for each SQL Server instance.
Close all three console windows, close Computer Management, close SQL Server Management Studio,
and then in the D:\Labfiles\Lab04\Starter folder, run CleanUp.cmd as Administrator.
Results: After completing this exercise, you will have configured processor affinity and memory settings.
Question: Did you agree that the consolidation solution presented in the lab was the most
appropriate one for the scenario? Would you have done anything differently, and if so, what?
Review Question(s)
Question: Have you been involved in planning or implementing a consolidation initiative, or
in managing SQL Server in a consolidated environment? If so, did the consolidation initiative
succeed in delivering the intended benefits? How has consolidation affected administration?
If you have not been involved in consolidation in any way, how do you think your
organization might benefit from consolidating its SQL Server infrastructure?
Module 5
Introduction to Cloud Data Solutions
Contents:
Module Overview
5-1
5-2
5-6
5-10
5-13
Module Overview
Cloud computing has risen to prominence very rapidly within the world of IT. Many organizations have
implemented, or are planning to implement, cloud-based solutions that encompass all or part of their
infrastructure. This module describes some of the fundamental concepts of cloud computing and the
benefits it brings, before focusing on how to build a private cloud infrastructure for SQL Server.
Objectives
After completing this module, you will be able to:
Explain the fundamental concepts behind cloud computing, and describe the technologies that
underpin Microsoft cloud solutions.
Describe how to provide SQL Server-based data services in a private cloud infrastructure.
Lesson 1
This lesson provides an overview of cloud computing, as well as the Microsoft technologies and services
that are available to organizations. The lesson is not explicitly about SQL Server 2014, but provides
essential background information about Microsoft cloud technologies that can support SQL Server
services, such as Windows Server, System Center, and Microsoft Azure.
Lesson Objectives
After completing this lesson, you will be able to:
Explain the meaning of the terms public cloud, private cloud, and hybrid cloud.
Describe the Microsoft technologies that support on-premises and hosted, cloud-based
infrastructures.
Regardless of the specific technologies that organizations use to implement cloud computing solutions,
the National Institute of Standards and Technology (NIST) has identified that they exhibit the following five
characteristics:
On-demand self-service. Cloud services are generally provisioned as they are required, and need
minimal infrastructure configuration by the consumer. This enables users of cloud services to quickly
set up the resources they want, typically without having to involve IT specialists.
Broad network access. Cloud services are generally accessed over a network connection, usually
either a corporate network or the Internet.
Resource pooling. Cloud services use a pool of hardware resources that are shared across
consumers. A hardware pool consists of hardware from multiple servers that are arranged as a single
logical entity.
Rapid elasticity. Cloud services scale dynamically to obtain additional resources from the pool as
workloads intensify, and release resources automatically when they are no longer needed.
Measured service. Cloud services generally include some sort of metering capability, making it
possible to track relative resource usage by the users of the services, who are generally referred to as
subscribers.
Cloud Services
Cloud services generally fall into one of the
following three categories:
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
PaaS offerings consist of cloud-based services that provide resources on which developers can build their
own solutions. Typically, PaaS encapsulates fundamental operating system (OS) capabilities, including
storage and compute, as well as functional services for custom applications. Usually, PaaS offerings
provide application programming interfaces (APIs), as well as configuration and management user
interfaces. Azure provides PaaS services that simplify the creation of solutions such as web and mobile
applications. PaaS enables developers and organizations to create highly-scalable custom applications
without having to provision and maintain hardware and OS resources.
Infrastructure as a Service
IaaS offerings provide virtualized server and network infrastructure components that can be easily
provisioned and decommissioned as required. Typically, IaaS facilities are managed in a similar way to on-premises infrastructure, and provide an easy migration path for moving existing applications to the cloud.
A key point to note is that an infrastructure service may be a single IT resource, such as a virtual server
with a default installation of Windows Server 2012 R2 and SQL Server 2014, or it might be a completely
pre-configured infrastructure environment for a specific application or business process. For example, a
retail organization might empower departments to provision their own database servers to use as data
stores for custom applications. Alternatively, it might define a set of virtual machine and network
templates that can be provisioned as a single unit to implement a complete, pre-configured infrastructure
solution for a branch or store, including all the required applications and settings.
Types of Cloud
Cloud services can be provided in a public cloud, a
private cloud, or in a combined hybrid cloud
environment.
Public Cloud
Public cloud services are hosted in external data
centers that are managed by a cloud provider. In
some cases, you can consume public cloud services
directly from the cloud source, while in other
circumstances, cloud services may be accessed
through a third-party Internet service supplier. In
either example, the services are hosted on servers
that are external to your organization. Usually, you
share the data center with other customers in a multi-tenant solution (though typically a degree of
isolation is provided to ensure security and confidentiality).
In most public cloud solutions, you subscribe to one or more services and only pay for what you use.
Private Cloud
Private cloud services are hosted on an organization's own infrastructure, which is abstracted by using
virtualization technology. The organization's IT department manages the data center as a shared pool of
server and network resources that can be used for applications and business processes on an on-demand
basis. Business units and individuals in the organization can provision and consume services in the private
cloud in a similar way to public cloud services.
Hybrid Cloud
In many organizations, some services are provided through both public and private cloud platforms, and
often these need to be integrated. This can provide a single, consistent experience for users, in which
the actual location of the services being consumed is not a factor that they need to consider. Hybrid cloud
solutions enable organizations to migrate IT infrastructure to public cloud services at their own pace,
retaining full on-premises control of IT resources for applications and data that contain sensitive information or
that have restrictive compliance requirements. Hybrid cloud environments also enable infrastructure and
application architectures that take advantage of public cloud solutions for backup and high availability of
on-premises applications.
Microsoft Azure. Microsoft Azure is a complete cloud platform that offers PaaS and IaaS services.
Enterprises can build their own applications on Azure services, and can provision and manage virtual
machines that are hosted in Azure data centers.
Hyper-V. Hyper-V is Microsoft's virtualization technology, providing the foundation for its public
and private cloud platform. Azure is based on Hyper-V, and enterprises can use the same
virtualization capabilities to host their own private cloud services.
Windows Server 2012 R2. Windows Server 2012 R2 is the latest version of Windows Server. It
includes scalability, security, and resource management features that make it ideal for cloud
scenarios. Windows Server 2012 R2 can be used as both a Hyper-V host that supports virtual
machines, and as a guest OS in a virtual machine.
System Center 2012 R2. System Center is a suite of products that enables enterprises to provision
and manage private and public cloud services, and to provide self-service infrastructure provisioning
and metered chargeback capabilities for enterprise IT services.
The Azure Pack. The Azure Pack builds on Windows Server 2012 R2 and System Center 2012 R2 to
provide Azure PaaS services in a private cloud. The consistent portal interface makes the provisioning
and management of cloud services reliable, whether using the Azure public cloud or a Windows
Server and System Center-based private cloud.
Lesson 2
When most people consider cloud solutions, they think first about public cloud services such as Microsoft
Office 365 or public cloud platforms like Azure. However, in enterprise organizations, many IT
departments are taking advantage of the benefits provided by cloud-based approaches to
infrastructure, but hosting those cloud technologies on the organization's own, on-premises servers.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the key features of a private cloud built on Windows Server technologies.
Describe how to prepare a virtual machine template for a SQL Server database server.
Flexibility of scale. Many enterprise workloads have a variable level of demand, meaning that, in a
traditional IT infrastructure, the servers supporting the workload are configured with the resources
required to support peak workloads, even during quieter periods. In a private cloud scenario, servers
can be configured to consume on-demand resources, and release them back to the pool when no
longer required.
Simplified management and provisioning. Because most servers and network components in a
private cloud environment are virtual, they can be managed from a single interface with less
requirement to deal with physical hardware failures than would be present in a traditional, physical
infrastructure. Additionally, provisioning new servers is a much simpler process, especially when the
organization has defined standard server configurations as virtual machine templates that can be
used to create new virtual machines.
Greater resiliency. Because there is less physical hardware than in a traditional environment, there is
less chance of physical server failure. Additionally, the physical servers used to host the virtual
machines can be clustered. Should a host need to be taken offline for scheduled maintenance, its
virtual machines can be moved from one server to another, with no downtime, using a process called
live migration. When combined with services in Azure, live migration can be used to fail over an
entire datacenter from one geographic location to another by moving virtual machines from hosts in
one site to hosts in another.
Reduced overall cost. The key benefit for most organizations considering a move to a private cloud
is the cost savings it can bring. In addition to the reduced capital expenditure on server hardware,
private cloud solutions tend to require less physical space, power, and cooling than the equivalent
physical infrastructure. Also, the cost of managing a private cloud is generally lower than that of a
traditional environment. Additionally, software licensing agreements for Microsoft technologies make
virtualization an attractive proposition: they allow multiple installations of enterprise products on a
single physical server, and licenses can be moved from one server to another at no cost should virtual
machines need to be migrated.
Windows Server Failover Clustering to provide high availability for Hyper-V hosts.
Windows Server Storage Spaces and Storage Pools to provide resilient, high-performance storage of
virtual machine files on commodity disks.
System Center App Controller to integrate management of multiple clouds, including private clouds
based on Windows Server, as well as public clouds in Azure.
System Center Orchestrator to automate workflows used to provision and manage private cloud
services.
System Center Configuration Manager and Operations Manager to automate the management of
virtual servers and applications in the private cloud.
The P2V migration wizard automates the process of migrating an existing physical server to a virtual
machine by creating a virtual copy of the source computer. However, you should be careful to fully
evaluate the physical server configuration before starting the migration, to ensure that it is in a fit state to
be migrated successfully.
The Microsoft Assessment and Planning (MAP) Toolkit includes the ability to assess physical servers for
their readiness to be migrated to virtual machines in a private cloud, and can generate reports and
recommendations to help you plan your migration.
When the organization opens a new store, the business group responsible can use a System Center portal
to request a new instance of the inventory management system infrastructure. When the request is
approved by IT, System Center automatically creates the necessary virtual machines and network from the
templates, ensuring that all utilization of these resources is charged to the cost center associated with the
new store.
System Center Virtual Machine Manager enables you to define templates for virtual machines, and include
them in services that can be requested by business users. Templates are essentially virtual hard disks
(VHDs) containing the required software. In the case of a Windows server, the template contains an
installation of Windows that has been sysprepped to remove any individual server identity. When the
template is used to provision a new server, the installation is completed using settings provided in a
configuration file or generated automatically, based on properties set in the System Center Virtual
Machine Manager resource library.
SQL Server Setup supports a technique that is commonly known as SQL Server Sysprep. With this
technique, you can use the SQL Server Setup wizard to install a prepared instance of SQL Server that
includes all the required SQL Server program files, but which does not contain any server-specific settings
such as an instance name or administrator password. When a prepared instance has been installed, you
can complete installation later, either by running SQL Server Setup interactively or by using a
configuration file to run SQL Server Setup from the command line. This enables you to create a simple
script that will complete the installation of a previously prepared instance.
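As a rough illustration, both phases can be driven from the Setup command line with the PrepareImage and CompleteImage actions. The parameter values below are illustrative assumptions, not values taken from this course:

```
REM Prepare an instance: installs program files only, with no instance-specific settings.
Setup.exe /QS /ACTION=PrepareImage /FEATURES=SQLEngine /INSTANCEID=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS

REM Later, complete the prepared instance by supplying the server-specific settings.
Setup.exe /QS /ACTION=CompleteImage /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /IACCEPTSQLSERVERLICENSETERMS
```

In practice the CompleteImage parameters would usually come from a configuration file, as the lab in this module demonstrates.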
System Center Virtual Machine Manager enables you to create a virtual machine template that includes a
prepared instance of SQL Server, and to specify a configuration file that will be used to complete the
installation when the template is used to provision a new server. Additionally, you can specify a Transact-SQL
script file to be run after the setup is complete, enabling you to automate the creation of application
databases or perform post-installation configuration tasks.
You are working with a System Center administrator to create a template for a SQL Server virtual machine
in a private cloud. The template must include a prepared instance of SQL Server, and a configuration file
that can be used to complete installation when the template is used to provision a new server.
Objectives
After completing this lab, you will be able to:
A System Center expert is creating a virtual machine template, and has already installed Windows Server.
You must add a prepared instance of SQL Server to the virtual machine.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Install a Prepared Instance of SQL Server
Start the 20465C-SQL-VM-Template virtual machine, and then log on as Administrator with the
password Pa$$w0rd.
Run the SQL Server Setup program in C:\SQLServer2014-x64-ENU, and use the Advanced option to
prepare an image of a stand-alone instance of SQL Server.
Results: At the end of this exercise, you will have installed a prepared instance of SQL Server.
You have installed a prepared image in the Virtual Machine template. Now you must generate and test
the configuration file that will be used to complete installation when the template is used to provision a
new server.
The main tasks for this exercise are as follows:
1. Create a Configuration File
2. Test the Configuration File
Run the SQL Server Setup program, and select the Advanced option to complete an image of a
stand-alone instance of SQL Server.
2. Use the following settings to generate a ConfigurationFile.ini file for the installation, but do not
proceed past the Ready to Complete Image page of the wizard:
3. On the Ready to Complete Image page, note the Configuration file path, which indicates the
location where the ConfigurationFile.ini file has been generated, and then cancel the installation
and close the SQL Server Installation Center.
4. Add a semi-colon character (;) in front of the UIMODE="Normal" statement so that it resembles
the following:
;UIMODE="Normal"
2. Open a command prompt and enter the following command to run SQL Server Setup from the
command line with the configuration file generated by the wizard:
C:\SQLServer2014-x64-ENU\Setup.exe /ConfigurationFile=C:\ConfigurationFile.ini /IAcceptSQLServerLicenseTerms
3. When setup is complete, use SQL Server Configuration Manager to verify that a default instance of
SQL Server (MSSQLSERVER) has been installed and is running.
Results: After completing this exercise, you will have a configuration file that completes the installation of
the prepared SQL Server instance.
In this module, you learned about the principles and benefits of cloud computing, and how SQL Server
database servers can be used in a private cloud, based on Windows Server Hyper-V and System Center
technologies.
Review Question(s)
Question: What benefits would a private cloud infrastructure bring to organizations that you
have worked with?
Module 6
Introduction to Microsoft Azure
Contents:
Module Overview
6-1
6-2
6-8
6-13
6-19
6-23
Module Overview
Azure is a cloud platform from Microsoft that provides Platform as a Service (PaaS) and Infrastructure as a
Service (IaaS) solutions for enterprises. This module provides an overview of Azure, and describes how you
can use Azure storage with an on-premises SQL Server instance.
Objectives
After completing this module, you will be able to:
Use Azure storage for SQL Server database files and backups.
Lesson 1
Azure Overview
Azure provides a platform for developers and IT professionals to create enterprise applications and
infrastructure solutions. This lesson explains how to get started with Azure, and describes some of the
services it offers.
Lesson Objectives
After completing this lesson, you will be able to:
Introduction to Azure
Azure provides a comprehensive set of services for
building cloud-based applications and
infrastructure solutions. With Azure, developers can
create applications and services that can be
consumed through web browsers and client apps
on computers, tablets, smartphones, and other
devices. IT administrators can use Azure to create
cloud infrastructure solutions that include networks,
virtual machines, Active Directory services, and
other infrastructure components that support
applications and business operations.
App Services
Azure includes the following app services:
Media Services. Azure Media Services enable developers to create applications that provide live and
on-demand streaming media, with capabilities for converting media formats, encoding content, and
other media-related functionality.
Active Directory. Azure Active Directory provides Active Directory-based authentication and identity
management for custom cloud-based applications built on Azure. You can synchronize Azure Active
Directory with your corporate Active Directory to provide a single sign-on solution for cloud services.
Multi-Factor Authentication. Azure Multi-Factor Authentication enables you to enhance security for
cloud and on-premises applications by including identity verification checks, for example by email
message, phone call, or text message.
Service Bus. The Azure Service Bus provides a message queuing solution for applications that need a
scalable and reliable message-based communication architecture.
Notification Hubs. Notification Hubs provide a solution for push notifications that applications can
broadcast to subscribers.
BizTalk Services. Azure BizTalk Services enables enterprise application integration (EAI) for business-to-business (B2B) scenarios.
Compute Services
Compute services in Azure include:
Cloud Services. Azure Cloud Services provide the foundation for cloud applications and services,
enabling you to deploy and manage custom services easily.
Web Sites. Azure Web Sites provides a scalable cloud infrastructure for web applications.
Mobile Services. Azure Mobile Services enables you to build cloud-based applications for mobile
devices.
Virtual Machines. Azure Virtual Machines provides cloud-based hosting for Hyper-V virtual
machine images, which you can create from a gallery of built-in images or upload.
Data Services
Azure provides the following data services:
Storage. Azure Storage includes tables, for key-value pair and other NoSQL formats, and a blob store
in which you can create a hierarchy of containers for binary large object (blob) files.
SQL Database. Azure SQL Database is a PaaS database server service based on SQL Server.
Backup. Azure Backup enables you to use backup functionality in Windows Server and System
Center to perform cloud-based backups.
Hyper-V Recovery Manager. Hyper-V Recovery Manager provides cloud-based coordination and
management of virtual machine replication and failover between private cloud data centers.
HDInsight. Azure HDInsight provides a Hadoop cluster service for big data analysis.
Network Services
Azure uses the following services to support networking:
Virtual Network. Azure Virtual Network provides a virtual private network (VPN) solution that you
can use to create virtual networks for Azure-based services and connectivity to your on-premises
networks.
Traffic Manager. Azure Traffic Manager provides network load balancing for Azure services.
In addition to the services described above, Azure provides a marketplace where you can buy and sell
application services and data sets.
Note: New services are added to Azure on a frequent basis.
Azure Subscriptions
You must create an Azure subscription before you
can use any of the services.
Azure Accounts
Azure subscriptions are associated with either a
Microsoft account, or an organizational account.
Microsoft accounts are associated with an
individual, and are the most common way for
people to get started with Azure. Organizational
accounts are associated with a business or
organization that has signed up for Azure, and are
managed by administrators.
Azure Subscriptions
After you have signed in with either a Microsoft or organizational account, you can create one or more
Azure subscriptions. Initially, most people start with a free trial subscription, which provides full access to
all Azure services for a limited period. When you are ready to start using Azure for production solutions,
you can choose from a range of subscription plans, including:
Pay-as-you-go. With this subscription plan, you pay no upfront fee and can cancel at any time. You
pay for the services you use on a metered basis, which varies by service.
Pay-monthly. You can sign up to a pay-monthly plan for six or 12 months. With these plans, the per-unit cost of Azure services is lower than on the pay-as-you-go plan, so typically this approach works
out less expensive if you know you are going to continue using Azure services for the entire plan
period.
Pre-Pay. As an alternative to the pay-monthly plans, you can pre-pay for six or 12 months' use of
Azure. This approach commits you to the plan period and includes a monthly quota of service
utilization at a lower per-unit price than the equivalent pay-monthly plan.
Azure Storage
Azure Storage enables applications to store data in
tables or blobs, or to read and write data to queues
for asynchronous workflows. Many Azure services
have a dependency on Azure Storage.
To use Azure Storage, you must create an Azure
Storage account in your Azure subscription. The
Azure Storage account name forms part of the URL
used to access the data you have stored.
All Azure Storage services include data redundancy
across three replicas within the Azure data center
where the storage account is created. By default, all
Azure storage is also geo-replicated across multiple,
geographically-distributed data centers.
Azure Tables provide a storage mechanism for structured and semi-structured data. Although the name
table can give the impression that Azure Tables are a relational form of data storage, they have no fixed
schema and are used to store entities that are defined as a collection of name-value pair properties. All
entities have PartitionKey, RowKey, and Timestamp properties, which are used by Azure Storage to
identify individual rows. In addition, each entity can have up to 252 custom properties. Note that entities
in the same table do not need to have the same properties. For example, a table named Products might
contain multiple product entities, each with its own set of properties, as shown in the following table:
Products
PartitionKey = 1
RowKey = 1
Timestamp = 01012012
ProductName = Mountain 2012
Price = 1099.00
Size = 42
PartitionKey = 1
RowKey = 2
Timestamp = 01012012
ProductName = Cycling Helmet
Price = 1099.00
Color = Red
The URL for a table is in the form <Storage_Account_Name>.table.core.windows.net.
Azure Blob Storage provides a storage solution for unstructured binary files. When you create an Azure
Storage account, you also receive a Blob Storage host at <Storage_Account_Name>.blob.core.windows.net.
You can create a hierarchy of containers in this location in a similar way to the use of folders to organize
files in a file system.
Blob storage containers provide a storage location for many applications and infrastructure services. For
example, Azure virtual machines use an Azure Storage blob container as a file system for the storage of
virtual hard disk files.
When you create an Azure Storage account, it is assigned two 512-bit access keys that applications can
use to authenticate with the service. To access data in Azure, applications can specify the URL for the data
they need and use the access key to be authenticated.
In some scenarios, basic authentication with an access key does not provide sufficient security, so Azure
storage also supports an authentication mechanism called shared access signatures (SAS). To use SAS, you
must create a shared access policy on the storage location to which you want to grant access, and use the
Azure REST APIs, the Azure SDK, or a third-party utility to generate a shared access signature. You can
then use the shared access signature to access specific Azure storage items.
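For example, using the Azure PowerShell storage cmdlets, a stored access policy and a SAS token for a container might be generated as follows. The account name, key, container name, and policy name are placeholders:

```powershell
# Build a storage context from the account name and one of its access keys (placeholders).
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<access key>"

# Create a stored access policy on the container, then generate a SAS token that references it.
New-AzureStorageContainerStoredAccessPolicy -Container "backups" -Policy "backup-policy" `
    -Permission rwl -ExpiryTime (Get-Date).AddMonths(1) -Context $ctx
New-AzureStorageContainerSASToken -Name "backups" -Policy "backup-policy" -Context $ctx
```

Tying the SAS token to a stored access policy, rather than embedding the permissions and expiry in the token itself, makes it possible to revoke the signature later by deleting or changing the policy.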
Demonstration Steps
Open the Azure Management Portal
1. Ensure that the MSL-TMG1, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and
then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
3. Sign in using the Microsoft account that is associated with your Azure subscription.
3. Use the Quick Create option to create a storage account with a unique name in any available region.
Note that you can create Azure services in specific regional data centers.
4. View the running operations pane at the bottom of the page until it disappears, and note that the
animated icon indicates that an operation is still in progress.
5. Wait for the status of the new storage account to indicate that it is online. Then click the operations
icon at the bottom of the screen to verify that the operation has completed.
7. With the new storage account selected, at the bottom of the portal, click Manage Access Keys.
8. Note that you can copy access keys to the clipboard from this dialog box. Then click OK.
9. Click the arrow next to the storage account name, and note that each service has a set of pages that
enable you to manage it.
10. View the available links on the home page for the storage account, and then click the Containers
page.
11. On the Containers page, click Create a Container. Then enter the name backups, select the Private
access level, and click OK.
12. When a notification that the operation has completed is displayed at the bottom of the page, click
OK.
13. Click the arrow at the top of the page to return to the top-level Storage page in the Azure
Management Portal.
14. Keep Internet Explorer open for the next demonstration.
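The container created through the portal in this demonstration could equally be created from Azure PowerShell. The sketch below uses a placeholder account name and key; -Permission Off makes the container private:

```powershell
# Build a storage context, then create a private container named "backups".
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<access key>"
New-AzureStorageContainer -Name "backups" -Permission Off -Context $ctx
```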
Lesson 2
Azure PowerShell
In addition to managing an organization's Azure infrastructure by using the Azure Management Portal,
you can also manage it by using Azure PowerShell. PowerShell provides a single management tool for all
your on-premises and Azure infrastructure, and helps to standardize management across the enterprise.
Lesson Objectives
After completing this lesson, you will be able to:
You can download Azure PowerShell by using the Web Platform Installer, which is available from the
Azure downloads page at azure.microsoft.com.
Before you can run local PowerShell commands that manipulate your Azure services, you must associate
PowerShell on the local computer with your Azure subscription.
The certificate-based option imports a certificate that is valid for a year, and makes it easy to manage
Azure services from PowerShell for long periods of time without the need to re-authenticate. However, it
can be difficult to manage certificates when multiple users will access the same subscription from different
computers.
Azure AD authenticates individual users based on their Microsoft account credentials, and grants access
for a 12-hour period. This approach makes it easier for multiple users to access the subscription, since no
certificate must be downloaded and managed, but can disrupt long-running operations because of the
need to re-authenticate periodically.
Using a Certificate
To use certificate-based authentication from PowerShell to Azure, download and import the
publishsettings file from your Azure subscription. The publishsettings file includes the credentials
necessary to authenticate you when connecting to Azure. To import this file into an Azure PowerShell
session, run the Get-AzurePublishSettingsFile PowerShell cmdlet. If you do not already have an open
browser session to your Azure account, PowerShell will launch one, prompt you to sign in using your
Microsoft account credentials, and then initiate the download of a certificate file. Save the file in a secure
location on your computer, and then use the Import-AzurePublishSettingsFile cmdlet to import the certificate
into PowerShell.
The following example shows how to download and import the publishsettings certificate file:
Importing an Azure certificate
Get-AzurePublishSettingsFile
# PowerShell opens a browser and prompts you to download the .publishsettings file.
Import-AzurePublishSettingsFile "C:\Downloads\Azure-1-1-2013credentials.publishsettings"
Note: The publishsettings file contains sensitive information. After importing it into your
PowerShell environment, you should delete the file or store it securely.
Using Azure AD
As an alternative to downloading and importing the publishsettings file, you can use the
Add-AzureAccount cmdlet to connect to your Azure account. When you run the Add-AzureAccount cmdlet,
the Azure PowerShell environment opens a webpage where you can sign in using the Microsoft account
associated with your Azure subscription.
The following example shows how to use Azure AD to authenticate with Azure from PowerShell:
Using Azure AD from PowerShell
Add-AzureAccount
Get-AzureAccount. This cmdlet lists the Azure accounts you have added to PowerShell using
Add-AzureAccount.
Get-AzureSubscription. This cmdlet lists the Azure subscriptions that are associated with your
PowerShell environment.
Remove-AzureAccount. Use this cmdlet to remove an Azure account that is no longer valid or
required.
Remove-AzureSubscription. Use this cmdlet to remove an Azure subscription that is no longer valid
or required.
When you connect a PowerShell environment to Azure, you can include multiple accounts and
subscriptions. One subscription is designated as the default, and is used when a subscription is not
explicitly specified when running an Azure cmdlet. Most cmdlets include a subscription parameter that
enables you to specify the subscription in which the cmdlet should be run. You can also use the
Select-AzureSubscription cmdlet to control the subscription context of your PowerShell scripts.
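Putting these cmdlets together, the following sketch lists the accounts and subscriptions associated with the current PowerShell environment and then switches the subscription context. The subscription name shown is a placeholder; substitute one of your own subscriptions.

```powershell
# List the Azure accounts added to PowerShell with Add-AzureAccount.
Get-AzureAccount

# List the Azure subscriptions associated with this PowerShell environment.
Get-AzureSubscription

# Switch the subscription context so that subsequent Azure cmdlets run
# against the specified subscription. "My Subscription" is a placeholder.
Select-AzureSubscription -SubscriptionName "My Subscription"
```

Because the subscription context persists for the session, setting it once at the start of a script avoids having to pass a subscription parameter to every cmdlet.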
After creating an account, you should configure a subscription to use it. You can do this by using the
Set-AzureSubscription cmdlet.
The following code example configures the Azure subscription called My Subscription to use the storage
account named my_store:
Set-AzureSubscription
Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName my_store
After creating a container, you can set permissions on it. To do this, you should first create a shared access
signature (SAS) token. Applications can connect to the storage by using the SAS token, and they will be
able to perform actions based on the token's permissions. You can obtain an SAS token by using the
New-AzureStorageContainerSASToken cmdlet. You must specify the name of the container, the
permissions that the token will have (for example, rw configures read and write permission), and its expiry
time.
The following code example creates a new container SAS token for the blob_container container. It also
configures read and write permission for the token on the container, and specifies an expiry time for the
token.
New-AzureStorageContainerSASToken
New-AzureStorageContainerSASToken -Name blob_container -Permission rw -ExpiryTime 03-02-2014
Demonstration Steps
Connect PowerShell to an Azure Account
1.
Ensure that you have completed the previous demonstration in this module.
2.
On the task bar, right-click Windows PowerShell and click Windows PowerShell ISE.
3.
If the Commands pane is not visible, on the View menu, click Show Command Add-on.
4.
In the Commands pane, in the Modules list, select Azure. The pane lists the PowerShell cmdlets in
the Azure library.
5.
6.
7.
In the PowerShell ISE, at the command prompt, enter the following command:
Import-AzurePublishSettingsFile "D:\Demofiles\Mod06\credentials.publishsettings"
8.
Enter the following command to verify that your subscription has been added:
Get-AzureSubscription
2.
Enter the following command to verify that the CurrentStorageAccountName property for the
subscription has been set:
Get-AzureSubscription
In the PowerShell ISE, if the Script pane is not visible, on the View menu, click Show Script Pane.
2.
In the script pane, type the following PowerShell code, which creates a container named datafiles
and generates an SAS token for it:
New-AzureStorageContainer datafiles
New-AzureStorageContainerSASToken -Name datafiles -Permission rw -ExpiryTime 01-01-2019
3.
4.
Make a note of the returned value at the command prompt, which is an SAS token for the datafiles
container created by the script (ignore the question mark at the start).
5.
Minimize the PowerShell ISE window; you will use it again in a later demonstration.
6.
In Internet Explorer, click the arrow next to your storage account name and then click the Containers
tab to verify that the datafiles container has been created.
7.
Lesson 3
Lesson Objectives
After completing this lesson, you will be able to:
Back up a database to Azure.
Database backups are stored off-site, typically in a geographical location that is remote from the
customer site. This enables organizations to use backups in Azure storage to re-establish database
availability in the event of a disaster affecting their production site.
Backups in Azure Blob storage are protected by the built-in resiliency of Azure Storage, which
replicates the backups to multiple additional locations. By using geo-replication for backup data,
organizations can protect themselves from data loss caused by disaster scenarios such as earthquakes
and floods, which can affect large geographical areas.
Azure Blob storage enables organizations to archive backups for as long as required without worrying
about consuming limited storage space. In addition, archiving backups in Azure Blob storage does
not require the physical transportation of backup media to and from storage facilities, which would
be the case with backup tapes, for example. This can save money, and is safer and more secure.
You can restore backups in Azure Blob storage to either SQL Server instances running in Azure Virtual
Machines, or to on-premises instances of SQL Server.
Azure Blob storage stores database backup files as blobs. To use the Azure Blob storage service for SQL
Server backups, you first need to provision an Azure storage account and create a Blob Storage container.
When performing the backup operation, you must specify the Azure storage location by using the TO URL
clause in the Transact-SQL BACKUP statement, providing the URL of the destination blob. You
must also specify a credential to authenticate to the Azure Blob storage service. A credential is an object
that contains the name of the Azure Storage account and a secret key value associated with it.
Use the following steps to back up a database:
1.
Create a credential, specifying the Azure Storage account name as the identity, and either of the
secret keys associated with the Azure Storage account as the secret. The following code example
shows how to use the Transact-SQL CREATE CREDENTIAL statement to create a credential for an
Azure Storage account named AzStoreAct:
CREATE CREDENTIAL AzureStore
WITH
IDENTITY = 'AzStoreAct',
SECRET = 'XXXXXXXXXX-Access Key-XXXXXXXXXX';
2.
Back up the database to the URL for the Azure Storage container where you want to store the
backup, specifying the credential you created in step 1. The following code example shows how to
use the Transact-SQL BACKUP DATABASE statement to back up the MyDB database to a container
named backups in the AzStoreAct Azure Storage account:
BACKUP DATABASE MyDB
TO URL = 'https://AzStoreAct.blob.core.windows.net/backups/mydb.bak'
WITH CREDENTIAL = 'AzureStore';
Demonstration Steps
Create a Credential from a Storage Account Key
1.
Ensure that you have completed the previous demonstrations in this module.
2.
In the D:\Demofiles\Mod06 folder, run Setup.cmd as Administrator (this creates a database for the
demonstration).
3.
In Internet Explorer, on the Dashboard page for your storage account, click Manage Access Keys.
4.
Click the copy icon next to the primary access key to copy it to the clipboard. If prompted, click Allow
Access. Then click OK.
5.
Start SQL Server Management Studio. When prompted, connect to the MIA-SQL instance of the
database engine using Windows authentication.
6.
7.
In the query pane, which contains the code below, replace Storage-Account-Name with the name of
your Azure storage account, and replace XXXXX-access-key-XXXXX with the access key you copied to
the clipboard.
USE [master]
GO
CREATE CREDENTIAL AzureStore
WITH IDENTITY = 'Storage-Account-Name',
SECRET = 'XXXXX-access-key-XXXXX';
GO
8.
In SQL Server Management Studio, open Back Up Database.sql in the D:\Demofiles\Mod06 folder.
2.
In the query pane, which contains the code below, replace Storage-Account-Name with the unique
name you specified when creating your Azure storage account.
USE [master]
GO
BACKUP DATABASE DemoDB
TO URL = 'http://Storage-Account-Name.blob.core.windows.net/backups/DemoDB.bak'
WITH CREDENTIAL = 'AzureStore';
GO
3.
4.
In Object Explorer, in the Connect drop-down list, click Azure Storage. Then enter your storage
account name, paste the access key you copied to the clipboard previously, and click Connect.
5.
Expand Containers and backups in your storage account, and verify that a file named DemoDB.bak
has been created.
1.
2.
Right-click the DemoDB database and click Delete. Then, in the Delete Object dialog box, select
Close existing connections and click OK.
3.
4.
In the query pane, which contains the following code, replace Storage-Account-Name with the name
of your Azure storage account.
USE [master]
GO
RESTORE DATABASE DemoDB
FROM URL = 'http://Storage-Account-Name.blob.core.windows.net/backups/DemoDB.bak'
WITH CREDENTIAL = 'AzureStore';
GO
5.
Click Execute to restore the database, and wait for the restore operation to complete.
6.
In Object Explorer, right-click the Databases folder and click Refresh to verify that the DemoDB
database has been restored.
7.
Keep SQL Server Management Studio open for the next demonstration.
Scalability. Storing data files in Azure storage enables you to capitalize on its massive scalability while
taking full advantage of your investments in on-premises, high-performance servers.
High availability and disaster recovery. If an on-premises SQL Server instance fails, you can quickly
and easily configure a replacement by attaching the data files to a new instance; there is no need to
move the data at all. Also, the files are protected by the built-in redundancy of Azure.
Security. By using Transparent Database Encryption (TDE), you can ensure that the data in database
files in Azure Blob Storage is highly secure. This is because TDE only decrypts data on the compute
node; in other words, only on the on-premises SQL Server instance. Furthermore, encryption keys are
stored in the master database, which is stored locally in an on-premises instance. All backups of the
master can also be taken and stored locally, so the keys need never leave the organization's premises.
By using TDE in this way, the data remains secure even if the Azure account logon details are
compromised.
Overview of configuration
When configuring the storage of SQL Server data files in Azure, you should consider the following points:
Azure uses Blob storage to store SQL Server 2014 database files. You must create a
container for the blobs that will store the database files; the recommended access setting for the
container is Private.
You must create a shared access policy on the container that includes a shared access signature (SAS).
The SAS acts as a key that enables restricted access to blobs, for example by granting read or write
permission. By creating an SAS to enable access to the storage, you avoid the need to share your
Azure storage account key, which is more secure. To create an SAS, you can use Azure PowerShell,
Azure REST APIs, the Azure SDK, or third-party tools.
You must create a credential to enable authentication to the container that will host the data file
blobs. The credential name must match the URL of the container. You must specify
'SHARED ACCESS SIGNATURE' in the WITH IDENTITY clause, and the SAS token value in the SECRET clause.
For example:
CREATE CREDENTIAL [https://storename.blob.core.windows.net/data]
WITH IDENTITY='SHARED ACCESS SIGNATURE', SECRET = 'SAS key value'
You can then create a database by specifying the file locations as URLs, for example:
CREATE DATABASE mydb
ON
( NAME = mydb_dat, FILENAME =
'https://storename.blob.core.windows.net/data/mydb_data.mdf' )
LOG ON
( NAME = mydb_log, FILENAME =
'https://storename.blob.core.windows.net/data/mydb_Log.ldf')
A storage account can store up to 100 TB of data in as many containers as you like. Each data file is
stored as a blob in a container, however, and each blob can be a maximum of 1 TB in size.
You cannot store database files in Azure for databases that include FILESTREAM data. For databases
that include FILESTREAM data, you should store the database files on-premises.
You cannot store data files in Azure Blobs for in-memory OLTP databases because this technology
relies on FILESTREAM.
When you store SQL Server data files in Azure, you must disable geo-replication to avoid a risk of
data corruption when restoring these files.
Reference Links: You will see how to create a database that uses Azure storage for its data
files in the next lesson, Azure PowerShell.
Demonstration Steps
Create a Credential from an SAS Token
1.
Ensure that you have completed the previous demonstrations in this module.
2.
Maximize the PowerShell ISE window, which should be open from a previous demonstration, and
copy the SAS token that was returned by the script you ran previously to the clipboard (do not
include the ? character at the beginning).
3.
In SQL Server Management Studio, open SAS Credential.sql in the D:\Demofiles\Mod06 folder.
4.
In the query pane, which contains the code below, replace Storage-Account-Name with the name of
your Azure storage account, and replace XXXXX-SAS-key-XXXXX with the SAS key you copied to the
clipboard. Ensure that the SAS key is pasted on a single line.
USE [master]
GO
CREATE CREDENTIAL [https://storage_account_name.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'XXXXX-SAS-key-XXXXX';
GO
5.
In SQL Server Management Studio, open Create Database.sql in the D:\Demofiles\Mod06 folder.
2.
In the query pane, which contains the code below, replace Storage-Account-Name with the name of
your Azure storage account.
USE [master]
GO
CREATE DATABASE AzureFilesDB
ON
(NAME = AzureFilesDB_data,
FILENAME =
'https://Storage_Account_Name.blob.core.windows.net/datafiles/AzureFilesDB_Data.mdf')
LOG ON
(NAME = AzureFilesDB_log,
FILENAME =
'https://Storage_Account_Name.blob.core.windows.net/datafiles/AzureFilesDB_Log.ldf')
GO
3.
Click Execute to create the database, and wait for the command to complete.
4.
In Object Explorer, right-click the Databases folder and click Refresh. Then right-click AzureFilesDB,
click Properties, click Files, note the location of the data files in Azure, and click Cancel.
5.
In Object Explorer, under your Azure storage account, expand the datafiles container and verify that
the database files are stored there.
6.
Close SQL Server Management Studio, Windows PowerShell ISE, and Internet Explorer without saving
any files.
Adventure Works has a disaster recovery plan that includes the off-site storage of backup media. While
recognizing the importance of off-site storage, executives at Adventure Works are keen to reduce the
costs associated with it. They have asked you to test the feasibility of using Azure to back up the
company's SQL Server databases.
In this lab, you will create a storage account, and then back up a SQL Server database to Azure.
Objectives
After completing this lab, you will be able to:
Before you can back up a SQL Server database to Azure, you must perform the required configuration
tasks. In this exercise, you will create a storage account and a container.
Note: The Microsoft Azure portal is continually improved, and the user interface may have been updated
since this lab was written. Your instructor will make you aware of any differences between the steps
described in the lab and the current Azure portal user interface.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Sign in to the Azure Management Portal
3. Create a Storage Account
4. Create a Container
Ensure that the MSL-TMG1, 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running, and
then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
3.
If you have not already created a Microsoft Azure trial subscription, follow the instructions in
D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
In Internet Explorer, browse to http://azure.microsoft.com, click Portal and sign in using the Microsoft
account that is associated with your Azure subscription.
2.
Explore the management portal, noting the services that you can provision and manage.
1.
Create a new storage account with a unique name. Ensure that geo-redundant replication is enabled.
2.
When the storage account is online, select it and use the Manage Access Keys icon to view the
access keys that have been generated for it. Note that you can copy a key to the clipboard from here.
Click the arrow in the Name column for your storage account, and add a container named backups
with private access.
2.
Click the arrow next to the container name to verify that it contains no blobs.
Now that you have completed the required configuration tasks, you will test Azure's backup capabilities
by performing a backup of a local SQL Server database to Azure. You will then complete the test by
restoring the database locally.
The main tasks for this exercise are as follows:
1. Create a Credential from an Access Key
2. Back up a Database to Azure
3. Restore a Database Backup from Azure
Use SQL Server Management Studio to connect to the MIA-SQL instance of SQL Server using
Windows authentication.
2.
Create a credential named AzureStore for your Azure storage account, specifying the storage
account name and one of its access keys.
Create a Transact-SQL query that uses the BACKUP DATABASE statement to back up the Products
database to the backups container in your Azure storage account.
2.
3.
In Object Explorer, connect to your Azure Storage account and verify that the backup has been
created in the backups container.
In SQL Server Management Studio, delete the Products database from the MIA-SQL instance.
2.
Create a Transact-SQL query that uses the RESTORE DATABASE statement to restore the Products
database from the same URL to which you backed it up, using the AzureStore credential.
3.
Execute the query, and then use Object Explorer in SQL Server Management Studio to verify that the
database has been restored.
Now that you have completed the testing of the backup capabilities, you will create a database that stores
its data files in Microsoft Azure.
The main tasks for this exercise are as follows:
1. Configure PowerShell
2. Create a Container and a Shared Access Signature
3. Create a SQL Server Credential
4. Create a Database
2.
Use the Get-AzureAccount cmdlet to determine if any Microsoft Azure accounts are currently
associated with PowerShell. If so, use the Remove-AzureAccount cmdlet to unregister them.
3.
Use the Get-AzureSubscription cmdlet to determine if any Microsoft Azure subscriptions are
currently associated with PowerShell. If so, use the Remove-AzureSubscription cmdlet to unregister
them.
4.
Use the Get-AzurePublishSettingsFile cmdlet to download a certificate for your subscription, and
then use the Import-AzurePublishSettingsFile cmdlet to import the certificate.
5.
Use the Get-AzureSubscription cmdlet to verify that your subscription has been added to the
PowerShell environment.
6.
Use the Set-AzureSubscription cmdlet to set the default subscription and current storage account to
your subscription and the storage account you created in the first exercise of this lab.
7.
Use the Get-AzureSubscription cmdlet to verify that the default subscription and storage account
have been set.
Use the New-AzureStorageContainer PowerShell cmdlet to create a new Microsoft Azure storage
container named datafiles.
2.
3.
Copy the shared access signature token (which starts after the question mark at the beginning) to the
clipboard.
In SQL Server Management Studio, create a credential with the following properties:
Name: [https://<your_storage_account_name>.blob.core.windows.net/datafiles].
Secret: The shared access signature token you generated in the previous task.
1.
In SQL Server Management Studio, create a database named Products for which the data and log files
are stored in the datafiles container (https://storage_account_name.blob.core.windows.net/datafiles/).
2.
View the contents of the datafiles container in Object Explorer and verify that the database files have
been created there.
In this module, you learned about Azure and how you can use Azure Storage from an on-premises SQL
Server instance.
Review Question(s)
Question: What concerns might organizations have about storing data or backups in a cloud
service?
Module 7
Microsoft Azure SQL Database
Contents:
Module Overview
Module Overview
Microsoft's cloud platform includes Microsoft Azure SQL Database, which enables you to host your
databases in the cloud without having to take on the responsibility of managing SQL Server itself, or the
operating system that supports it.
In this module, you will learn about Azure SQL Database, including how to provision it, how to implement
security, how to manage databases, and how to migrate databases to Azure SQL Database.
Objectives
After completing this module, you will be able to:
Lesson 1
Azure SQL Database is a cloud-based SQL service that provides subscribers with a highly scalable platform
for hosting their databases. By using Azure SQL Database, organizations can avoid the cost and
complexity of managing on-site SQL Server installations, and quickly set up and start using database
applications.
In this lesson, you will learn about the key features of Azure SQL Database and the differences between
Azure SQL Database and SQL Server 2014. You will also learn how to provision a database in Azure SQL
Database.
Lesson Objectives
After completing this lesson, you will be able to:
Explain the similarities and differences between Azure SQL Database and SQL Server.
From the perspective of the SQL Server query writer, SQL Database operates much like a traditional SQL
Server instance, with a few key distinctions, which will be covered later in this lesson. You can write SELECT
queries against tables and views, and invoke functions and stored procedures against databases that are
hosted in SQL Database, just as you would in SQL Server.
Beyond the relational database engine provided by SQL Database, it is necessary to understand the model
behind the Azure platform, so you can set up your own account, provision a server, and create databases.
There is a relationship between three core objects in SQL Database: the subscription, the server, and the
database:
Azure subscription. The management and billing boundary; a subscription can contain one or more SQL Database servers.
SQL Database server. A logical container, within a subscription, for one or more databases.
SQL Database. An individual database hosted on a server.
Note: Due to the relationship between subscription, server, and database, operations that
span databases or servers (for example, cross-database queries, replication, or other high
availability setups) are not supported in SQL Database. Therefore, you should carefully evaluate
on-premises applications before migrating to SQL Database.
Management responsibilities also differ between SQL Server and SQL Database in the following areas:
Server-level security account management
Configuring the authentication mechanism
Firewall management
Hardware and resource management
In addition to the differences outlined above, you cannot switch database context from one user database
to another; the USE statement is not supported in SQL Database. Furthermore, SQL Database does not
support all the SQL Server database engine features. For example, SQL Database does not support
multi-database and multi-server capabilities.
Reference Links: For information on the Transact-SQL statements that are supported in
SQL Database, see the article Transact-SQL Support (Azure SQL Database) on the MSDN website.
Creating a Server
You can create a server either as part of the process
of creating a database, or on its own. In scenarios
where you are producing new databases for
applications, you typically create the server as part
of the process of creating the first database.
However, in some cases, you might want to create
the server without any user databases, and then add databases to it later; for example, by migrating them
from an on-premises SQL Server instance.
When you create a server, you must specify the following information:
A login name and password for the administrative account that you will use to manage the server.
The geographical region where the Azure data center hosting the server should be located.
Whether or not to allow other Azure services in the same subscription to connect to the server.
Enabling access from Azure creates a firewall rule that permits access from the IP address 0.0.0.0.
Each SQL Database server must have a globally unique name. The fully qualified name of the server is in
the form <server_name>.database.windows.net; for example, abcd1234.database.windows.net.
After you create a server, you can configure it to allow access from a specific range of IP addresses and to
enable the Premium database feature. This allows you to reserve storage up to a pre-defined quota for
more predictable database performance.
Creating a Database
When you create a database, you must specify the following information:
The edition of SQL Database you want to use (Web or Business), and the maximum size you want
the database to grow to. Note that pricing for Web and Business editions is based on the storage
capacity that you actually use, not on the maximum size you specify.
The server on which to create the database. You can select an existing server that you have previously
created in the same subscription, or create a new server.
After you have created a database, you can configure its settings to restrict access, based on IP address. If
you have enabled the Premium database service on the server, you can also convert the database to a
Premium database and reserve dedicated resources and storage space. Note that this option will increase
costs, as you must pay for the reserved space, and not just the space actually used by the database.
Lesson 2
In this lesson, you will learn about the security model in Azure SQL Database, and how to manage firewall
rules, logins, users, roles, and permissions.
Lesson Objectives
After completing this lesson, you will be able to:
To restrict access from specific devices or networks, SQL Database uses a firewall, which by default allows
no external connections. When you create a server, you can optionally grant access from other Azure
services, which are identified by the IP address 0.0.0.0. In the Azure management portal, you can enable
access from the current IP address of the client device being used to access the portal. You can also
specify one or more ranges of IP addresses that should be permitted to access the SQL Database server.
Logins
In a similar way to SQL Server, Azure SQL Database uses logins at the server level to authenticate user
requests. SQL Database does not support Windows integrated authentication, so all logins consist of a
login name and password. Logins are defined in the master database.
Azure SQL Database provides the following two database roles in the master database, to which you can
assign users, in order to grant them server-level permissions:
dbmanager. Members of this role can create databases on the server.
loginmanager. Members of this role can create and manage logins on the server.
Note that this architecture is different from that of SQL Server. A SQL Database server is a conceptual entity
that contains only databases, including the master database. To assign server-level management
privileges to a login, you must create a user for that login in the master database, and then add the user
(not the login) to the role.
At the database level, SQL Database provides an additional layer of firewall protection, as well as the same
security principals as SQL Server.
As well as restricting access to the SQL Database server based on client IP address, you can define
additional firewall rules for individual databases. This enables you to host multiple databases on the same
server while restricting access to each database, based on different ranges of IP address.
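As a sketch, a database-level firewall rule can be created with the sp_set_database_firewall_rule system stored procedure, run in the user database itself rather than in master; the rule name and IP range shown here are illustrative:

```sql
-- Run in the user database (not master) to allow an illustrative range
-- of client addresses access to this database only.
EXEC sp_set_database_firewall_rule @name = N'ReportingClients',
    @start_ip_address = '131.107.10.1',
    @end_ip_address = '131.107.10.50';
```

Because the rule is stored in the database, it travels with the database and is evaluated in addition to the server-level firewall rules.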
Users
Like SQL Server, SQL Database requires that logins be mapped to a user in each database to which they
require access. The system administrator login you create when first provisioning the server is
automatically mapped to the dbo user in all databases.
Database Roles
SQL Database provides the same database roles that you would find in a database in a SQL Server 2014
instance:
db_datareader. This role can read all data from all user tables in the database.
db_datawriter. This role can write data in all user tables in the database.
db_ddladmin. This role can create and manage objects in the database.
db_denydatareader. This role cannot read data from any table in the database.
db_denydatawriter. This role cannot write data in any table in the database.
db_owner. This role can perform all configuration and management tasks in the database.
At the schema and object level, SQL Database uses the same permissions-based authorization model as
SQL Server. You can use GRANT, REVOKE, and DENY statements to assign permissions on database
objects to users and roles in the database.
Allow the current client IP address. This option provides a quick way to add a range of allowed IP
addresses that start and end with the IP address of the computer or device from which you are
currently accessing the Azure management portal.
Specify one or more explicit ranges of allowed address. Each range consists of a unique name, a
starting IP address, and an ending IP address.
You can also manage server firewall rules programmatically through a representational state transfer
(REST) application programming interface (API) or by using the sp_set_firewall_rule and
sp_delete_firewall_rule system stored procedures in the master database. You can view server firewall
settings by querying the sys.firewall_rules system view in the master database.
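For example, the following Transact-SQL sketch, run in the master database, creates a server-level firewall rule for an illustrative IP range, queries the current rules, and then removes the rule; the rule name and addresses are examples only:

```sql
-- Create or update a server-level firewall rule (illustrative range).
EXEC sp_set_firewall_rule @name = N'ClientRange',
    @start_ip_address = '131.107.0.1',
    @end_ip_address = '131.107.0.255';

-- View the current server-level firewall rules.
SELECT name, start_ip_address, end_ip_address
FROM sys.firewall_rules;

-- Delete the rule when it is no longer required.
EXEC sp_delete_firewall_rule @name = N'ClientRange';
```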
Managing Logins
To create a login, connect to the master database and use the CREATE LOGIN Transact-SQL statement,
specifying a name and password for the login.
The following code sample shows how to create a login named MyLogin with the password Pa$$w0rd:
Creating a Login
CREATE LOGIN MyLogin
WITH PASSWORD = 'Pa$$w0rd';
After you have created a login, you can change the password by using the ALTER LOGIN statement and
delete the login by using the DROP LOGIN statement.
When connecting to Azure SQL Database, client applications must use SQL Server authentication and
specify the login name and password in the connection string used to establish the connection. When
specifying the login name, you should use the syntax <login_name>@<server_name>. For example, if
your SQL database server is named abcd1234, and your login is named MyLogin, your connection string
should specify the login as MyLogin@abcd1234.
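For example, an ADO.NET-style connection string for this login might look like the following; the server name, database name, and password are placeholders:

```text
Server=tcp:abcd1234.database.windows.net,1433;Database=MyDB;
User ID=MyLogin@abcd1234;Password=Pa$$w0rd;Encrypt=True;
```

Note that the connection is made directly to the target database, because the USE statement cannot be used to switch databases after connecting.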
Managing Users
Users are the mechanism by which logins are granted access to databases. To create a user, connect to the
database to which you want to grant access and use the CREATE USER Transact-SQL statement, specifying
the associated login.
The following code sample shows how to create a user named MyUser for the MyLogin login created
previously in this topic:
Creating a User
CREATE USER MyUser
FROM LOGIN MyLogin;
After you have created a user, you can delete it by using the DROP USER statement.
To add a user in the master database to a role with server-level permissions, use the sp_addrolemember
system stored procedure as shown in this example:
Adding a User in the Master Database to a Role with Server-Level Permissions
EXEC sp_addrolemember 'dbmanager', 'MyUser';
At the database level, administrative permissions are encapsulated in database roles defined in each
database, to which you can add users.
To add a user to a database role, use the sp_addrolemember system stored procedure in the appropriate
database as shown in this example:
Adding a User to a Database Role
EXEC sp_addrolemember 'db_datareader', 'MyUser';
Note: The ALTER SERVER ROLE and ALTER ROLE statements are not supported in Azure
SQL Database. You must use the sp_addrolemember system stored procedure to add users to
server roles (in the master database only) and database roles (in all databases).
Managing Permissions
You can use GRANT, REVOKE, and DENY statements to assign explicit permissions that enable users to
perform specific tasks or access particular database objects. In general, the simplest approach to designing
database security is to use role membership to define the base set of permissions that are required, and
only use explicit permissions to extend or override permissions inherited from role membership.
The following example shows how to deny SELECT permission on a specific table, even if the user has
been granted permission through membership of the db_datareader role:
Managing Permissions
DENY SELECT ON dbo.MyTable TO MyUser;
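Similarly, the following example shows how you might use GRANT to extend a user's permissions beyond those inherited from role membership:
Granting a Permission
GRANT INSERT ON dbo.MyTable TO MyUser;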
Lesson 3
Azure SQL Database provides a variety of ways for you to implement databases. In this lesson, you will
learn about the tools you can use to create and manage databases, and how to implement databases,
including migrating them from on-premises servers.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the tools for creating and managing databases in Azure SQL Database.
SQL Server Management Studio. You can use SQL Server Management Studio (SSMS) to connect to
an Azure SQL Database Server and manage it in a similar way to SQL Server instances. The ability to
manage SQL Server instances and SQL Database servers by using the same tool is useful in hybrid IT
environments. However, many of the graphical designers in SSMS are not compatible with SQL
Database, so you must perform most tasks by executing Transact-SQL statements.
SQLCMD. You can use the SQLCMD command-line tool to connect to Azure SQL Database servers
and execute Transact-SQL commands.
Visual Studio. Developers can use Visual Studio to create databases and deploy them directly to
Azure SQL Database.
The following SQL Server features are not supported in Azure SQL Database:
SQLCLR
Service Broker
System tables
Trace flags
Database mirroring
Additionally, some other features of SQL Server have limited support in Azure SQL Database.
Reference Links: For more information about supported features in Azure SQL Database,
see the article Transact-SQL Support (Azure SQL Database) in the Azure documentation, on the
MSDN website.
Client applications can connect to Azure SQL Database in a similar way to SQL Server, and use the same
tabular data stream (TDS) connection to submit queries and retrieve results. However, when developing
client applications in Azure SQL Database, consider the following guidelines:
Avoid designing applications that use multiple databases in a single session as you cannot change
database contexts in a connection to Azure SQL Database.
Handle connection time-out errors by implementing retry logic. Azure SQL Database automatically
disconnects sessions after a period of inactivity.
Export a data-tier application (DAC) from SQL Server and import it into Azure SQL Database.
Of these two techniques, using a DAC is the simplest way to ensure the correct migration of the database
and all its server-level dependencies. You can export and import the DAC by using the tools in SSMS and
the Azure SQL Database management portal, or you can use a wizard in SSMS to automate the entire
process.
The Export Data-Tier Application wizard in SSMS enables you to specify an Azure Storage account as the
destination for an exported package. The Import Data-Tier Application wizard enables you to specify an
Azure Storage account as the source for a package that you want to import. This makes it easy to migrate
a database from SQL Server to Azure SQL Database in two stages, using Azure Storage as an intermediary
storage location for the DAC package.
Alternatively, you can use the Deploy Database wizard to export a SQL Server database as a DAC package
and import it into an Azure SQL database server in a single operation.
Note: Whichever technique you use to deploy a SQL Server database to Azure SQL
Database, you will need to reconfigure security for the database after migration. Although DAC
packages include logins and maintain mappings to database users, the migration operation does
not include passwords, so you must reset these after the migration completes. Additionally, if the
source database uses Windows authentication, you may need to create new logins and users in
Azure SQL Database because SQL Database does not support Windows authentication.
Self-Service Restore
When you create a database in a Microsoft Azure
SQL Database server, Microsoft Azure automatically
enables self-service restore of databases to a
previous state. The available restore points depend
on the edition of Azure SQL Database.
Premium. Premium databases can be restored to a specific point in time within a 35-day period.
You can restore databases by using the Azure management portal, or by using Windows PowerShell. You
can restore an existing database to back out accidental or invalid changes to data. When you restore an
existing database, Azure creates a new database of the same service tier with a name that reflects the date
and time to which the database has been recovered. After you've verified that the recovered database
contains the required data, you can delete the original database and then use the ALTER DATABASE
statement to rename the restored database to match the original name.
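For example, statements similar to the following might be used to complete this process (the name of the recovered database shown here is illustrative):
Renaming a Restored Database
DROP DATABASE CloudDB;
ALTER DATABASE [CloudDB_2014-06-01T10-30Z] MODIFY NAME = CloudDB;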
When you delete an entire database, it remains listed in the portal until its retention period has expired.
You can restore deleted databases to the most recently available recovery point.
Managers at Adventure Works have asked you to investigate the possibility of migrating some of the
company's existing databases to the cloud. You will create a new database in Azure SQL Database, then
configure the required security settings.
Objectives
After completing this lab, you will have:
To test the provisioning process for Azure SQL Database, you will create a test database. To complete this
lab, you must have an Azure subscription.
Note: The Microsoft Azure portal is continually improved, and the user interface may have been updated
since this lab was written. Your instructor will make you aware of any differences between the steps
described in the lab and the current Azure portal user interface.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Create a SQL Database
3. Manage a SQL Database
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. If you have not already created a Microsoft Azure trial subscription, follow the instructions in D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
1. Start Internet Explorer, browse to http://azure.microsoft.com, click Portal, and sign in using the Microsoft account that is associated with your Azure subscription.
2. Create a new SQL Database named CloudDB. Use the default edition, size, and collation to create the database on a new SQL Database server with the login name Student and the password Pa$$w0rd. Specify a suitable region, and allow Azure services to access the server.
3. When the new database is online, configure the new server and add the current client IP address to the allowed IP addresses.
1. In the Azure management portal, on the SQL Databases page, on the Servers tab, select your server and click the Manage icon. This opens a new browser tab (you may need to allow pop-ups for this site).
2. Log on to the server using the administrative credentials you specified when provisioning it.
3. Design the schema for the CloudDB database, adding a table named Products with the following columns:
o ProductName (nvarchar(50))
o Price (money)
4. Add the following rows of data to the Products table:
ProductName    Price
Product 1      1.99
Product 2      2.99
5. Close the SQL Database management portal tab, but keep the original Azure management portal open.
Before storing data in Azure SQL Database, you want to configure security. You will begin this process by
creating a login and a database user. You will then check that the firewall settings are correct, and test the
security settings you have configured.
The main tasks for this exercise are as follows:
1. Create a Login in the Master Database
2. Create a User in the CloudDB Database
3. View Firewall Settings
4. Test SQL Database Security
1. Start SQL Server Management Studio and connect to your Azure SQL Database server using the Student login you created when you provisioned it.
2. Create a new login in the Azure SQL Database server named AWLogin with the password Pa$$w0rd.
1. Create a user for the AWLogin login named AWUser in the CloudDB database.
2. Add the AWUser user to the db_datareader role in the CloudDB database.
1. Query the sys.firewall_rules system view in the master database to determine what server-level firewall rules are in place.
2. Query the sys.database_firewall_rules system view in the CloudDB database to verify that there are no explicit database-level firewall settings.
1. In Internet Explorer, on the SQL Databases page, on the Servers tab, select your server and click the Manage icon. This opens a new browser tab.
2. Try to log on as AWLogin with the password Pa$$w0rd without specifying a database. The connection should fail because AWLogin does not have an associated user in the master database.
3. Log on to the CloudDB database as AWLogin with the password Pa$$w0rd. This time the connection succeeds, because there is a valid user for the AWLogin login in the CloudDB database.
4. Create a new query and verify that the user can select data from the Products table.
5. Try to insert data into the Products table, and verify that the user does not have permission to perform this action.
Review Question(s)
Question: Which databases in your organization might you consider migrating to Azure SQL
Database?
Module 8
SQL Server in Microsoft Azure Virtual Machines
Module Overview
Using virtual machines in Azure to host SQL Server instances and databases enables you to take
advantage of the benefits of the cloud whilst retaining greater control over the infrastructure than you
can with Azure SQL Database.
In this module, you will learn about the benefits and considerations for using virtual machines in Azure,
how to create and configure virtual machines in Azure, and how to work with SQL Server databases in
virtual machines in Azure.
Objectives
After completing this module, you will be able to:
Describe the benefits of Azure virtual machines and create an Azure virtual machine.
Lesson 1
To get the most out of an environment that uses virtual machines in Azure, it is important to understand
the benefits and fundamental concepts that underpin virtual machines in this environment.
In this lesson, you will learn about the main benefits of using virtual machines in Azure, how to use virtual
machine disks and images, and the topology of the Azure service.
Lesson Objectives
After completing this lesson, you will be able to:
Flexibility for hybrid cloud scenarios. You can use virtual networks to extend on-premises data
centers to the cloud, and easily migrate virtual machines between on-premises Hyper-V hosts and
Azure.
Server configuration consistency. Azure enables you to use pre-defined virtual machine template
images for common server configurations. Additionally, you can use the Windows sysprep tool and
SQL Server-prepared instances to create your own virtual machine images, making it easy to provision
consistent infrastructure services.
Microsoft offers virtual machines and virtual networks as an Infrastructure as a Service (IaaS) solution, and
you have full administrative control over the operating system, application, and network configuration.
This offers significantly more flexibility than a Platform as a Service (PaaS) solution such as Azure SQL
Database, but requires more administrative effort for configuration and management.
Disk Storage
Operating system and custom storage VHDs for virtual machines are stored as page blobs in a blob
container, in an Azure Storage account. You can create empty VHDs in the Azure management portal, and
upload your own VHDs to the Azure blob storage container by using the CSUpload command-line tool.
All VHDs in Azure must use the fixed size format, so if you have created a Hyper-V VHD by using the
dynamic size option, you must convert it to a fixed-size disk when you upload it. The CSUpload utility can
do this for you.
Caching
You can configure VHDs to use host caching to improve performance. The options available for
configuring caching on each disk include:
Read Only. Data is cached for reading, but all writes are performed directly to storage.
Read and Write. Data is cached for reading, and writes are cached in memory before being
committed to permanent storage.
A virtual machine image is a template based on an operating system VHD that has been generalized to
remove any instance-specific metadata such as a computer name. Azure provides a gallery of preconfigured images that you can use to create virtual machines with common server configurations, such
as Windows Server 2012 R2 with SQL Server 2014.
In addition to the platform images that Azure provides in the gallery, you can create your own by using
Windows Sysprep to generalize the operating system. You can also generalize SQL Server installations for
images by using the prepared instance feature of SQL Server setup. When you have generalized the VHD,
you can upload it to Azure and use the Azure management portal to capture it as an image.
Before creating your own image, you should examine the VM Depot in the Azure management portal.
This includes a wide range of community-supplied images that you can import into your Azure storage
account for use as a virtual machine template. Using these templates can save time because you do not
then need to create your own.
Affinity Group. An affinity group is a conceptual grouping of resources that need to work together.
By grouping resources into an affinity group, you can ensure that they are co-located for minimal
latency and data transfer. Affinity groups are optional.
Azure Virtual Network. A virtual network enables you to specify IP address ranges and DNS settings
for Azure virtual machines. With a virtual network, you can extend your existing data center network
to include virtual machines in Azure, making it easier to implement hybrid cloud solutions that use
Active Directory for single sign-on. Virtual networks are optional, and if you do not define one,
Azure automatically manages TCP/IP settings for your virtual machines. You will learn more about
Azure Virtual Networks in the next lesson.
Cloud Service. A cloud service is a conceptual container for Azure components that together
comprise a cloud computing service for users. A cloud service defines the Internet addressable URL
for the service in the form <service_name>.cloudapp.net. You must create virtual machines within a
cloud service.
Availability Set. You can define availability sets to create redundant servers for high availability.
While this approach is useful for protecting against failure of identical application servers such as web
servers, high availability for SQL Server database servers is usually best accomplished by using SQL
Server AlwaysOn technologies.
Storage Account. You require a storage account for Azure virtual machines. The storage account
enables you to host the VHD files for virtual machines and images in a blob container. Optionally, you
can link a storage account to a cloud service in order to simplify resource monitoring.
Demonstration Steps
Create an Azure Virtual Machine
1. Ensure that the MSL-TMG1, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Start Internet Explorer, browse to www.microsoft.com/azure, click Portal, and sign in using the Microsoft account that is associated with your Azure subscription.
3.
4. Use the From Gallery option and select an image that includes SQL Server 2014 Standard on Windows Server 2012 R2.
5. Specify the virtual machine name MIA-SQLVM, select the Standard tier and a size that includes 2 cores, and specify the user name Instructor and the password VMPa$$w0rd.
6. Select the option to create a new cloud service, changing the default cloud service DNS name to something unique if necessary; select a suitable region, and use an automatically-generated storage account. Do not create an availability set.
7. Review the default endpoint configuration, noting that remote desktop and Windows PowerShell access is enabled.
8. Complete the wizard, adding the VM Agent but no additional extensions to create the virtual machine.
9. Continue to the next lesson while you wait for the virtual machine status to change to running.
Lesson 2
Lesson Objectives
After completing this lesson, you will be able to:
Named instances use a dynamically-assigned port by default, and this can create complexity. If you are
hosting a named instance of SQL Server in an Azure virtual machine, you should generally use SQL Server
Configuration Manager to specify a static port for the instance, and then add an inbound rule in Windows
Firewall to allow access on that port.
The ports and firewall configuration performed in virtual machines enables internal communication within
the Azure cloud service. To enable access from external clients, you must create an endpoint that defines
the public port on which clients will connect to the cloud service, and the private port to which traffic is
forwarded within the cloud service.
Stand-alone Endpoints
Stand-alone endpoints are used to route communication from external clients to services in virtual
machines. A stand-alone endpoint defines a public port (the port to which external clients connect) and a
private port (the port in a virtual machine to which requests are forwarded). You can enable pre-defined
stand-alone endpoints for commonly-used services, or define your own.
The pre-defined endpoints include one named MSSQL, which is used to map requests to the public port
1433 to the private port 1433 in a virtual machine running SQL Server. If you have multiple virtual
machines running SQL Server, you must create an endpoint for each one. Only one of these endpoints can
use the public port 1433, so you must use an alternative public port number. For example, if your cloud
service contains two virtual machines running a default instance of SQL Server, you could enable the predefined MSSQL endpoint for one virtual machine with public and private ports 1433. You could then
create a second endpoint named MSSQL2 for the other virtual machine with a public port of 1434 and a
private port of 1433. Client applications could then connect to SQL Server in the first virtual machine by
using the connection string <cloud_service_name>.cloudapp.net and to the second one by using the
connection string <cloud_service_name>.cloudapp.net, 1434. (Note that, when the default port is used,
it can be omitted from the connection string.)
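For example, using the SQLCMD utility, you might connect to each of the two virtual machines as follows (the cloud service name and credentials are illustrative):
Connecting Through Endpoints
sqlcmd -S mycloudservice.cloudapp.net -U MyLogin -P Pa$$w0rd
sqlcmd -S mycloudservice.cloudapp.net,1434 -U MyLogin -P Pa$$w0rd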
Load balancing endpoints are used to map a single public port to the same private port on multiple
virtual machines. This enables you to load balance requests across multiple, identical application servers
for scalability and performance reasons. Generally, this is not a preferred approach for use with virtual
machines running SQL Server.
Authentication
By default, SQL Server supports only Windows
authentication. From the server itself, you can use a
local Windows account, but from remote clients,
Windows authentication requires an Active
Directory domain.
If you need to connect to SQL Server on a virtual machine from another virtual machine or service in the
same cloud service (for example, a web server), you can install Active Directory in a virtual machine in the
cloud service and configure it as a domain controller. You can then join all the virtual machines in the
cloud service to the domain, and use Windows authentication to connect directly from the client
application on one virtual machine to SQL Server on another.
If you need to support Windows authentication to SQL Server in an Azure virtual machine from on-premises clients outside the cloud service, you must create a virtual network that extends your enterprise
data center to include the Azure virtual machine running SQL Server, and add the SQL Server virtual
machine to your corporate Active Directory domain. Because the clients and SQL Server are on the same
network, clients can connect directly to SQL Server using the IP address of the virtual machine.
Alternatively, you can configure SQL Server to support SQL Server and Windows authentication, enabling
clients to connect to SQL Server through a cloud service endpoint by specifying a login name and
password. This configuration is much easier to set up, but requires that you maintain authentication for
each SQL Server independently from corporate networking credentials.
Authorization
Authorization for access to securables in SQL Server is based on role membership and permissions.
Configuring authorization for SQL Server in an Azure virtual machine is exactly the same as for an on-premises instance of SQL Server.
In this demonstration, you will see how to configure firewall settings, SQL Server security, and endpoints
for an Azure virtual machine.
Demonstration Steps
Configure Firewall Settings
1. Ensure you have completed the previous demonstration and that the virtual machine you created is now running.
2. In the Azure management portal, on the Virtual Machines page, select your virtual machine and click Connect.
3. In the message informing you that the portal is retrieving the .rdp file, click OK. In the prompt to open or save the .rdp file, click Open.
4. If a message box informs you that the publisher of the remote connection can't be identified, click Connect.
5. When prompted to enter your credentials, use the MIA-SQLVM\Instructor account with the password VMPa$$w0rd.
6. If a message box informs you that the identity of the remote computer can't be verified, click Yes.
7. If you are prompted to find PCs, devices, and content, click Yes.
8. Wait for Server Manager to start, and view the information on the Local Server page. Then click the status for Windows Firewall.
9.
10. In Windows Firewall with Advanced Security, select the Inbound Rules page and then in the Actions
pane, click New Rule.
11. In the New Inbound Rule Wizard window, select Port and click Next.
12. On the Protocols and Ports page, ensure that TCP and Specific local ports are selected, and enter
the port number 1433. Then click Next.
13. On the Action page, ensure that Allow the connection is selected, and click Next.
14. On the Profile page, ensure that all profiles are selected and click Next.
15. On the Name page, enter the name SQL Server Port and click Finish.
16. Note that the new firewall rule is created. Also note that a firewall rule for the SQL Server Cloud
Adapter Service already exists.
Configure SQL Server Authentication
1. In the remote desktop session to the Azure virtual machine, on the Start screen, type SQL Server Management and then start the SQL Server Management Studio app.
2. When prompted, connect to the default database engine on the virtual machine by using Windows authentication.
3.
4. In the Server Properties dialog box, on the Security page, select SQL Server and Windows Authentication mode. Then click OK.
5. When notified that you must restart the service, click OK.
6. In Object Explorer, right-click the server and click Restart. When prompted to confirm the restart, click Yes.
7. In Object Explorer, expand Security. Then right-click Logins and click New Login.
8. In the Login - New dialog box, on the General page, enter the login name Instructor and select SQL Server authentication.
9. Enter and confirm the password Pa$$w0rd, and clear the Enforce password expiration and User must change password at next login check boxes.
10. On the Server Roles page, select sysadmin. Then click OK.
11. Close all open windows and sign out of the remote desktop session.
Configure Endpoints
1. In the Azure management portal, on the Virtual Machines page, click the name of your virtual machine to view its details.
2.
3. Use the wizard to add a stand-alone endpoint. In the Name drop-down list, select the predefined MSSQL endpoint, which uses the TCP protocol on the public port 1433 and the private port 1433.
4. Repeat the previous two steps to add a new stand-alone endpoint named SQLCloudSvc with public and private TCP ports 11435.
5. When the endpoint has been created, start SQL Server Management Studio in the MIA-SQL local virtual machine.
6. When prompted, connect to the default instance of SQL Server in your Azure virtual machine by specifying the following settings:
o Login: Instructor
o Password: Pa$$w0rd
7. Verify that you can explore all the objects on the instance in Object Explorer.
You can use the private IP addressing scheme from your on-premises networks for your Azure virtual
machines.
You can configure your Azure virtual machines to use your on-premises Domain Name System (DNS)
servers.
You can enable virtual machines to communicate across cloud service boundaries.
You can create virtual private networks (VPNs) between Azure Virtual Networks and on-premises
networks or individual computers. You can use the Routing and Remote Access (RRAS) service in
Windows Server, or an approved on-premises dedicated hardware device, to implement VPNs.
Reference Links: For a list of approved VPN devices for use with Azure Virtual Network,
see the About VPN Devices for Virtual Network webpage on the MSDN website.
Azure Virtual Network uses an address pool that you supply when you create a virtual network, and it
allocates the addresses by using Dynamic Host Configuration Protocol (DHCP). The address pool that you
supply must be within one of the following private IP address ranges, which are defined in RFC 1918:
10.0.0.0 /8
172.16.0.0 /12
192.168.0.0 /16
Within an Azure Virtual Network, you can subnet the address spaces to create multiple IP subnets if
required. The DHCP server uses infinite leases, so the IP address that a virtual machine uses on a virtual
network is persistent.
Note: Although IP addresses for virtual machines on virtual networks are persistent, a
virtual machine that is in the Stopped (Deallocated) state might have a different IP address when
restarted. You place virtual machines into the Stopped (Deallocated) state when you do not want
to be charged for them. Additionally, if a virtual machine that you configured by using the Azure
management portal undergoes service healing, it may lose its IP address. To prevent this, you can
configure virtual machines on virtual networks by using PowerShell instead.
Unified infrastructure. Incorporate your virtual machines in Azure into your on-premises network so
that the virtual machines are an extension of your on-premises data center. By creating a site-to-site
VPN between the virtual network and the on-premises network, you can enable computers on the
two networks to communicate securely, as if they were on the same physical network. The site-to-site
VPN uses IP Security (IPsec) to ensure that communication between the networks is authenticated
and encrypted.
Distributed applications. You can build distributed applications in which different application tiers
reside in Azure and on-premises. For example, you could host a Web application on an Azure virtual
network and the SQL Server databases that support the application on your on-premises network.
The persistence of IP addresses and the ability to use on-premises DNS servers provides the stable
configuration needed to support distributed applications.
Access for remote users. You can enable remote users to access virtual machines on an Azure Virtual
Network, even when they are not connected to your corporate network, by using a point-to-site VPN
connection. Point-to-site VPN uses Secure Sockets Tunneling Protocol (SSTP).
DNS. When you configure an Azure Virtual Network, you must specify a DNS server address to enable
name resolution between virtual machines on the Azure Virtual Network and the computers on your
on-premises network.
Connectivity. You can connect to multiple Azure Virtual Networks from your on-premises network,
but each Azure Virtual Network can only connect to one external network. You cannot connect one
virtual network in Azure to another.
VPN Configuration. When you configure a site-to-site VPN, you can download a VPN Device
Configuration Script from the Azure management portal to configure the VPN device (including the
Windows RRAS service) that will enable connection to your on-premises network. The script contains
placeholders that you can replace with values to configure security policies and the incoming and
outgoing tunnels.
Planning. Ensure that you plan your Azure Virtual Network configuration carefully before
implementation, because you cannot change some configuration options, such as the IP address
range the virtual network will use, after you add services to a virtual network.
IP version 6 (IPv6). Azure Virtual Network does not support IPv6, so all IP addresses must be IP version
4 (IPv4) addresses.
Reference Links: For more information about configuring Azure Virtual Network, including
configuring a site-to-site VPN, see the Azure Virtual Network Configuration Tasks webpage on
the MSDN website.
Lesson 3
Many of the considerations and processes involved in creating databases in virtual machines in Azure are
the same or very similar to those for creating databases in SQL Server instances that organizations host on
their own servers. However, there are a few additional guidelines and configuration options to be aware
of in Azure virtual machine infrastructures. This lesson describes these considerations, as well as the
options for migrating on-premises databases to Azure.
Lesson Objectives
After completing this lesson, you will be able to:
Explain how to migrate databases from SQL Server to virtual machines in Azure, and the
considerations for doing so.
For best performance in databases with high I/O, add at least three data disks and use one of them for log
files. You should then create data files on the other two disks and combine them in a filegroup. Create
tables on this filegroup, and SQL Server will stripe the data across the disks, using a proportional fill
strategy. This approach has been demonstrated to improve I/O performance on Azure virtual machines.
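A minimal sketch of this configuration might look similar to the following, assuming the two data disks are mounted as drives F: and G:, and the log disk as H: (all database, filegroup, file, and table names and paths are illustrative):
Striping Data Across Multiple Disks
CREATE DATABASE SalesDB
ON PRIMARY (NAME = SalesDB_primary, FILENAME = 'F:\Data\SalesDB.mdf')
LOG ON (NAME = SalesDB_log, FILENAME = 'H:\Logs\SalesDB.ldf');

ALTER DATABASE SalesDB ADD FILEGROUP DataFG;
ALTER DATABASE SalesDB ADD FILE
(NAME = SalesDB_data1, FILENAME = 'F:\Data\SalesDB_data1.ndf'),
(NAME = SalesDB_data2, FILENAME = 'G:\Data\SalesDB_data2.ndf')
TO FILEGROUP DataFG;

-- Tables created on DataFG are striped across both data disks
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, OrderDate datetime) ON DataFG;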
Note: If you use filegroups to stripe data across multiple disks, turn off the geo-replication
feature for the storage account to ensure guaranteed consistency across multiple disk files.
When you add a disk to a virtual machine, caching is disabled for that disk by default. If the workload for
your database consists mostly of read operations, you might be able to improve performance by enabling
Read caching. To ensure transactional integrity, you should not enable Write caching.
Note: Test the disk configuration with caching disabled before enabling Read caching for
production workloads.
Consider the following guidelines for I/O performance optimization when implementing a database on an
Azure virtual machine:
Enabling page compression can improve I/O performance. However, it can also increase CPU
consumption on the virtual machine.
Compress data files when transferring them in and out of Azure storage.
Enable instant file initialization to reduce the time taken to create and expand data files. You can
enable instant file initialization by adding the SQL Server service account to the Perform volume
maintenance tasks policy on the virtual machine.
When you format a new disk that will be used for data files, use a 64-KB allocation unit size.
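As an illustration of the page compression guideline above, compression is enabled per table or index. The table name here is a hypothetical example; it is good practice to estimate the saving before rebuilding:

```sql
-- Sketch: estimate the space saving for page compression on a hypothetical
-- table, then enable it. CPU consumption on the VM may increase as a trade-off.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesOrder',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

ALTER TABLE dbo.SalesOrder REBUILD WITH (DATA_COMPRESSION = PAGE);
```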
Technical Requirements. Before migrating, consider your requirements for network connectivity,
Active Directory design, and other technical aspects of your infrastructure. Understanding the
technical requirements to support your cloud-based database server will make it easier to plan a
successful and smooth migration.
Compliance and Security. Are there any government, industry, or corporate policies for data storage
that might affect your ability to move the data or your choice of data center location? How will you
maintain appropriate security of the data when it is stored in an external data center?
Timeline. What are the key drivers for the migration deadline, and how will you schedule appropriate
time for a trial migration and testing before committing to a complete transition to Azure?
To help with the practical considerations for migration, you can use the Microsoft Assessment and
Planning (MAP) Toolkit, which includes tools to catalog and evaluate your current infrastructure and to
estimate resource requirements for servers to be migrated to Azure.
Reference Links: For more information, see the Microsoft Assessment and Planning (MAP)
Toolkit solution accelerator page in TechNet at http://technet.microsoft.com/en-us/solutionaccelerators/dd537566.aspx.
When you have decided to migrate an on-premises database to SQL Server in an Azure virtual machine,
you can use one of the following techniques to perform the migration:
Data-tier application. Export the database and its dependencies to a data-tier application .BACPAC
or .DACPAC file. Copy the file to the Azure virtual machine, and import the data-tier application into
SQL Server in the Azure virtual machine. When you use this technique, be aware that a .DACPAC file
contains only the schema for the database, while a .BACPAC file includes both the schema and the
data. If you use a .DACPAC file (perhaps because the database contains an unmanageably large
volume of data to transfer in a single file), you must use another technique to copy the data.
Backup and restore. Back up the on-premises database to Azure storage, and then restore the
backup file to SQL Server in the Azure virtual machine.
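SQL Server 2014 supports backing up directly to Azure blob storage by using the TO URL syntax. The following is a hedged sketch; the storage account name, container, access key, and file paths are illustrative assumptions:

```sql
-- Sketch: back up an on-premises database to Azure blob storage, then restore
-- it on the SQL Server instance in the Azure VM. The storage account name,
-- container, key, and paths are assumptions for illustration.
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',            -- storage account name
     SECRET = '<storage account access key>';  -- placeholder; supply your key

BACKUP DATABASE Marketing
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/Marketing.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;

-- On the Azure VM (with the same credential defined there):
RESTORE DATABASE Marketing
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/Marketing.bak'
WITH CREDENTIAL = 'AzureBackupCred',
     MOVE 'Marketing'     TO 'M:\Data\Marketing.mdf',
     MOVE 'Marketing_Log' TO 'L:\Logs\Marketing_Log.ldf';
```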
Detach and Attach. Detach the database from the on-premises SQL Server instance, copy the
database files to Azure, and attach the database to the SQL Server instance in the Azure virtual
machine.
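The detach and attach technique can be sketched as follows; the database name and file paths are assumptions for illustration:

```sql
-- Sketch: detach on the on-premises instance, copy the files to the Azure VM,
-- then attach on the instance in the VM. Names and paths are assumptions.
USE master;
EXEC sp_detach_db @dbname = 'Marketing';

-- ...copy Marketing.mdf and Marketing_Log.ldf to the Azure VM's disks...

CREATE DATABASE Marketing
ON (FILENAME = 'M:\Data\Marketing.mdf'),
   (FILENAME = 'L:\Logs\Marketing_Log.ldf')
FOR ATTACH;
```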
Generate Transact-SQL scripts. You can generate Transact-SQL scripts from the source database,
including insertion of existing data values, and then run the scripts in the SQL Server instance in
Azure.
SQL Server Import and Export Wizard. Use the SQL Server Import and Export wizard to transfer
data from the on-premises database to a database in an Azure virtual machine. This technique can be
used to transfer data after using a .DACPAC file or Transact-SQL script to recreate the database
schema on the Azure virtual machine.
SQL Server Integration Services. You can use the Transfer Database task in a SQL Server Integration
Services package to transfer an entire database. Alternatively, you can create data flow tasks to copy
data from source tables to tables in a database schema that was created using a .DACPAC file or
Transact-SQL script.
Copy Database Wizard. Use the Copy Database Wizard in SQL Server Management Studio (SSMS) to
copy an on-premises database to SQL Server in an Azure virtual machine. To use this technique, you
must ensure that the SQL Server Agent is running on both servers.
Deploy Database to an Azure VM Wizard. All the database migration techniques described in this
topic so far require you to create and configure an Azure virtual machine before migrating the
database. SSMS for SQL Server 2014 also includes the Deploy a SQL Server Database to an Azure VM
Wizard, which you can use to provision a new virtual machine and migrate a database in a single
procedure.
Note: The Deploy Database to an Azure VM Wizard is discussed in more detail in the next
topic.
2. Source Settings. In this step, you must connect to the SQL Server instance that contains the source database and specify a local file system location where the wizard can create a backup of the database.
3. Azure Sign-In. In this step, you must specify a management certificate for the Azure subscription where you want to deploy the database. If you do not have a management certificate, you can sign in to Azure and generate a new one, which will be downloaded and stored in the local certificate store for use in subsequent deployments from this workstation.
4. Deployment Settings. In this step, you can specify the name of a new or existing cloud service and virtual machine, and select or create an Azure storage account. If you are creating a new cloud service and virtual machine, you must specify its settings, including a platform image, Administrator password, size, and location. If you are using an existing virtual machine, you must specify the Administrator credentials and the SQL Server Cloud Adapter port. You can then specify the SQL Server instance and name for the destination database.
5. Summary. This step summarizes the settings you have chosen for the deployment.
6. Deployment Progress. This step displays the progress of the deployment process.
As part of the ongoing consolidation effort at Adventure Works, you have identified that it may be more
efficient to use virtual machines in Azure to host some of your SQL Server databases. One of the
databases you have decided to host in Azure is Marketing. Before you create this database, you need to
create a virtual machine in Azure, and configure the disks, storage volumes, and security and connectivity
settings for the new virtual machine.
Objectives
After completing this lab, you will have created and configured an Azure virtual machine to host a SQL Server database.
You will begin the process of migrating the Marketing database by creating the virtual machine
that will host it in Azure. You will then configure the required disks and storage volumes to support the
SQL Server instance.
Note: The Microsoft Azure portal is continually improved, and the user interface may have been updated
since this lab was written. Your instructor will make you aware of any differences between the steps
described in the lab and the current Azure portal user interface.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Create a Virtual Machine
3. Explore the Virtual Machine Infrastructure
4. Add Disks to the Virtual Machine
5. Create Storage Volumes
1. Ensure that the MSL-TMG1, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. If you have not already created a Microsoft Azure trial subscription, follow the instructions in D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
1. Start Internet Explorer and browse to www.microsoft.com/azure, click Portal, and sign in using the Microsoft account that is associated with your Azure subscription.
2. On the Storage page, if you have an existing storage account, note which region it is in.
3. Create a new virtual machine from an image in the gallery, using an image that includes SQL Server 2014 Standard on Windows Server 2012 R2. Use the following guidelines to create the virtual machine:
o Use the user name Student and the password VMPa$$w0rd for the initial administrative account.
o If you have an existing storage account, create the virtual machine in the same region. Otherwise, you can create the virtual machine in any geographical region.
o If you have an existing storage account, use it for the virtual machine. Otherwise, use an automatically generated storage account.
o Use the default endpoint configuration settings, which enable access for remote desktop and PowerShell.
1. In the Azure management portal, view the cloud service and storage account that have been created for the virtual machine.
2. View the contents of the storage account, noting that a container named vhds has been created in which the virtual hard disk file for the virtual machine is stored.
3. In the Azure management portal, attach the following empty disks to your virtual machine:
o A 10 GB disk named Data1 with the cache preference set to Read Only.
o A 10 GB disk named Data2 with the cache preference set to Read Only.
1. Connect to your virtual machine to establish a remote desktop connection using the MIA-SQLVM\Student account with the password VMPa$$w0rd.
2. Use the Disk Management administrative tool to initialize the disks you added in the previous task and create the following simple volumes, using all available space on each disk:

Disk | Drive letter | Allocation unit size | Label
Disk 2 | M | 64K | Data 1
Disk 3 | N | 64K | Data 2
Disk 4 | L | 64K | Logs
Having provisioned the virtual machine, you will now configure the required connectivity and security
options, and then test them to ensure that you can connect to the SQL Server instance.
The main tasks for this exercise are as follows:
1. Configure the Windows Firewall
2. Configure Disk Volume Permissions
3. Configure SQL Server Authentication
4. Configure Endpoints
5. Test Connectivity
In the remote desktop session to the Azure virtual machine, configure Windows Firewall advanced
settings to add the following inbound port rule:
In the remote desktop session to the Azure virtual machine, grant the MSSQLSERVER (NT
Service\MSSQLSERVER) user account full control permission on the M, N, and L volumes you
created in the previous task.
1. In the remote desktop session to the Azure virtual machine, use SQL Server Management Studio to configure SQL Server to use SQL Server and Windows Authentication mode and restart the SQL Server service.
2. Create a new SQL Server login named Student with the password Pa$$w0rd. Do not enforce password expiration or require a password change at next login.
In the Azure management portal, edit your virtual machine and create the following stand-alone
endpoints:

Name | Protocol | Public Port | Private Port
MSSQL | TCP | 1433 | 1433
SQLCloudSvc | TCP | 11435 | 11435

Note: The MSSQL endpoint is already defined. You can select it in the Name drop-down
list.
In the MIA-SQL local virtual machine, start SQL Server Management Studio and connect to the
default instance of SQL Server in your Azure virtual machine by specifying the following settings:
o Login: Student
o Password: Pa$$w0rd
In this exercise, you will create a database called Marketing in a SQL Server instance in a Microsoft Azure
virtual machine.
The main tasks for this exercise are as follows:
1. Create a Database
1. Use SQL Server Management Studio to create a new database named Marketing in the SQL Server instance on your Azure virtual machine.
2. Configure this database to use the following files and filegroups, including a new filegroup named MarketingData, which should be the default filegroup:
Name | Type | Filegroup | Path
Marketing | Rows | PRIMARY | Default path
Marketing_Log | Log | N/A | L:\
MarketingData1 | Rows | MarketingData | M:\
MarketingData2 | Rows | MarketingData | N:\
The benefits of Azure virtual machines and how to create an Azure virtual machine.
Review Question(s)
Question: Do you use virtual machines in Azure to host SQL Server databases, or for any
other purpose? Are there any additional benefits and considerations for using virtual
machines in Azure that we haven't considered in this module?
Module 9
Introduction to High Availability in SQL Server 2014
Module Overview
Objectives
After completing this module, you will be able to:
Describe the core concepts and options for implementing high availability in SQL Server 2014.
Describe how to implement high availability for individual databases by using log shipping.
Lesson 1
Lesson Objectives
After completing this lesson, you will be able to:
Explain the options for implementing high availability in SQL Server 2014.
Describe considerations for planning a SQL Server 2014 high availability solution.
Planning high availability typically involves assessing the tradeoff between the cost of the potential high
availability solutions and the availability requirements for the system. This is usually stated in terms of the
percentage of time the service needs to remain available. For example, it is common to see high
availability solutions that promise to deliver the five nines of uptime, meaning 99.999 percent availability
during a one-year period. This equates to just over five minutes of downtime per year.
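The downtime that corresponds to an availability target is simple arithmetic, which you can verify with a quick query (the figures match the "five nines" example above and the 99 percent SLA example discussed later in this module):

```sql
-- 99.999% availability over one year, expressed in minutes of downtime:
SELECT (1.0 - 0.99999) * 365 * 24 * 60 AS downtime_minutes_per_year;  -- 5.256

-- 99% availability over one year, expressed in days of downtime:
SELECT (1.0 - 0.99) * 365 AS downtime_days_per_year;                  -- 3.65
```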
Ideally, a high availability solution should cover all elements of a SQL Server 2014 database infrastructure,
including:
Hardware. Protecting against hardware failure involves duplicating components, such as central
processing units (CPUs) and network cards, and using redundant array of independent disks (RAID)
storage devices. In the event of a component such as a CPU failing, the other CPUs continue to
operate, helping ensure that the service remains available, albeit at a reduced capacity. You can also
duplicate entire servers and networks so that, in the event of a failure, all the workload of a server can
be picked up by a standby server. Generally, this type of redundancy is achieved by employing server
clusters.
The Windows Server operating system that hosts SQL Server. The operating system can be
affected by corruption to files due to a deliberate attack, user error, or the application of untested
updates. You can use clustering to protect against operating system corruption.
SQL Server instances. SQL Server instances are potentially prone to the same kind of corruption as
the Windows Server operating system. You can protect against this kind of failure by using clustering,
which you will learn about in the next topic.
Individual databases. SQL Server includes several features that enable you to protect individual
databases or groups of databases. You will learn about these in the next topic.
Note: You can use many of the technologies that this module describes to implement both
high availability and disaster recovery. A high availability solution ensures the continuation of
service during an outage caused by, for example, the failure of a server. A disaster recovery
solution enables you to recover from more serious incidents that force services offline for a
longer period, such as fires, extended power outages, or earthquakes.
Database mirroring is conceptually similar to log shipping, but differs in several ways:
It uses a third server, named the witness server, to enable automatic failover. If you do not require
automatic failover, you can omit the witness server from the configuration and use only manual
failover.
Transactions can be committed synchronously on the principal server and the mirror server, enabling
you to maintain identical copies of a database on the two servers. You can also configure
asynchronous commits, which enables you to gain a performance advantage on the principal server
at the expense of data consistency.
Data on the mirror server is not available for read access. However, you can create a database
snapshot on the mirror server to enable read access to the database.
A principal server can have only one mirror server. In log shipping, a primary server can have multiple
secondary servers.
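The read-access workaround mentioned above, creating a database snapshot on the mirror server, can be sketched as follows. The database name, logical file name, and snapshot path are assumptions for illustration; the logical name must match that of the mirrored database's data file:

```sql
-- Sketch: on the mirror server, create a database snapshot to allow read-only
-- access to the mirrored data. Names and path are illustrative assumptions.
CREATE DATABASE Marketing_Snapshot
ON (NAME = Marketing,  -- logical name of the source data file
    FILENAME = 'M:\Snapshots\Marketing_Snapshot.ss')
AS SNAPSHOT OF Marketing;
```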
An AlwaysOn Failover Cluster Instance (FCI) is a clustered instance of SQL Server 2014 installed on a WSFC
cluster, providing high availability at the level of the server instance. A WSFC consists of multiple server
nodes, and the single FCI is installed across them. All the nodes have access to shared storage, such as a
storage area network (SAN). An AlwaysOn FCI provides automatic failover in response to a range of
events, including hardware failure, operating system failures, and service failures. From the clients'
perspective, the cluster appears just the same as a stand-alone instance of SQL Server, and they can access
it in the same way. When failover occurs, there is no need to reconfigure clients because they are
automatically redirected to the new active node.
Note: You will learn more about AlwaysOn Failover Cluster Instances in Module 10 of this
course.
AlwaysOn Availability Groups
The AlwaysOn Availability Groups feature takes advantage of WSFC technology to provide database-level
high availability. Although availability groups are conceptually similar to database mirroring, they offer a
more robust way of protecting databases, and also provide more advanced functionality.
To create an AlwaysOn Availability Group, you need to first create a WSFC and add servers that host SQL
Server 2014 instances as members of the cluster. The WSFC supports the availability group by monitoring
the health of the replicas and managing failover. Note that, even though AlwaysOn Availability Groups
use WSFC, you do not need to install SQL Server as a clustered instance; each member of the availability
group is installed as a stand-alone instance and has its own dedicated storage. There is no requirement
for shared storage with AlwaysOn Availability Groups.
Note: You will learn more about AlwaysOn Availability Groups in Module 10 of this course.
Note: You can also use SQL Server replication as a limited high availability solution. By
replicating data to a second server, you can ensure that it remains available should the first server
fail. However, because replication does not provide a failover mechanism, you should only
consider it as a high availability solution in very limited circumstances.
Typically, the amount of downtime a service is allowed in a given period is stated as a target or set of
targets in a service level agreement (SLA). For example, an SLA may specify that a particular database or
service must be available 99 percent of the time. In a one-year period, this equates to just over three-and-a-half days of downtime. The agreed amount of allowed downtime usually includes the time required for
maintenance activities such as upgrading, servicing, and patching, so you should include this
in your planning. For services and databases that have very high uptime requirements, you should
consider using high availability solutions, such as AlwaysOn FCIs, that enable automatic failover and client
redirection; this will help to minimize downtime.
Another factor that will influence your decision is cost and server utilization. For example, clustering can
offer protection against failure at several levels, including service, operating system, and hardware, but it
can be relatively expensive to implement. For clustering, it is recommended that you use servers that are
near identical, which can be costly if the one you want to protect is a high-end model. Additionally, it is
good practice to include a node in the cluster that has no active workload; this node can take over the
workload of other cluster nodes that fail over to it, without a drop in performance. To achieve this, you will
have to purchase and maintain a powerful server in the cluster that you do not utilize on a regular basis.
Other solutions, such as an AlwaysOn Availability Group, do enable read access to secondary databases,
which makes more efficient use of resources and can help to keep costs down.
Unit of failover
A key consideration is the precise nature of what you need to protect. Database mirroring and log
shipping provide redundancy at the database level, AlwaysOn Availability Groups provide redundancy for
a set of user databases, and an AlwaysOn FCI provides redundancy for the entire instance or service.
The different editions of SQL Server 2014 offer varying degrees of support for high availability features, so
the editions you use to host your databases will have a bearing on the high availability solutions that you
can consider.
Feature | Log Shipping | Database Mirroring | AlwaysOn Availability Groups | AlwaysOn Failover Cluster Instances
Unit of failover | Database | Database | Set of databases | Instance or service
Relative cost | Medium | Medium | Medium | High
Automatic failover | No | Yes (with a witness server) | Yes | Yes
Automatic client redirection on failover | No | No | Yes | Yes
Maximum number of replicas / nodes | No limit | One mirror server | Eight secondary replicas | Determined by host operating system
Readable replicas | Yes | Via database snapshot only | Yes | No
SQL Server edition | All (except Express Edition) | Standard (synchronous only) and Enterprise | SQL Server 2014 Enterprise | Standard (two nodes) and Enterprise
You can combine some high availability solutions for extra protection. For example, you can combine an
AlwaysOn FCI with an AlwaysOn Availability Group to create a high availability solution for both your
databases and SQL Server 2014 instances. You cannot use database mirroring and AlwaysOn Availability
Groups on the same servers.
Host clustering
The first step to achieving high availability in a
private cloud is to use Windows Failover Clustering
to cluster Hyper-V hosts. Host clustering is achieved
by adding two or more physical servers on which
Hyper-V is installed to a Windows Failover Cluster.
Should the primary host in the cluster fail, a
secondary node will take over, ensuring that the private cloud virtualization fabric remains available.
Windows Server 2012 R2 supports clusters with up to 64 Hyper-V nodes, hosting up to 8,000 virtual
machines.
Windows Server 2012 and Windows Server 2012 R2 include the Virtual Machine cluster role, which you
can use to add Hyper-V virtual machines to a cluster. To provide high availability for Hyper-V virtual
machines in this way, you need to perform the following high-level steps:
Create a WSFC that includes the required number of physical server nodes.
Create the required virtual machines, using the cluster's shared storage for the virtual hard disk files.
Use the Failover Cluster Manager tool to create the Virtual Machine roles to specify the virtual
machines that you want to make highly available.
Live Migration
Live Migration is a feature of Windows Server Hyper-V that enables you to move a virtual machine from
one Hyper-V host to another without taking it offline. This capability is useful in scenarios where a host
needs to be taken offline for maintenance, but you require continuous operation of the virtual machines
that are running on it.
In versions of Windows Server prior to Windows Server 2008 R2, when you implement virtual machines on
a cluster, you must use a separate disk or logical unit number (LUN) on the shared storage for each virtual
machine. This is because the cluster allows one node at a time to access any given LUN on the cluster
storage. In this scenario, when a virtual machine fails over, the new host node must take ownership of the
LUN so it can run the virtual machine. This configuration can result in a very large number of LUNs and
can be relatively complex to set up and administer.
In Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2, you can use Cluster
Shared Volumes (CSVs) instead. A CSV is a disk or LUN that you can use to host the virtual hard disk files
of multiple virtual machines, so you do not need to configure and maintain a separate disk for each one.
You can also create CSVs based on Windows Server Storage Spaces.
Using CSVs provides the following benefits:
Reduced complexity because there are fewer disks or LUNs to create and manage.
More efficient use of disk space. You can maintain a single pool of additional space on a CSV for all
virtual machines to use instead of having to maintain extra space on every disk that hosts a virtual
machine.
Ownership of the disk does not need to change on failover of a virtual machine. Each virtual machine
can fail over independently of other virtual machines that use the same CSV.
Live Migration is faster because there is no need to transfer the ownership of the disk that hosts the
virtual machine's virtual hard disk file.
Guest clusters
With host clustering configured, the virtual machines reside on a physical server cluster. When a node fails
and failover occurs, the entire virtual machine restarts on a new node. The conditions that can cause
failover include hardware failure or problems with the operating system on the virtual machine. However,
if an application that is running on a clustered virtual machine fails, this will not trigger failover. This is
because the host cluster monitors only the health of the virtual machines and not the applications that
run on them. To ensure high availability for applications that run on virtual machines, such as SQL Server
2014, you can also create a guest cluster. This is a WSFC in which all the nodes are virtual machines. You
can then use SQL Server AlwaysOn technologies, such as FCIs and Availability Groups, within the guest
cluster to provide redundancy at the SQL Server virtual machine level.
You can configure guest clusters in the following ways:
All the nodes in the guest cluster run on a single physical server. This provides high availability for the
applications, but does not protect the virtual machines from the failure of the host server.
The nodes in the guest cluster run on multiple physical servers. This provides high availability for the
applications, and enables continued access to services in the event of the failure of a host server. If a
host server fails, the other nodes in the cluster detect the failure and bring the cluster roles that were
affected by the failure online.
The guest cluster runs on a physical host cluster. This configuration also provides high availability for
the applications and enables continued access to services if a host server fails. However, it can also
enhance the availability of services after failover. For example, imagine that you configure a two-node
guest cluster in which each node runs on a separate physical server. The physical servers are not
cluster nodes. The guest cluster includes a SQL Server cluster role. If the server hosting the guest
cluster node that owns the SQL role fails, the role will fail over to the other node in the guest
cluster, which is hosted on a separate physical server. The SQL Server instance will remain available.
However, the SQL Server role is now vulnerable because, if the second physical server fails, there is
nowhere else to fail over. Imagine that you configured a three-node physical cluster instead, and then
configured the two-node guest cluster to run on the host cluster. In this scenario, just as with the last
one, if the guest cluster node that owns the SQL role fails, the role will fail over to the other node in
the guest cluster, so the SQL Server instance will remain available. However, the failure of the first
physical cluster node will additionally have triggered failover within the physical cluster, so the failed
guest cluster node virtual machine will come online on the third physical cluster node. Consequently,
the SQL Server role remains protected against the failure of the second physical node because it can
fail over to the third physical node if required.
iSCSI Target Server. The iSCSI Target Server feature enables you to present storage on a Windows
Server as iSCSI block storage and to connect to it over a network by using the iSCSI initiator feature
on the servers that will consume the storage. In the context of a guest cluster, you would configure a
file server as an iSCSI Target and on the cluster nodes, use the iSCSI initiator to connect to the iSCSI
Target. The iSCSI Target does not require any specialist hardware.
Shared virtual hard disks. In Windows Server 2012 R2, you can share a virtual hard disk that you create
by using the .vhdx format, so that the nodes in a guest cluster can connect to it and use it as the
cluster shared storage device. Shared virtual hard disks require that the Hyper-V host is running
Windows Server 2012 R2; the guest servers can run Windows Server 2012 or Windows Server 2012 R2.
Note that you can only create shared disks on a Hyper-V cluster, not on stand-alone hosts.
Virtual Fibre Channel. The Virtual Fibre Channel feature enables the nodes in a guest cluster to connect
to a physical fibre channel SAN.
Whichever storage method you use for a guest cluster, you can use the storage to create a CSV, just as
you can with a cluster that uses physical nodes.
Reference Links: For more information about guest clustering, including the storage
options, see Using Guest Clustering for High Availability on the TechNet website.
Built-in replication
Azure's built-in replication mechanisms provide a high degree of resiliency for an organization's virtual
machines. However, Azure is not aware of the health of the services, including SQL Server, that run on the
virtual machines. Also, failover between separate data centers is not automatic and there is no SLA time
limit for this type of failover. For these reasons, you should not rely on Azure alone to satisfy the high
availability requirements of an organization's SLAs. To ensure SQL Server instances and databases are
protected and that your organization can meet the requirements contained in SLAs, you should
implement high availability solutions. The type of high availability solution that you can use depends on
whether the environment is Azure-only or a hybrid.
Azure-only environments
When all your SQL Server databases are in Azure, you can use AlwaysOn Availability Groups for high
availability, with the primary and secondary replicas all located in the same data center. AlwaysOn
Availability Groups leverage the WSFC service, so the servers that host an Availability Group must all be
members of the same domain. Domains cannot span Azure data centers, so you are unable to use
Availability Groups to promote cross-data center resilience. In this scenario, you can use Database
Mirroring with server certificates instead, and place the principal and mirror servers in separate data
centers. However, Database Mirroring is a deprecated feature of SQL Server and may not be supported in
future versions.
Hybrid IT environments
You can use AlwaysOn Availability Groups to deliver high availability in hybrid environments by placing
some replicas in Azure and some on-premises. This scenario was described in Module 3 of this course. You
can use a virtual network to connect the two locations, and the servers involved must all be members of
the same domain. When setting up an AlwaysOn Availability Group that includes Azure and on-premises
servers, you should connect the two sites by using a virtual private network (VPN). It is also advisable to
place a domain controller at the secondary site.
As an alternative, you can use Database Mirroring with server certificates. As previously described, this is
useful when all the servers involved are not members of the same domain. Mirroring does not require a
VPN to connect the two sites, making it easier to configure than an AlwaysOn Availability Group.
However, Database Mirroring is a deprecated feature of SQL Server and may not be supported in future
versions. A third option is to use log shipping, with the primary and secondary servers in different
locations.
Reference Links: For more information about planning and configuring high availability in
Azure, see the MSDN article High Availability and Disaster Recovery for SQL Server in Azure
Virtual Machines.
Lesson 2
Log Shipping
Log shipping is a mature, tried and tested technology that you can use to maintain high availability for
individual databases in SQL Server. This lesson describes how log shipping works, including the role of
SQL Server Agent, and outlines some important factors to consider before implementation.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the tasks that you should perform before implementing log shipping.
Overview
When you configure log shipping, you use a backup
of an existing database from the primary server to
implement a copy of that database on a secondary
server. SQL Server Agent jobs then update the
secondary database by using a three-phase process:
1.
2.
3.
Apply the changes from the transaction log backup to the database on the secondary server.
The initial backup you use to set up log shipping is a full backup. Subsequent backups are all transaction
log backups. The log backups are initiated by SQL Server Agent jobs that you can schedule to run on the
primary server as frequently as required. A second SQL Server Agent job copies the log backups to the
secondary server, and a third job restores them. You can also configure jobs to raise alerts, for example if a
backup job fails to run. The restore operation on the secondary server does not use the RECOVERY option,
so the secondary database remains in a recovering state, ready for the restoration of subsequent
transaction log backups.
Although it is not the newest high availability technology for SQL Server, log shipping can still be useful in
many situations:
Unlike other high availability technologies, log shipping is included in the Enterprise, Standard,
Business Intelligence, and Web editions of SQL Server 2014. This makes it possible to deliver high
availability for all the licensed editions of SQL Server 2014 that your organization might be using.
Note: Log shipping is not available in any of the Express editions of SQL Server 2014.
Log shipping does not require Active Directory to be available. This allows you to use log shipping
in scenarios such as enabling cross-data center high availability in Azure.
Log shipping does not require you to create a WSFC, making it very easy to configure.
Log shipping does not have a fixed maximum number of secondary servers, so you can use as many
as your scenario requires.
You can use a log shipping secondary server to service read-only requests, which can be useful if, for
example, you want to pass on the processing workload caused by reporting applications that access the
database. However, if you are considering using log shipping secondary servers in this way, you should
bear in mind that secondary servers cannot service user connections when they are actually performing
the periodic restore of the transaction log from the primary.
Failover
The process of failing over to a log shipping secondary is not automatic, so you must manually initiate
failover when required. To manually fail over to a secondary server, you must first restore the database to
the required point, including the tail-log backup from the primary server if possible, and then configure
the applications to connect to the new server.
LSBackup_databasename. This job exists on the primary server, and backs up the primary database.
When you configure it, you must supply a backup folder location in the form of a network path. The
backup folder should enable read and write access for the SQL Server service account on the primary
server. You must also configure a schedule for the job that defines how frequently it runs. You can
also specify a retention period for backups, configure an alert that will contact an operator if a
backup fails to occur for a specified number of hours, and specify whether the backups will use
backup compression.
Note that the names of the jobs given above are the default names. If required, you can supply different
names when you configure log shipping.
Running the jobs more frequently results in reduced latency, as the secondary server is updated more
often. Typically, it also results in smaller backup files because fewer changes have accumulated, which
can help to reduce network latency and speed backup and restore operations. However, frequent
running of jobs will cause more interruptions for users if the secondary server is in standby mode,
because SQL Server does not allow user access to the database when it is being restored.
Running the jobs less frequently results in greater latency and, generally, in larger backup files.
However, it also results in fewer interruptions in user access to the secondary database in standby
mode.
Alerts
Log shipping also includes alert jobs that contact an operator if a job fails to run. An alert job on the
primary server called LSAlert_primaryservername is responsible for monitoring the
LSBackup_databasename job. An alert job on the secondary server called LSAlert_secondaryservername is responsible for
monitoring the LSCopy_primaryinstancename_databasename and
LSRestore_primaryinstancename_databasename jobs. If you choose to configure an optional monitor
server, an alert job there monitors all three log shipping jobs.
When preparing for log shipping, consider the following points to ensure that SQL Server Agent can run
the jobs:
Ensure that SQL Server Agent is running on both the primary and secondary servers.
The accounts that you use for SQL Server Agent should ideally be domain accounts.
Ensure that the accounts used by SQL Server Agents have the required permissions on the log
shipping folders:
The SQL Server Agent account on the secondary server needs permission to access the folder that
stores the backup files. This is because the job in the secondary server that copies the log
backups must access this folder.
The SQL Server Agent account on the secondary server needs permission to access the folder that
contains copies of the backup files. This is because the job in the secondary server that copies the
log backups places the files into this folder.
Reference Links: For more information about service accounts for SQL Server Agent in log
shipping scenarios, and about security on log shipping in general, see How to Configure Security
for SQL Server Log Shipping on the Microsoft Support website.
In No Recovery mode, after the SQL Server Agent job applies a transaction log backup to the database on
the secondary instance, the database remains in a recovering state, and is inaccessible to users. In Standby
mode, after the SQL Server Agent job applies a transaction log backup to the database on the secondary
instance, the database remains in a recovering state, but SQL Server performs the additional task of rolling
back any uncommitted transactions and saving them separately. This enables users to access the database
on a read-only basis, and maintains transactional consistency. However, in Standby mode, there are two
additional considerations:
When the SQL Server Agent applies the next transaction log backup to the secondary database, SQL
Server must first reapply the transactions that it rolled back after the previous backup was restored.
This additional overhead can extend the time it takes to restore the database, affecting user access
and, possibly, performance.
SQL Server cannot restore the database while there are active user connections. By default, SQL Server
will wait for users to disconnect before restoring the transaction log, which can delay the process. To
force users to disconnect when a restore is about to begin, you can use the Disconnect users in the
database when restoring backups option when configuring the secondary database settings.
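The two modes correspond to different options of the RESTORE LOG statement. The following is a
minimal sketch; the file names and the undo file path are illustrative only:

```sql
-- No Recovery mode: the database stays in a recovering state and is
-- inaccessible to users between restores.
RESTORE LOG DemoDB
FROM DISK = N'\\MIA-SQL\SecondaryRestoreFolder\DemoDB_backup.trn'
WITH NORECOVERY;

-- Standby mode: uncommitted transactions are rolled back and saved to an
-- undo file, so users can query the database in read-only mode between
-- restores. The undo file path shown here is illustrative.
RESTORE LOG DemoDB
FROM DISK = N'\\MIA-SQL\SecondaryRestoreFolder\DemoDB_backup.trn'
WITH STANDBY = N'D:\Demofiles\Mod09\DemoDB_undo.dat';
```

When the next log backup is restored, SQL Server first reapplies the transactions saved in the undo file,
which is the source of the additional overhead described above.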
Degree of latency
The amount of latency between the primary and secondary servers depends on the frequency of the
backup and restore of the transaction log. To minimize latency, you should configure the SQL Server
Agent log shipping jobs to perform frequent backups and restores. However, because performing
backups and restores has an impact on performance and user access to databases, you should balance the
need for reduced latency against the effect on performance, and configure the SQL Server Agent jobs
accordingly.
The size of the transaction log file will affect the length of time it takes to run the log shipping SQL Server
Agent jobs that back up, copy, and restore the logs. The larger the log file, the longer these jobs will take
to run, and this can impact both performance and user access to the database. Consequently, when
assessing the suitability of log shipping for a given database, you should take account of processes in the
database, such as index rebuilds and periodic data loading, that can increase the size of the log.
As part of the configuration of log shipping, you can specify how long to keep the transaction log backup
files on the primary and secondary servers. This is important because, if the folder that stores the
transaction log backups becomes full, SQL Server cannot process any more backups and log shipping will
fail.
SQL Server prevents you from restoring a database to an older version of SQL Server when the backup file
was originally created on a newer version. As log shipping depends on restoring backups from the
primary server to the secondary server, you can't use it if the primary server is running a newer version of
SQL Server. You can configure log shipping if the secondary server is running a newer version than the
primary, but in this scenario you would be unable to fail back after failover.
You can keep track of log shipping jobs and ensure that a designated operator receives an alert if a job
fails to run by using SQL Server Agent alert jobs. You can use a dedicated monitor server to host an alert
job, or just host them on the primary and secondary servers.
A monitor server in a log shipping infrastructure enables you to offload the monitoring task to a separate
server. Using a monitor server also ensures that monitoring does not stop if either the primary or
secondary server fails. The monitor server hosts a single SQL Server Agent job that monitors both the
primary and secondary servers and sends alerts if any of the jobs fails to run for a defined period. You can
use a single monitor server for multiple log shipping instances, which makes it easier to manage and
maintain.
Note: If you decide to use a monitor server, you must configure it when you first configure
log shipping. You cannot subsequently add a monitor server without having to reconfigure log
shipping.
If you do not add a monitor server when you configure log shipping, you can still use SQL Server Agent
alerts. In this scenario, the primary server hosts an alert job that monitors the transaction log backup job,
and the secondary server hosts an alert job that monitors the copy and restore jobs.
Reference Links: In addition to the alerts that you can use to monitor log shipping, there
are various tables and stored procedures available to you. For more information about these
tables and stored procedures, see Monitor Log Shipping (Transact-SQL) on the MSDN website.
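As a sketch of how these tables can be used, the following queries (run against the msdb database on
each server) return the most recent backup, copy, and restore activity:

```sql
-- On the primary server: when was the last log backup taken?
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

-- On the secondary server: which backup was last copied and restored?
SELECT secondary_database, last_copied_file, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```

Comparing last_backup_file on the primary with last_restored_file on the secondary is a quick way to
gauge how far behind the secondary database currently is.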
Failover
Failover and failback do not occur automatically; you must manually perform these operations.
Pre-Installation Tasks
Before you configure log shipping, there are a
number of tasks you must perform:
The default setting for backup compression in log shipping is to use the server default, so whatever
compression setting the server is set to use, log shipping will also use. You can also set compression
regardless of the server setting by choosing either the Compress backup or the Do not compress
backup options.
Note that, although the SQL Server 2014 Web edition supports log shipping, it does not support backup
compression, so you should not plan to use compression with servers that run this edition.
Ensure that the database on the primary server uses either the Full or Bulk Logged recovery model. It
is not possible to perform transaction log backups for databases that use the Simple recovery model,
so you cannot use log shipping for these databases.
Create the required logins on the secondary server to enable users to access the secondary database
in the event of failover. Logins are not contained in user databases, so they are not copied when you
use the backup to create the database on the secondary server.
Reference Links: For more information about managing logins in log shipping, see
Management of Logins and Jobs After Role Switching on the MSDN website.
If the database you want to protect doesn't already exist on the secondary server, you need to create
it by restoring it from a backup of the database on the primary server. You can use an existing backup
for this purpose, or create a new full backup as part of the process of configuring log shipping. Note
that, if you create a new full backup, your existing database backup schedule will be affected and you
will have to ensure that you include it in your restore plan.
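Of the tasks above, the recovery model check can be performed in Transact-SQL. The following is a
minimal sketch, using the DemoDB database from the demonstration below as an example:

```sql
-- Check the current recovery model; log shipping requires FULL or BULK_LOGGED.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'DemoDB';

-- Switch to the Full recovery model if the database currently uses Simple.
ALTER DATABASE DemoDB SET RECOVERY FULL;
```

Remember that after switching from the Simple recovery model, you must take a full (or differential)
backup before transaction log backups can begin.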
Demonstration Steps
Configure Log Shipping
1.
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running. Then log onto
20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the D:\Demofiles\Mod09 folder, right-click Setup.cmd and click Run as administrator. When
prompted, click Yes.
3.
Start SQL Server Management Studio and connect to MIA-SQL by using Windows authentication.
4.
In Object Explorer, expand Databases, right-click DemoDB, click Tasks, and then click Ship
Transaction Logs.
5.
In the Database Properties window, on the Transaction Log Shipping page, click the Enable this
as a primary database in a log shipping configuration check box.
6.
Click Backup Settings. In the Network path to backup folder field, type \\MIA-SQL\PrimaryBackupFolder.
7.
In the If the backup folder is located on the primary server, type a local path to the folder field,
type D:\DemoFiles\Mod09\PrimaryBackupFolder.
8.
Click Schedule, in the New Job Schedule dialog box, under Daily frequency, in the Occurs every
field, type 3, and then click OK. This configures the job to run every three minutes.
9.
11. In the Secondary Database Settings window, click Connect, in the Connect to Server dialog box, in
the Server name field, type MIA-SQL\SQL2, and then click Connect.
12. On the Initialize Secondary Database tab, ensure that Yes, generate a full backup of the primary
database and restore it into the secondary database is selected.
13. Click the Copy Files tab. In the Destination folder for copied files field, type \\MIA-SQL\SecondaryRestoreFolder.
Note: In this demonstration, we are using two instances of SQL Server on the same
Windows server. In a real-world scenario, you would normally use two separate Windows servers.
14. Click Schedule, in the New Job Schedule dialog box, under Daily frequency, in the Occurs every
field, type 3, and then click OK.
15. Click the Restore Transaction Log tab, and then click Schedule. In the New Job Schedule dialog
box, under Daily frequency, in the Occurs every field, type 3, and then click OK.
16. Click Standby mode, and then click the Disconnect users in the database when restoring
backups check box.
17. In the Secondary Database Settings window, click OK, and then in the Database Properties
DemoDB window, click OK.
18. In the Save Log Shipping Configuration dialog box, wait for the configuration to complete, and
then click Close.
1.
In Object Explorer, expand SQL Server Agent, expand Jobs, and note the two log shipping jobs,
which are called LSAlert_MIA-SQL and LSBackup_DemoDB.
2.
In Object Explorer, click Connect, click Database Engine, in the Connect To Server dialog box, in
the Server name field, type MIA-SQL\SQL2, and then click Connect.
3.
In Object Explorer, under MIA-SQL\SQL2, expand SQL Server Agent, expand Jobs, and note the
three log shipping jobs, which are called LSAlert_MIA-SQL\SQL2, LSCopy_MIA-SQL_DemoDB, and
LSRestore_MIA-SQL_DemoDB.
4.
Expand Databases, and note that the DemoDB database is now present on the MIA-SQL\SQL2
instance, and that it shows as Standby / Read-Only.
5.
6.
7.
2.
3.
In the TestPrimary.sql query window, under the comment Query a table in the DemoDB database,
highlight the Transact-SQL statements and then click Execute.
4.
Review the results, and note that the value in the Color column is Black.
5.
Under the comment Update the color value to Red for ProductKey 210, highlight the Transact-SQL statements and then click Execute.
6.
Review the results, and note that the value in the Color column is now Red.
7.
In Object Explorer, under MIA-SQL, under SQL Server Agent, right-click LSBackup_DemoDB, and
then click Start Job at Step.
8.
In the Start Jobs - MIA-SQL dialog box, wait for the steps to complete, and then click Close.
9.
In Object Explorer, under MIA-SQL\SQL2, under SQL Server Agent, right-click LSCopy_MIA-SQL_DemoDB, and then click Start Job at Step.
10. In the Start Jobs - MIA-SQL\SQL2 dialog box, wait for the steps to complete, and then click Close.
11. Repeat steps 9 and 10 to run the LSRestore_MIA-SQL_DemoDB job.
12. In Windows Explorer, in the D:\Demofiles\Mod09 folder, double-click TestSecondary.sql.
13. In the TestSecondary.sql query window, under the comment Query a table in the DemoDB
database, highlight the Transact-SQL statements and then click Execute.
14. Review the results, and note that the value in the Color column is Red, indicating that the change
you made on the primary server has been restored on the secondary server.
15. Under the comment Update the color value to Blue for ProductKey 210, highlight the Transact-SQL statement and then click Execute.
16. Review the message stating that it is not possible to update the read-only database.
17. Close the TestPrimary.sql and TestSecondary.sql query windows, and do not save any changes.
18. Leave SQL Server Management Studio open for the next demonstration.
Planned failover
With a planned failover, the database on the
primary server is typically still accessible and no
data loss should have occurred. Perform a planned
failover with the following actions:
1.
Check the name of the latest backup on the primary server, and ensure that it has been applied to the
database on the secondary server. You can query the msdb.dbo.log_shipping_monitor_primary
table on the primary server and the msdb.dbo.log_shipping_monitor_secondary table on the
secondary server for this information.
2.
If the latest backup has not yet been restored to the secondary, run the
LSCopy_primaryservername_databasename and LSRestore_primaryservername_databasename
jobs on the secondary server to restore the latest log to the database. If there are multiple unrestored
backups, ensure you restore them in sequence.
3.
Back up the transaction log on the primary server by using the NORECOVERY option. This takes the
primary database offline, so users will no longer be able to perform any read-write operations on the
primary database.
4.
Manually copy the log backup from step 3 to the secondary server and then restore it, specifying the
NORECOVERY option.
5.
Perform a database restore of the secondary database on the secondary server, specifying the
RECOVERY option. This brings the database online.
6.
Disable the backup, copy, and restore jobs on the primary and secondary servers.
7.
To enable you to perform failover back to the original primary server after you have completed
maintenance, set up log shipping on the secondary server. Configure the old secondary server to be
the new primary server and the old primary server to be the new secondary server. Note that, if you
have previously performed this step, you can omit it when performing subsequent failovers because
the required SQL Server Agent jobs will already exist.
8.
Copy any users and SQL Server Agent jobs to the secondary server from the primary server.
Remember that log shipping only copies the transaction logs, so any logins or jobs that you added
after initially configuring log shipping will not be present on the secondary server.
9.
Reconfigure applications that access the database to reach it by using the secondary server.
10. After completing maintenance on the primary server, repeat the above steps to fail over back to the
original primary server.
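Steps 3 to 5 correspond to Transact-SQL along these lines. The database name and paths are illustrative
(they match the Products database used in the lab for this module):

```sql
-- Step 3: take a tail-log backup on the primary server; the NORECOVERY
-- option takes the primary database offline so no further changes occur.
BACKUP LOG Products
TO DISK = N'\\MIA-SQL\ProductsBackupFolder\ProductsFinalBackup.trn'
WITH NORECOVERY;

-- Step 4: restore the copied tail-log backup on the secondary server,
-- still without recovering the database.
RESTORE LOG Products
FROM DISK = N'\\MIA-SQL\ProductsRestoreFolder\ProductsFinalBackup.trn'
WITH NORECOVERY;

-- Step 5: recover the secondary database to bring it online for
-- read-write access.
RESTORE DATABASE Products WITH RECOVERY;
```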
Unplanned failover
The process for performing an unplanned failover is largely the same as that for a planned failover. The
most likely difference is that you will probably not be able to run any backups on the primary server in the
unplanned scenario; consequently, you may suffer some data loss because you cannot bring the
secondary database up to date.
Testing failover
After implementing log shipping, you should ensure that you test failover. There are several reasons for
this, including:
To ensure that you and any other responsible administrators know how to perform failover efficiently
and correctly so you can re-establish database access as quickly as possible.
To make it possible for you to provide an accurate estimate of the recovery time to business users.
Demonstration Steps
Initiate Manual Failover
1.
Ensure that you have completed the previous demonstration in this module.
2.
In Object Explorer, click MIA-SQL, and then open BackupLog.sql in the D:\Demofiles\Mod09 folder.
3.
In SQL Server Management Studio, in the BackupLog.sql query window, below the comment Backup
the DemoDB log on the primary server, highlight the Transact-SQL statement, and then click
Execute.
4.
5.
In SQL Server Management Studio, in Object Explorer, click MIA-SQL\SQL2. Then open
RestoreLog.sql in the D:\Demofiles\Mod09 folder.
6.
In Object Explorer, under MIA-SQL\SQL2, under SQL Server Agent, right-click LSCopy_MIA-SQL_DemoDB, and then click Start Job at Step.
7.
In the Start Jobs - MIA-SQL\SQL2 dialog box, wait for the steps to complete, and then click Close.
8.
9.
In SQL Server Management Studio, in the RestoreLog.sql query window, under the comment
Restore the final log backup, highlight the Transact-SQL statement, and then click Execute.
10. Under the comment Recover the database, highlight the Transact-SQL statement, and then click
Execute.
11. In Object Explorer, under MIA-SQL, right-click DemoDB, and then click Refresh. Note that the
DemoDB database is now in a Restoring state.
12. In Object Explorer, under MIA-SQL\SQL2, right-click DemoDB, and then click Refresh. Note that the
DemoDB database is now online.
13. Open TestFailover.sql in the D:\Demofiles\Mod09 folder.
14. In the TestFailover.sql query window, under the comment View a record, highlight the Transact-SQL statement, and then click Execute.
15. In the TestFailover.sql query window, under the comment Update a record, highlight the Transact-SQL statement, and then click Execute.
16. Close SQL Server Management Studio, and do not save any changes.
17. In Windows Explorer, in the D:\Demofiles\Mod09 folder, right-click Cleanup.cmd, click Run as
Administrator, and when prompted, click Yes.
A branch office at Adventure Works keeps a copy of the Products database on an instance of SQL Server
2014. The manager wants to ensure that, if the Products database becomes unavailable for any reason,
users will be able to access it again with minimal delay. The branch office does not have a WSFC available.
For this reason, and because there is no requirement for uninterrupted service, you have decided to
implement log shipping to keep the Products database highly available.
Objectives
After completing this lab, you will have:
Tested access to the Products database on the primary and secondary servers.
To meet the requirements for high availability of the Products database, you will configure log shipping.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Configure Log Shipping
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the MIA-SQL database engine instance, configure the Products database as the primary database
for log shipping.
2.
For the backup settings, specify both the network path \\MIA-SQL\ProductsBackupFolder and the
local path D:\Labfiles\Lab09\Starter\ProductsBackupFolder, and configure the SQL Server Agent
backup job to run every three minutes.
3.
4.
Initialize the secondary database by generating the backup and restoring it automatically.
5.
Use the \\MIA-SQL\ProductsRestoreFolder as the destination folder for copied backups, and
configure both the SQL Server Agent copy and restore jobs to run every three minutes.
6.
Use standby mode, and force the disconnection of users when the restore job runs.
Having configured log shipping for the Products database, you will check the log shipping configuration
and test access to the database on the primary and secondary servers.
The main tasks for this exercise are as follows:
1. Inspect the Log Shipping Configuration
2. Test Access to The Primary and Secondary Databases
Verify that the log shipping jobs were created successfully on the MIA-SQL and MIA-SQL\SQL2
instances.
2.
Verify that the primary and secondary databases are also in the correct states.
3.
In the MIA-SQL instance, use the following code to query the Product table in the primary database.
SELECT * FROM Products.dbo.Product
WHERE ProductSubcategoryKey = 14
AND StandardCost IS NULL;
GO
2.
On the MIA-SQL instance, use the following code to update the Product table.
UPDATE Products.dbo.Product
SET StandardCost = 868.6342
WHERE ProductSubcategoryKey = 14
AND StandardCost IS NULL;
GO
3.
On the MIA-SQL instance, use the following code to verify the updates.
SELECT * FROM Products.dbo.Product
WHERE ProductSubcategoryKey = 14;
GO
4.
Wait three minutes for the log backups to be shipped to the secondary server.
5.
On the MIA-SQL\SQL2 instance, use the following code to query the Product table and verify that
all rows have a non-NULL value in the StandardCost column (indicating that the change you made
on the primary server has been restored on the secondary server).
SELECT * FROM Products.dbo.Product
WHERE ProductSubcategoryKey = 14;
GO
6.
On the MIA-SQL\SQL2 instance, use the following code to verify that the secondary database is not
updateable.
UPDATE Products.dbo.Product
SET Color = 'Blue'
WHERE ProductKey = 210;
GO
Having configured log shipping and tested access to the primary and secondary databases, you will now
test failover.
The main tasks for this exercise are as follows:
1. Back Up the Log and Copy it to the Secondary Server
2. Restore the Log on the Secondary Server
3. Test Failover
On the MIA-SQL instance, use the following code to back up the transaction log for the Products
database by using the NORECOVERY option.
BACKUP LOG Products
TO DISK = N'\\MIA-SQL\ProductsBackupFolder\ProductsFinalBackup.trn'
WITH NORECOVERY;
GO
2.
On the MIA-SQL\SQL2 instance, use the following code to bring the Products database online.
RESTORE DATABASE Products WITH RECOVERY;
GO
On the MIA-SQL\SQL2 instance, use the following code to query the Product table.
SELECT * FROM Products.dbo.Product
WHERE ProductKey = 210;
GO
2.
On the MIA-SQL\SQL2 instance, use the following code to update the Product table.
UPDATE Products.dbo.Product
SET Color = 'Blue' WHERE ProductKey = 210;
GO
3.
On the MIA-SQL\SQL2 instance, use the following code to verify that the update succeeded.
SELECT * FROM Products.dbo.Product
WHERE ProductKey = 210;
GO
4.
On the MIA-SQL\SQL2 instance, use the following code to restore the ProductsFinalBackup.trn
transaction log backup to the Products database using the NORECOVERY option.
RESTORE LOG Products
FROM DISK = N'\\MIA-SQL\ProductsRestoreFolder\ProductsFinalBackup.trn'
WITH NORECOVERY;
GO
2.
In the D:\Labfiles\Lab09\Starter folder, run Cleanup.cmd as Administrator, and when prompted, click
Yes.
Log shipping.
Review Question(s)
Question: In what scenarios might an organization choose log shipping as a high availability
solution?
Module 10
Clustering with Windows Server and SQL Server 2014
Contents:
Module Overview
Module Overview
Ensuring that the databases that support an organization's applications remain highly available is
essential. SQL Server 2014 is closely integrated with the Windows Server Failover Clustering (WSFC)
feature in Windows Server 2012 and Windows Server 2012 R2. This enables you to create enterprise-class
clustering solutions that can deliver comprehensive high availability and disaster recovery solutions.
Objectives
After completing this module, you will be able to:
Describe how to use SQL Server AlwaysOn Failover Cluster Instances (FCIs) to maintain high
availability for SQL Server instances.
Lesson 1
The SQL Server AlwaysOn high availability technologies depend upon the functionality of WSFC, so it is
important that you understand the fundamental concepts and configuration options. In this lesson, you
will learn about WSFC in Windows Server 2012 R2 and Windows Server 2012, and the options and
considerations for creating clusters.
Lesson Objectives
After completing this lesson, you will be able to:
Describe how to use WSFC with Hyper-V to deliver high availability for virtualized workloads.
A Microsoft WSFC uses multiple servers that are running the Windows Server operating system, and a
form of shared storage, such as a storage area network (SAN), to provide high availability for services and
applications. The servers in a WSFC are called cluster nodes. A WSFC also includes a set of cluster
resources, which can be items of hardware, or logical resources such as IP addresses and services. Only one
node in a cluster can own a given cluster resource at any point. A WSFC arranges resources into resource
groups. A resource group is a collection of associated resources, such as the set of resources that you
would need to run a given clustered application. As part of the process of creating a SQL Server FCI, you
create a resource group that contains all the required resources to run the FCI. The Failover Cluster
Manager tool displays resource groups as cluster roles.
Note: In earlier versions of Windows Server, clustered roles were called clustered
applications or clustered services.
10-3
In addition to the SQL Server FCI role, there are many cluster roles that you can configure in a Windows
Server 2012 R2 Failover Cluster, including File Server roles, Virtual Machine clustered roles, and SQL Server
AlwaysOn Availability Group roles.
When a cluster node that owns a cluster role fails, ownership of the role passes to another node in the
cluster. This process is called failover. If a role fails over to another node, you can choose to automatically
revert the role to a preferred node. This process is called automatic failback. Each role has a preferred
owner, which is a node in the cluster that the resource group fails back to preferentially. You can specify
failover settings in the properties of each role. The failover settings that you can define include:
The number of times the cluster service is allowed to automatically restart or fail over a role within a
defined period of time. For example, you could allow a maximum of two restarts or failovers for a role
within a six-hour period. If the role fails a third time, the cluster service will not attempt to restart it or
initiate failover.
Whether or not automatic failback to a preferred node is allowed. Often, administrators do not
enable automatic failback because they prefer to manually control when it occurs.
The list of the preferred owners for the role to use on failback. The node that a role fails back to is
determined by a list of preferred owners, which ranks the other cluster nodes in the order that they
should be used to host the role. Nodes lower down the list will only be used if a node higher up is
unavailable.
Note: It is a common misunderstanding that the list of preferred owners specifies the
nodes to which cluster roles can pass on failover. In fact, the preferred owners list is only used to
specify the nodes to use for automatic failback.
Each node in a WSFC has a copy of the cluster database. The cluster database contains data about the
cluster objects in the cluster, the properties of those cluster objects, and additional configuration
information about the cluster. Cluster objects include the nodes, resources, and roles described above, as
well as the networks, network interfaces, and resource types that the WSFC uses. Every node must update
its copy of the cluster database when the cluster configuration changes.
Note: WSFCs include a range of resource types, which includes the cluster name resource
type and the IP address resource type. Each cluster resource in a WSFC must belong to one of the
defined types.
Cluster configuration
With SQL Server 2014, you can choose to create single-instance and multi-instance failover clusters. In a
single-instance failover cluster, you install just one instance of SQL Server, and this can fail over between
the nodes in the cluster. For example, you could create a two-node cluster with a single instance of SQL
Server 2014. In this configuration, one node would host the SQL Server role and the other would be idle,
but would take over the hosting of the role on failover. The benefit of this type of configuration is that it
ensures that after failover, the cluster can continue to process workloads in a predictable way, without any
unanticipated loss of performance.
In a multi-instance failover cluster, most or all of the nodes typically host a SQL Server instance. For
example, you could create a two-node multi-instance cluster in which each node hosts a SQL Server role.
Although this may appear to be a more efficient use of resources, to ensure that the cluster can continue
to process workloads in a predictable way after failover, each server must have the capacity to take on the
workload of the other node in addition to its own.
Reference Links: Windows Server 2012 R2 includes many enhancements to WSFC. For a list
of these enhancements, see the What's New in Failover Clustering in Windows Server 2012 R2?
page on the TechNet website.
Quorum
Quorum is an important concept in WSFC. As a SQL
Server administrator, you may not frequently need
to configure quorum settings, but it is still helpful to
understand how quorum works. The word quorum
comes originally from the world of politics, and
refers to the minimum number of people required
for an assembly to conduct its business. In a WSFC,
quorum refers to the minimum number of votes
required for the cluster to continue functioning as a
cluster.
Achieving quorum
In a WSFC, each node has a vote. Additionally, other agents can have a vote in a cluster. These include
disk and file share witnesses. The nodes, file share witnesses, and disk witnesses periodically cast their
votes, and then compare the number of votes against the known number of votes in the cluster, which is
specified in the cluster database. For a cluster to achieve quorum, more than half the votes in the cluster
must be counted. If a node determines that the number of votes has fallen below this, it will stop the
cluster service until it can communicate again with the other nodes.
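The vote-counting rule amounts to a simple majority test. A minimal sketch, with illustrative numbers matching the five-vote example that follows:

```python
def has_quorum(votes_detected, total_votes):
    """A partition keeps quorum only if it counts more than half of the
    cluster's known total votes (recorded in the cluster database)."""
    return votes_detected > total_votes / 2

# A five-vote cluster split into partitions of three and two nodes:
print(has_quorum(3, 5))  # True  - this partition keeps functioning
print(has_quorum(2, 5))  # False - this partition stops the cluster service
```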
In the example above, network conditions resulted in two groups of nodes that could not communicate.
On counting votes, the nodes in the smaller group would detect only two votes; instead of attempting to
take ownership of any cluster resource groups, they would shut down. The nodes in the larger group
would detect three votes, which is more than the half required to achieve quorum. These three nodes
would continue to function as a cluster, taking ownership of any resources if necessary. Once the network
problem was resolved, the nodes in the larger group would detect the votes from the two previously isolated
nodes, and they would be reincorporated into the cluster.
Quorum configurations
To reliably achieve quorum, a WSFC should contain an odd number of voters. With an even number, it
would be possible to divide the cluster into two groups containing the same number of votes, in which
case neither group could achieve quorum.
WSFCs support four types of quorum configuration:
Node majority. In this configuration, every node in the cluster has a vote. Node Majority is suitable
when there is an odd number of nodes in a cluster.
Node and disk majority. In this configuration, every node has a vote, and a designated disk, known
as the disk witness or the cluster disk, also has a vote. Node and disk majority is suitable
for clusters with an even number of nodes because the disk witness ensures an odd number of votes.
Two-node and four-node clusters are very common, so the node and disk majority configuration is
correspondingly commonly used. Considerations for using a disk witness include:
o The disk witness must itself be on shared storage and be highly available so that ownership can
switch between nodes.
o The disk must be a minimum of 512 MB in size and be formatted with either NTFS or ReFS.
o The disk must be dedicated to cluster use, and not used for any other purpose in the cluster.
You typically use a disk witness when the cluster uses shared storage that is not replicated, such as in a
single site cluster configuration.
Node and file share majority. Node and file share majority provides similar functionality to node
and disk majority, and you would likewise use it when you have an even number of nodes. Instead of
using a disk to provide the extra vote, however, this configuration uses a file share. The file share does
not contain a copy of the cluster database, but instead uses a log file to keep track of the cluster
configuration. Considerations for using a file share witness include:
o The file share can be a share on a server, including a virtual machine, in the same Active Directory
forest as the cluster nodes.
o The file share should not be on one of the cluster nodes because, if that node failed, the cluster
would lose two votes instead of just one. To provide high availability for the file share witness,
you can place it on a node in a separate cluster.
o The file share should not be used for any other purpose, including as a witness for another cluster
configuration.
o When you configure the file share, you must give read and write permissions to the Active
Directory computer account that represents the cluster name.
You would typically use a node and file share majority configuration when the cluster has an even number
of nodes distributed across multiple sites.
Disk only. This is a legacy configuration and you should not use it for new WSFC implementations. In
a disk-only configuration, only the disk witness has a vote; the cluster nodes do not. Although this solves
the problem of an even number of nodes, the disk witness is itself a single point of failure that can cause
the cluster to stop functioning if it fails, which makes this configuration unsuitable for most
scenarios.
When configuring a cluster, you do not have to specify the quorum type because the cluster does this
automatically. If there is an odd number of nodes, it chooses node majority. If there is an even number of
nodes and shared storage for a disk witness, it chooses node and disk majority. However, note that if
there is an even number of nodes and no shared storage for a disk witness, the cluster will choose node
majority, which may not be suitable for the reasons explained above. The cluster administrator can
reconfigure the quorum configuration as required by using the Configure Cluster Quorum Wizard in
Failover Cluster Manager.
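The automatic selection logic described above can be modeled roughly as follows. This is a simplified, illustrative sketch of the default behavior, not actual cluster code:

```python
def default_quorum_mode(node_count, has_witness_disk):
    """Simplified model of the quorum mode a WSFC chooses at creation."""
    if node_count % 2 == 1:
        return "Node Majority"          # odd number of nodes
    if has_witness_disk:
        return "Node and Disk Majority" # even nodes + shared disk witness
    # Even nodes and no shared storage: the cluster still defaults to
    # node majority, which the administrator may need to reconfigure.
    return "Node Majority"

print(default_quorum_mode(3, False))  # Node Majority
print(default_quorum_mode(4, True))   # Node and Disk Majority
```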
In Windows Server 2012 R2, quorum configuration is simplified by the addition of dynamic witness
functionality and the ability to dynamically readjust the number of quorum votes to prevent a split-brain
scenario.
Dynamic witness
Dynamic witness enables the failover cluster to dynamically reconfigure voting by either enabling or
disabling the witness vote as required. If you add a node to a cluster that has two nodes and a file share
witness, the new node would have a vote, making a total of four votes. Because this is an even number,
the cluster would automatically remove the vote from the witness without administrator intervention to
ensure an odd number of votes.
When the file share witness fails or is not available for some reason, a cluster will be left with an even
number of voting nodes. In this scenario, the cluster randomly selects a node and removes the vote from
it to ensure that, once again, there is an odd number of quorum votes.
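The dynamic witness behavior in the two paragraphs above amounts to keeping the total vote count odd. A simplified model, not actual cluster code:

```python
def total_quorum_votes(voting_nodes, witness_available):
    """Dynamic witness, simplified: keep the total number of votes odd.

    Odd node count: the witness vote is disabled.
    Even node count with a witness: the witness vote is enabled.
    Even node count without a witness: one node's vote is removed.
    """
    if voting_nodes % 2 == 1:
        return voting_nodes        # witness vote disabled
    if witness_available:
        return voting_nodes + 1    # witness vote enabled
    return voting_nodes - 1        # a randomly selected node loses its vote

# Adding a third node to a two-node cluster with a file share witness:
print(total_quorum_votes(3, True))   # 3 - witness vote removed automatically
print(total_quorum_votes(4, True))   # 5 - witness vote restores an odd total
print(total_quorum_votes(4, False))  # 3 - witness failed, one node vote removed
```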
Storage. You must use shared storage that is accessible by all nodes in the cluster. Any storage you
use for a cluster should be accessible only to nodes in that cluster. You should also ensure that any
proprietary storage solutions, such as storage area networks (SANs), are fully compatible with the
server hardware. For more information about cluster storage, see the topic Storage in a Windows
Server Failover Cluster in this module.
Storage adapters and device controllers. If you use Serial Attached SCSI (SAS) or fiber channel
storage, you should ensure that all components in the configuration, such as
host bus adapters, drivers, and firmware, are identical. You should only use non-identical components
if the hardware manufacturer supports your specific configuration. If you plan to use Internet Small
Computer System Interface (iSCSI) storage, you should configure a dedicated network to connect the
cluster nodes and the storage. You should not use this network for other types of network
communication.
If you plan to create a WSFC by using Hyper-V virtual machines, you should ensure that the servers use
64-bit processors that support both hardware-assisted virtualization and hardware-enforced Data
Execution Protection (DEP).
When you create a WSFC, you must run the Cluster Validation Wizard. This wizard checks the proposed
cluster configuration, including the available hardware components, for suitability to use in a cluster. The
wizard may generate warnings, such as when it detects only one network, and errors in the configuration,
such as when it detects unsuitable or missing hardware components.
For the services and applications that do require shared storage, there are several options you can use.
The most commonly used storage solutions for clusters are SANs and Network Attached Storage (NAS). A
SAN is a custom hardware storage solution that includes a set of high-performance hard disks, multiple
storage controllers for performance and redundancy, and high-speed redundant connectivity adapters
such as iSCSI and fiber channel host bus adapters (HBAs). Many SANs include the ability to replicate data
to a second SAN, which helps to keep data highly available. A NAS storage device is essentially a
dedicated file server that enables file sharing over an Ethernet network. NAS storage is generally cheaper
than SAN storage, but it does not offer comparable levels of performance.
Windows Server 2012 and Windows Server 2012 R2 include a range of features that make it possible to
achieve performance, scalability, and resilience comparable to a dedicated SAN, but by using commodity
hardware. Windows Server 2012 can use high-performance Serial Attached SCSI (SAS) disks in JBOD (just
a bunch of disks) enclosures, which can reduce storage costs and complexity considerably, and brings
high performance storage into the reach of many more organizations. The key Windows Server
technologies that you can use to create a storage solution include Storage Spaces and the enhanced
Server Message Block (SMB) protocol.
Storage spaces
Storage spaces were introduced in Windows Server 2012 to enable you to virtualize storage. The
technology allows you to amalgamate the space on a set of standard hard disks into a single unit of
storage, which you can then divide up as required to create appropriately-sized units of storage for use by
applications such as SQL Server.
A storage space solution includes the following components:
A storage pool. This is the logical container for the physical disks.
Storage spaces. Storage spaces, sometimes called virtual disks, are defined portions of the available
space in the storage pool. A storage space is equivalent to a logical unit number (LUN) in a SAN. Note
that the virtual disks in a storage pool are not the same as the virtual hard disks that you use with
Hyper-V virtual machines.
Volumes. A volume is the formatted space that is contained in a virtual disk. Applications store their
data in one or more volumes.
For example, you could take five disks, each of which is 500 GB in size, combine them into a 2.5 TB
storage pool, divide this into two storage spaces of 1.25 TB each, create two volumes on each storage
space, and then present these for use by your applications.
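The sizing example above, in round numbers. This sketch ignores resiliency overhead (mirror or parity copies would reduce the usable capacity):

```python
# Five 500 GB disks pooled and carved into storage spaces, as in the
# example above. Capacities are in GB; resiliency overhead is ignored.
disks = [500] * 5
pool_gb = sum(disks)                   # 2,500 GB = 2.5 TB storage pool
spaces = [pool_gb // 2, pool_gb // 2]  # two 1.25 TB storage spaces
volumes_per_space = 2                  # each space is formatted as two volumes

print(pool_gb)  # 2500
print(spaces)   # [1250, 1250]
```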
Storage spaces provide performance and resiliency for data in the following ways:
Storage tiers. Storage tiers in storage spaces enable you to combine solid state disks (SSDs) and
traditional spinning hard disks in such a way that the more frequently accessed data resides on the
SSDs, which provide faster performance. Windows Server monitors disk input and output (I/O) for the
storage space and automatically uses the faster SSD storage for frequently-accessed data and the
standard hard disks for less frequently-accessed data.
Resiliency types. You can configure resiliency settings for storage spaces, including:
o Simple, in which data is striped across the physical disks with no redundancy.
o Mirror, in which data is duplicated across two or three physical disks.
o Parity, in which data is striped with parity information, in a similar way to a RAID 5 volume.
Considerations for using storage spaces with Windows Server Failover Clustering
When using storage spaces in a WSFC, there are several important considerations, including:
A storage space must include a minimum of three physical disks, each of which has a minimum of 4
GB of storage.
The disks in the storage pool must pass cluster validation. You can run cluster validation by clicking
the Validate Cluster link in the Failover Cluster Manager tool.
Storage spaces must not use thin provisioning. Thin provisioning enables you to define a storage
space size even if the stated maximum amount of storage is not yet included in the storage space. For
example, if you currently have disks with only 2 TB of storage, you can specify a storage space size of
4 TB and add the additional disks as the space is required. However, with a WSFC, you must use fixed-size storage spaces.
You can only use the Simple and Mirror resilience types in a WSFC, and not the Parity resilience type.
Reference Links: For more information about using storage spaces with WSFC, see How to
Configure a Clustered Storage Space in Windows Server 2012 on the MSDN blogs website.
SMB is a network communication protocol that enables access to network resources such as file shares. Windows
Server 2012 included SMB 3.0, which was designed to take advantage of high-speed Ethernet network
adapters, which are significantly less expensive than fiber channel adapters. SMB 3.0 removes the need
to install and maintain a dedicated storage network for communication.
SMB 3.0 includes two important features that promote faster communications:
SMB Multichannel. This feature makes more efficient use of fast network connections by enabling the
establishment of multiple SMB sessions over multiple connections or over a single connection.
SMB Direct. For network adapters that support Remote Direct Memory Access (RDMA), SMB Direct
enables the transfer of very large amounts of data over fast Ethernet networks with minimal
processing costs for the sender and receiver.
Windows Server 2012 R2 includes SMB 3.02, which further enhances the functionality of the protocol.
Reference Links: For a list of the enhancements to SMB in SMB 3.02, see the What's New
in SMB in Windows Server 2012 R2? page on the TechNet website.
Windows Server 2012 and Windows Server 2012 R2 include the Scale-Out File Server for application
data cluster role. Together with the Storage Spaces and SMB 3.0 features described above, you can use
this role to create a storage solution comparable to a SAN in terms of performance, fault tolerance, and
scalability. A Scale-Out File Server for application data is a clustered file server in which all the nodes host the role simultaneously.
The actual storage is provided by SSDs and hard disks in a JBOD enclosure, with the storage virtualized
and presented to the cluster nodes as storage spaces. Application servers can use SMB 3.0 to access the
data through standard network switches.
A Scale-Out File Server is significantly cheaper to implement than a SAN because all the servers, switches,
network components, and disks are standard commodity hardware, so your organization is not locked in
to a particular hardware vendor.
Cluster-Aware Updating
One of the more important tasks associated with
maintaining WSFC is patching the servers that are
nodes in the cluster so they remain secure and
benefit from the latest functionality. Windows
Server 2012 and Windows Server 2012 R2 include a
feature called Cluster-Aware Updating, which
reduces the complexity and service outages that are
associated with the manual management of
patching and updating servers. Cluster-Aware
Updating is a server role you can add to a server
using the Server Manager tool.
Updating Run
The server that hosts the Cluster-Aware Updating role is called the Update Coordinator. Periodically, the
Update Coordinator initiates an Updating Run, which performs the following actions on each node in
sequence:
Puts the first node into maintenance mode. The Updating Run starts with the node that hosts the
fewest clustered roles, then updates the node that hosts the second fewest clustered roles, and so on.
You can change the order in which nodes are updated when you configure Cluster-Aware Updating.
You can also configure the Updating Run to occur only if all the nodes are online.
Moves any clustered roles that the node hosts to another node in the cluster.
Installs the required updates and restarts the node if necessary. You can configure Cluster-Aware
Updating to use various sources for updates, including Windows Server Update Services (WSUS) and a
file share that contains the required hotfixes.
Removes the node from maintenance mode before moving on to the next node.
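The default ordering of an Updating Run (fewest hosted roles first) can be sketched as follows; the node names, role counts, and function are hypothetical:

```python
def updating_run_order(roles_per_node):
    """Default node order for an Updating Run: the node hosting the
    fewest clustered roles is updated first. The real order is
    configurable in Cluster-Aware Updating."""
    return sorted(roles_per_node, key=roles_per_node.get)

hosted_roles = {"NODE1": 3, "NODE2": 1, "NODE3": 2}
print(updating_run_order(hosted_roles))  # ['NODE2', 'NODE3', 'NODE1']
```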
You can schedule Updating Runs to run at regular intervals, which enables you to coordinate updates to
cluster nodes with other regular maintenance tasks. Administrators can preview the available updates by
generating an update preview list. If you use WSUS, this list will include the updates that have already
been approved for the server nodes, but which have not yet been installed. As the updates install, you can
observe the progress of the Updating Run live in the Cluster-Aware Updating tool. After the Updating Run,
you can also view this information as a report.
Self-updating mode. In this mode, the Update Coordinator is a node in the WSFC. When the Updating
Run needs to update the Update Coordinator, the Update Coordinator role moves to another node in
the cluster, along with any other roles hosted by the node.
Remote-updating mode. In this mode, the Update Coordinator is not a node in the WSFC. In
remote-updating mode, the Update Coordinator must be a computer that is running Windows Server
2012 R2, Windows Server 2012, Windows 8, or Windows 8.1. Typically, you would choose to use
remote-updating mode when the nodes in the WSFC run Server Core editions of Windows Server
2012 R2 or Windows Server 2012.
Note: To benefit from Cluster-Aware Updating for SQL Server FCIs, all nodes in the FCI
must be running SQL Server 2012 with Service Pack 1 or later.
Reference Links: For more information about Cluster-Aware Updating, see Cluster-Aware
Updating Overview on the TechNet website. For more information about using Cluster-Aware
Updating to apply software updates to SQL Server FCIs, download the technical article Patching
SQL Server Failover Cluster Instances with Cluster-Aware Updating from the TechNet website.
Lesson 2
SQL Server 2014 FCIs take advantage of WSFC technology to ensure that SQL Server instances can remain
highly available in the event of hardware or operating system failure. This lesson explains how AlwaysOn
FCIs work and how you can install and test them, and describes some considerations for using them.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the enhancements to AlwaysOn FCIs in SQL Server 2012 and SQL Server 2014.
Explain the key concepts and considerations for implementing a multi-site AlwaysOn FCI.
Hardware failures.
Note that errors or corruption within SQL Server databases will not trigger failover in an AlwaysOn FCI.
You can also manually fail over to other nodes. This can be useful when you need to perform upgrades, or
for troubleshooting. For example, you might investigate performance problems on the active node by
failing over to another node and attempting to replicate the issue.
An AlwaysOn FCI includes the following components:
A WSFC clustered role. The clustered role contains all the FCI cluster resources. In an AlwaysOn FCI,
only one node can own the SQL Server clustered role at any given time.
The SQL Server binaries. Each node has a local copy of the binaries. Unlike on stand-alone instances,
the services do not start automatically on each node because the WSFC controls the starting and
stopping of these services.
You can install an AlwaysOn FCI by using the SQL Server Installation Center. When you install an
AlwaysOn FCI, the WSFC registers the SQL Server Service and the SQL Server Agent as cluster resources. If
you install SQL Server Analysis Services, then the cluster registers the SQL Server Analysis Services service
as a resource.
If you enable the FILESTREAM feature in the instance, the cluster also registers the FILESTREAM file share.
An FCI has dependencies on the following cluster resources:
Shared storage. This enables more than one node in the cluster to take ownership of the storage. You
can use various types of shared storage with an AlwaysOn FCI, such as iSCSI, fiber channel, and SMB
file shares. For more information about shared storage in WSFC, see the Storage for a Windows Server
Failover Cluster topic in the previous lesson of this module.
Virtual network name. This identifies the cluster, and the virtual network name itself has a
dependency on one or more IP address resources.
The principal enhancements to clustering in SQL Server 2012 include improvements to health monitoring,
failover policies, troubleshooting, and support for multi-subnet clustering:
More detailed and reliable health monitoring. The WSFC maintains a dedicated connection to the
active SQL Server instance and gathers detailed health information by using the new system stored
procedure sp_server_diagnostics. The dedicated connection enables the WSFC to maintain health
monitoring even when an instance is processing demanding workloads. This helps to reduce the
number of needless failovers that can occur when a heavily-stressed server does not respond in a
timely fashion to health monitoring requests.
More flexible failover policies. The sp_server_diagnostics system stored procedure periodically
gathers health statistics and sends them to the WSFC. The improved granularity of data that
sp_server_diagnostics provides enables you to configure more flexible failover policies that specify
more precisely the conditions that can and cannot trigger failover.
Reference Links: For more information about the sp_server_diagnostics system stored
procedure, see the sp_server_diagnostics (Transact-SQL) topic in SQL Server Books Online.
For more information about failover policies, see the Failover Policy for Failover Cluster Instances
topic in SQL Server Books Online.
Support for multi-subnet clustering. SQL Server 2012 supports clusters whose nodes are located on
multiple subnets in the same Active Directory domain, including those that are geographically remote
from each other. In previous versions of SQL Server, this configuration was only possible by creating a
virtual local area network (VLAN) and placing all the cluster nodes on to it. In a multi-subnet cluster,
each subnet has its own dedicated storage, such as a SAN. Consequently, when implementing a
multi-subnet failover cluster instance, you must also implement a replication solution to help ensure
that the SANs in each subnet are synchronized with each other.
SQL Server 2014 builds upon the improvements in SQL Server 2012, and includes the following additional
enhancements:
1. Support for cluster shared volumes. Windows Server 2012 and Windows Server 2012 R2 support
cluster shared volumes, which helps to reduce the number of LUNs required for storage, and the
complexity of configuring and administering them.
2. Improved diagnostics. The AlwaysOn dynamic management views in SQL Server 2014 include more
information about the WSFC on which the AlwaysOn FCI resides. This makes troubleshooting
problems easier because more of the required information is available.
Pre-installation
Before installing an AlwaysOn FCI, you should
review the following considerations:
PowerShell 2.0. PowerShell 2.0 must be enabled on a computer before you can install SQL Server
2014. SQL Server 2014 setup does not install PowerShell 2.0, so you must enable it manually.
Anti-virus software. Anti-virus software can interfere with the functioning of the WSFC service, so
you should ensure there is no anti-virus software running on any of the servers that will be nodes in
the cluster.
Node configuration. All the nodes in the cluster should have identical configurations, including
COM+, the disk drive letters in use, and the users who are members of the Administrators group.
Error logs. On each server that will be a cluster node, you should review the server error logs and
resolve any issues before proceeding with the installation of SQL Server 2014.
Cluster Validation Wizard. Run the cluster validation wizard to ensure that the cluster configuration
is suitable.
Storage. Ensure the disks that will store the SQL Server 2014 binaries are not compressed or
encrypted, as either of these configurations will prevent installation.
Active Directory. You cannot install a SQL Server FCI on a WSFC node that is a domain controller.
Installation
To perform an installation of a SQL Server 2014 AlwaysOn FCI, you must log on to the server by using an
account that has local administrator rights, and then in the SQL Server Installation Center, choose the
option New SQL Server failover cluster installation.
During the installation, you must specify the features to install. You can only install the SQL Server
Database Engine, SQL Server Analysis Services in tabular mode, and SQL Server Analysis Services in
multidimensional mode as part of an FCI. You can choose to add other SQL Server features and
components, but these will run as stand-alone components on the cluster node and will not fail over to
other nodes. You must also specify a SQL Server network name and an instance name.
The SQL Server network name is a unique name on the network that identifies the FCI. The SQL Server
network name is associated with the IP address resource for the instance. This enables clients to connect
to the FCI without needing to know which node currently owns the SQL Server role. Clients can query a
DNS server for the SQL Server network name to obtain the IP address for the instance. When they connect
to the instance by using the IP address, they connect to the active node, which currently owns that SQL
Server network name resource. When failover occurs, the SQL Server network name passes to the new
active node.
Note: In older versions of SQL Server, the network name was called the virtual server name.
Instance name
Assuming that there is not already a default instance on any of the cluster nodes, you can choose to install
either a default instance or a named instance. If there is already a default clustered instance or a default
stand-alone instance on any of the nodes, you must install a named instance. When you install a default
instance, you can connect to the SQL Server instance by using just the SQL Server network name. If you
install a named instance of SQL Server, you must specify both the SQL Server network name and the
instance name when you connect to the instance by using a tool such as SQL Server Management Studio.
For example, if you create a named instance called SQL1 and use a SQL Server network name of
SQLCLUSTER, you would connect by using the name SQLCLUSTER\SQL1.
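The naming rule above can be expressed as a small helper. The function is illustrative only; real clients would simply place the resulting name in their connection string:

```python
def fci_connection_name(network_name, instance_name=None):
    """Build the name clients use to reach a SQL Server FCI: the SQL
    Server network name alone for a default instance, or
    NETWORKNAME\\INSTANCE for a named instance."""
    if instance_name:
        return f"{network_name}\\{instance_name}"
    return network_name

print(fci_connection_name("SQLCLUSTER"))          # SQLCLUSTER
print(fci_connection_name("SQLCLUSTER", "SQL1"))  # SQLCLUSTER\SQL1
```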
After installing the FCI on the first cluster node, you can run the Add node to a SQL Server failover
cluster option in the SQL Server Installation Center on the other nodes to add additional nodes to the FCI.
When adding the node, you should specify the name of the SQL Server FCI to which you want to add the
node.
Note: To ensure that SQL Server FCIs have a uniform naming convention, it is common
practice to use named instances rather than default instances.
SQL Server 2014 supports placing the tempdb database on local storage instead of shared storage. Unlike
the other system databases and user databases, tempdb does not persist; each time a SQL Server instance
restarts, it re-creates tempdb. Consequently, in an AlwaysOn FCI, it is not necessary to store
tempdb on shared storage because, on failover, the new active node will create the database
automatically when the SQL Server Service starts.
Storing tempdb locally can improve performance. The tempdb database can experience very high levels
of I/O in some environments, which can negatively affect the performance of the shared storage for the
FCI. This is particularly true when an FCI hosts several databases that all make extensive use of tempdb, or
where snapshot-based isolation levels are enabled. To ensure that tempdb does not impact I/O
performance, you can configure the FCI to use a local path for the database. The performance benefit is
greater if each node in the cluster has a high-performance SSD to store tempdb.
If you decide to use local storage for tempdb, you must ensure that every node is configured to use the
same file path for the tempdb database files.
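The consistency requirement above can be checked with a trivial comparison; the paths and function here are hypothetical:

```python
def tempdb_paths_consistent(paths_by_node):
    """Every node in the FCI must use the same local path for the
    tempdb database files; return True only if all paths match."""
    return len(set(paths_by_node.values())) == 1

paths = {"NODE1": r"E:\tempdb", "NODE2": r"E:\tempdb"}
print(tempdb_paths_consistent(paths))  # True
```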
Demonstration Steps
Prepare for Cluster Installation
1.
2. In the D:\Demofiles\Mod10 folder, right-click Setup.cmd, and then click Run as administrator.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
4. In the D:\Demofiles\Mod10 folder, right-click Setup2.cmd, and then click Run as administrator.
5. In the User Account Control dialog box, click Yes, and wait for the script to finish.
6. On the Start screen, type Failover and then start the Failover Cluster Manager app.
7. Click MIA-CLUSTER.adventureworks.msft, and then review the information about the Windows
Server Failover Cluster, noting the following points:
o
8.
9. In the Nodes pane, in the Status column, note that the status of each of the three cluster nodes is
Up.
10. Expand Storage, and then click Disks. In the Disks pane, in the Status column, note that the status
of Cluster Disk 1 is Online.
11. Right-click Cluster Disk 1, and then click Properties.
12. In the Cluster Disk 1 Properties dialog box, on the General tab, review the information about the
cluster disk.
13. Click the Policies tab, and review the failure responses for the disk resource.
14. Click the Advanced Policies tab, and review the possible owners for the disk resource.
15. In the Cluster Disk 1 Properties dialog box, click Cancel.
16. Click Networks, and then in the Networks pane, in the Status column, note that the status of the
Cluster Network 1 resource is Up.
17. Click MIA-CLUSTER.adventureworks.msft, and then click Validate Cluster.
18. In the Validate a Configuration Wizard, on the Before you Begin page, click Next.
19. On the Testing Options page, click Run all tests (recommended), and then click Next.
20. On the Review Storage Status page, select the Cluster Disk 1 check box, and then click Next.
21. On the Confirmation page, click Next.
22. Wait for the validation to complete, and then click View Report.
23. In the Failover Cluster Validation Report, in the Cluster Configuration section, click Validate
Quorum Configuration, review the information, and then click Back to Failover Validation Report.
24. In the Network section, click Validate Network Communication, review the information, and then
click Back to Cluster Failover Validation Report.
25. Close the Cluster Failover Validation Report, and then in the Validate a Configuration Wizard,
click Finish.
26. Close Failover Cluster Manager.
Install a New Failover Cluster Instance
1. In the C:\SQLServer2014-x64-ENU folder, double-click Setup.exe, and then in the User Account
Control dialog box, click Yes.
2. Wait a few moments for the SQL Server Installation Center to start, click Installation, and then click
New SQL Server failover cluster installation.
3. Wait a few moments for the Install a SQL Server Failover Cluster wizard to start. On the Global
Rules page, wait for the rule check to complete. Then on the Microsoft Updates and Product
Updates pages, clear any checkboxes and click Next.
4. After the Install Setup Files page completes, on the Install Failover Cluster Rules page, wait for the
rules check to complete, review the results, and then click Next.
5. On the Product Key page, click Next, on the License Terms page click the I accept the license
terms check box, clear the option to turn on Customer Experience Improvement Program (CEIP)
and Error Reporting, and then click Next.
6. On the Setup Role page, click the SQL Server Feature Installation radio button, and then click
Next.
7. On the Feature Selection page, click the Database Engine Services check box, and then click Next.
8. On the Instance Configuration page, in the SQL Server Network Name field, type
DEMOSQLCLUSTER, click Named instance, in the Named instance field, type DEMOSQL, and then
click Next.
9. On the Cluster Resource Group page, review the information, and then click Next.
10. On the Cluster Disk Selection page, ensure that the Cluster Disk 1 check box is selected, and then
click Next.
11. On the Cluster Network Configuration page, click the IP Type check box, in the Address column,
type 10.10.0.160, and then click Next.
12. On the Server Configuration page, in the SQL Server Agent row, in the Account Name column,
type ADVENTUREWORKS\ServiceAcct, in the Password column, type Pa$$w0rd, in the SQL Server
Database Engine row, in the Account Name column, type ADVENTUREWORKS\ServiceAcct, in the
Password column, type Pa$$w0rd, and then click Next.
13. On the Database Engine Configuration page, on the Server Configuration tab, click Add Current
User, and then click Next.
14. On the Ready to Install page, click Install. Wait for the installation to complete, on the Complete
page, click Close, close the SQL Server Installation Center, and then close File Explorer.
Add a Node to the Failover Cluster Instance
1.
2. In the C:\SQLServer2014-x64-ENU folder, double-click Setup.exe, and then in the User Account Control dialog box, click Yes.
3. In SQL Server Installation Center, click Installation, and then click Add node to a SQL Server failover cluster.
4. On the Global Rules page, wait for the rule check to complete. Then on the Microsoft Updates and Product Updates pages, clear any checkboxes and click Next.
5. After the Install Setup Files page completes, on the Add Node Rules page, wait for the rules check to complete, review the results, and then click Next.
6. On the Product Key page, click Next. On the License Terms page, click the I accept the license terms check box, clear the Turn on Customer Experience Improvement Program and Error Reporting check box, and then click Next.
7. On the Cluster Node Configuration page, note that the node will join the DEMOSQL Failover Cluster Instance, click Next, and then on the Cluster Network Configuration page, click Next.
8. On the Service Accounts page, in the SQL Server Agent row, in the Password column, type Pa$$w0rd, in the SQL Server Database Engine row, in the Password column, type Pa$$w0rd, and then click Next.
9. On the Ready to Add Node page, click Install. Wait for the installation to complete, on the Complete page, click Close, close the SQL Server Installation Center, and then close File Explorer.
2. In the Roles pane, click SQL Server (DEMOSQL), review the information in the Status and Owner Node columns, in the Actions pane, click Properties, review the properties of the role, and then click Cancel.
3. In Failover Cluster Manager, at the bottom of the SQL Server (DEMOSQL) pane, click the Resources tab, and then review the information about the resources associated with SQL Server (DEMOSQL).
4. Minimize Failover Cluster Manager. Then start SQL Server Management Studio and connect to the DEMOSQLCLUSTER\DEMOSQL database engine instance using Windows authentication.
5.
6.
When failover occurs, ownership of the SQL Server role transfers to the new active node, and then the SQL
Server Service on the new cluster node starts. The list below shows the sequence of events that occur
during failover:
1. If possible, SQL Server writes all the pages in the buffer cache to disk. Note that it may not always be possible to write the dirty pages to disk, because a hardware or other error may prevent it.
2. The WSFC stops the SQL Server services on the active node.
3. The WSFC transfers ownership of the cluster resources, including the shared storage, to the new active node.
4. The SQL Server services start on the new active node. All client connections that use the FCI name to connect to the cluster automatically go to the new active node.
Writing the dirty pages to disk can be quite time consuming if there is a large number of them in the cache, and
this can lead to unexpectedly long or inconsistent failover times. If you anticipate that this might be a
problem, you should consider using the indirect checkpoint feature to reduce the time it takes to flush the
cache. To use indirect checkpoint, you specify a target recovery time value (in seconds). SQL Server uses
this value to calculate how frequently it needs to flush the cache to be able to achieve the target recovery
time. While indirect checkpoint can deliver more predictable failover times, it can consume more system
resources because it typically involves running more frequent checkpoints. Consequently, to avoid
introducing performance problems, you should steer clear of setting target recovery time values that are
too short.
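Indirect checkpoint is enabled per database by setting a target recovery time. The following is a minimal Transact-SQL sketch; SalesDB is a hypothetical database name:

```sql
-- Enable indirect checkpoint by specifying a target recovery time.
-- (SalesDB is a hypothetical database name.)
ALTER DATABASE SalesDB
SET TARGET_RECOVERY_TIME = 60 SECONDS;
```

Setting the value back to 0 returns the database to automatic checkpoints controlled by the server-wide recovery interval setting.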
The health check time-out threshold controls how long the WSFC will wait without a successful health check before initiating failover. The WSFC performs a health check by using the sp_server_diagnostics
stored procedure. The default value is 30 seconds. When you define a health check time-out threshold,
the WSFC performs a health check every x seconds, where x is one third of the value specified by the
health check time-out threshold. For example, at the default health check time-out threshold value of 30
seconds, the WSFC performs a health check every 10 seconds. If sp_server_diagnostics does not respond
within the full health check time-out threshold period, the WSFC will deem the SQL Server instance
unresponsive, and will attempt to restart the resources on the current node or fail over to another node,
depending on the values configured in the restart and failover settings.
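The health check time-out threshold can be changed with Transact-SQL, run on the FCI itself; the value is specified in milliseconds:

```sql
-- Raise the health check time-out threshold from the default
-- 30,000 ms to 60,000 ms for this failover cluster instance.
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout = 60000;
```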
The failover condition level defines the conditions that cause failover. There are five failover condition
levels, with level one including the fewest events that could initiate failover, and five including the most.
The table below describes the failover condition levels. The default level is level three.
(Table: failover condition levels and names. Example condition: the SQL Server instance fails to respond within the health check time-out threshold.)
As the levels increase from one to five, each includes the conditions from the previous level. This means
that a higher level will be more likely to trigger failover than a lower level. Consequently, when selecting
which failover condition level to use, you will need to decide the frequency with which you can tolerate
failover for the FCI.
Note: In addition to the five levels in the table above, there is a level 0, which is for
maintenance purposes. If you configure level 0, the WSFC will not attempt to restart or fail over
services at all.
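The failover condition level can also be set with Transact-SQL on the FCI; this sketch sets the default level of three explicitly:

```sql
-- Set the failover condition level (1-5; 0 disables automatic
-- restart and failover for maintenance purposes).
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY FailureConditionLevel = 3;
```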
Automatic failback
Automatic failback can be helpful because it enables a node to resume hosting a role without the need
for administrator intervention. However, you should be cautious about using automatic failback because
an administrator does not have the opportunity to perform checks and tests to ensure that the node is
ready to resume ownership of a role, and that whatever problem caused the failover has been resolved.
Trigger failover.
Demonstration Steps
Trigger Failover
1. Ensure that you have completed the previous demonstration in this module.
2. On the MIA-CLUST2 virtual machine, on the Start screen, type Failover, and then start Failover Cluster Manager.
3.
4. On the host computer, in Hyper-V Manager, right-click 20465C-MIA-FCI-CLUST1, and then click Shut Down. This action simulates failure of the node that is the owner of the SQL Server (DEMOSQL) role.
5. On the MIA-CLUST2 virtual machine, in Failover Cluster Manager, click Roles, and then watch the SQL Server (DEMOSQL) role fail over from the MIA-CLUST1 node to the MIA-CLUST2 node.
6. When failover has finished, in the lower part of the window, click the Resources tab, and review the available resources, checking that they all report a status of Online.
1. Right-click the SQL Server (DEMOSQL) role, and then click Properties.
2. In the SQL Server (DEMOSQL) Properties dialog box, on the General tab, in the Preferred Owners box, click the MIA-CLUST1 check box, click the Failover tab, click the Allow Failback radio button, ensure that the Immediately radio button is selected, and then click OK.
3. On the host computer, in Hyper-V Manager, right-click 20465C-MIA-FCI-CLUST1, and then click Start.
4. On the MIA-CLUST2 virtual machine, in Failover Cluster Manager, click Roles, and then wait for the SQL Server (DEMOSQL) role to fail back from the MIA-CLUST2 node to the MIA-CLUST1 node as the MIA-CLUST1 virtual machine comes online.
Multi-Subnet Clustering
With versions of SQL Server prior to SQL Server
2012, when you create a clustered SQL Server
instance, the nodes must be located on the same IP
network. The Enterprise Editions of SQL Server 2012
and SQL Server 2014 enable you to create multi-site
AlwaysOn FCIs that include nodes residing on
different IP networks. The ability to create a multi-site FCI enables various scenarios, including:
Using a single FCI that includes a disaster recovery site in a different geographical location to the
primary site.
Note: Although you can create multi-site clustered instances with SQL Server 2008 R2, this
requires you to implement a VLAN, which adds considerable complexity to the configuration.
The nodes in each site will not be able to access the same storage, so if you want to enable failover
between the sites you must duplicate the storage on both. Typically, you would achieve this by using a
SAN in each site, and then using SAN replication to copy data from one SAN to the other.
OR dependencies
In earlier versions of SQL Server, all cluster resource dependencies are AND dependencies, which means
that before a resource can come online, all the resources that it depends on have to come online first. For
example, for the SQL Server resource to come online, both the IP address and the disk resources must
already be online. In SQL Server 2012 and SQL Server 2014, the SQL Server resource in an FCI can use OR
dependencies to enable the use of multiple IP addresses. This makes it possible for a resource to come
online if only some of the resources it depends on are available. For example, the SQL Server resource can
come online when the disk resource and either IP address 1 or 2 are online. This makes it possible for the
nodes in one site to use IP address 1, and the nodes in the second site to use IP address 2. The roles can
come online on either site without requiring both IP addresses to be online.
When you create an AlwaysOn FCI that includes nodes on multiple subnets, SQL Server Setup
automatically detects the multi-subnet configuration and sets up the necessary OR dependencies.
Note: Multi-subnet FCIs support only one IP address per subnet.
When the validation check runs during installation, you should not perform the storage check. When
you skip the storage check, you will be prompted to acknowledge that you do not require support
from Microsoft for the cluster. However, because a multi-subnet cluster is a supported configuration,
you will still qualify for Microsoft support.
You should not use the Node and Disk Majority quorum model, even if the SQL Server Setup
recommends it because a multi-site FCI uses separate storage in each site. Instead, you should use the
Node and File Share Majority model or the Node Majority model.
To improve application failover time, SQL Server Setup registers both the IP addresses used in the
multi-site FCI with the DNS servers. When failover occurs, the clients can use the second IP address to
connect.
After a number of recent service outages because of the non-availability of application databases,
managers at Adventure Works are keen to implement a new high availability solution that will help to
ensure that service availability remains within the published service level agreement limits. You have been
asked to investigate the possible high availability solutions for the SQL Server instances that support the
companys key application databases. You have created a WSFC that includes three cluster nodes to help
you achieve this. The cluster uses an iSCSI target server as its shared storage solution. You plan to
implement a SQL Server AlwaysOn FCI, and test configuration options, including local tempdb storage,
failover, and failback.
Objectives
After completing this lab, you will have:
2.
Task 2: View the Windows Server Failover Cluster configuration in Failover Cluster
Manager
1.
Open Failover Cluster Manager, and review the cluster configuration information, including the
Nodes, Storage and Networks information.
2.
1.
2. Install a new SQL Server Failover Cluster Instance of the SQL Server Database Engine by using the following settings:
o IP address: 10.10.0.160.
o Service accounts: For both the SQL Server service and the SQL Server Agent service, use the account ADVENTUREWORKS\ServiceAcct with the password Pa$$w0rd.
2. Add MIA-CLUST2 as a node in the new SQL Server Failover Cluster Instance that you created in the last task. For both the SQL Server service and the SQL Server Agent service, use the account ADVENTUREWORKS\ServiceAcct with the password Pa$$w0rd.
3. Add MIA-CLUST3 as a node in the new SQL Server Failover Cluster Instance in the same way.
1. On the MIA-CLUST1 virtual machine, review the Failover Cluster Instance configuration in the Failover Cluster Manager tool.
2. Ensure that MIA-CLUST1 is the owner node of the SQL Server (SQL1) role, moving the role to MIA-CLUST1 if necessary.
3. Connect to the Failover Cluster Instance in SQL Server Management Studio, and check the value of the IsClustered property.
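The check in step 3 can be performed with a query such as the following; SERVERPROPERTY('IsClustered') returns 1 when the instance is part of a failover cluster:

```sql
-- Returns 1 for a clustered instance, 0 otherwise.
SELECT SERVERPROPERTY('IsClustered') AS IsClustered;
```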
To ensure optimal throughput for the AlwaysOn Failover Cluster Instance's shared storage, you will configure the
tempdb system database to use local storage.
The main tasks for this exercise are as follows:
1. Change the Storage Location for tempdb
2. Review tempdb Configuration
2. In SQL Server Management Studio, execute Transact-SQL statements to move the tempdb data and log files to C:\tempdb.
3. Use Failover Cluster Manager to move the SQL Server (SQL1) role from MIA-CLUST1 to MIA-CLUST2.
2. On the MIA-CLUST2 virtual machine, open C:\tempdb, and note that the folder contains the tempdb database files.
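The Transact-SQL for moving tempdb might look like the following sketch; it assumes the default logical file names tempdev and templog, and the new locations take effect only after the instance restarts:

```sql
-- Move the tempdb data and log files to local storage.
-- tempdev and templog are the default logical file names.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'C:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'C:\tempdb\templog.ldf');
-- The files are created in the new location when the
-- SQL Server service next starts.
```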
1.
2. Test connectivity to the new HumanResources database by writing and executing a Transact-SQL statement to retrieve all the rows in the dbo.Employee table.
3. On the host computer, turn off the MIA-CLUST2 virtual machine and use the Failover Cluster Manager tool to observe failover.
4. Review the configuration information about the SQL Server (SQL1) role, and run the query against the HumanResources database again to ensure that failover was successful.
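The test query in step 2 might simply be:

```sql
-- Retrieve all rows from the dbo.Employee table in the
-- HumanResources database.
SELECT * FROM HumanResources.dbo.Employee;
```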
1. Configure the SQL Server (SQL1) role to use MIA-CLUST2 as the preferred owner and to fail back immediately.
2. On the host computer, restart the MIA-CLUST2 virtual machine and then, on MIA-CLUST1, check that failback occurs automatically.
3.
20465C-MIA-FCI-CLUST1
20465C-MIA-FCI-CLUST2
20465C-MIA-FCI-CLUST3
Review Question(s)
Question: Can you think of any high availability scenarios in your organization that you
might implement from new or upgrade by using an AlwaysOn FCI?
Module 11
AlwaysOn Availability Groups
Contents:
Module Overview 11-1
Module Overview
SQL Server 2014 includes AlwaysOn Availability Groups to provide high availability for groups of
databases. This module describes AlwaysOn Availability Groups in SQL Server 2014, explains the key
concepts of AlwaysOn Availability Groups, and describes how you can use them to maintain highly
available databases.
Objectives
After completing this module, you will be able to:
Describe the fundamental concepts and terminology for AlwaysOn Availability Groups.
Lesson 1
AlwaysOn Availability Groups take advantage of Windows Server Failover Cluster (WSFC) technology to
protect SQL Server databases. AlwaysOn Availability Groups include various features, such as automatic
failover, and active secondary servers, that ensure continuity of service while helping you to make optimal
use of resources.
This lesson introduces you to AlwaysOn Availability Groups, and explains the fundamental concepts and
terminology you will need to understand when planning to implement them in your organization.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the fundamental concepts and terminology for AlwaysOn Availability Groups.
Replicas
Availability Databases
The databases that you include in an availability group are referred to as availability databases. These
must be online, read/write databases that are configured to use the FULL recovery model. When you
configure an availability group, you must create a FULL backup of each availability database on the
primary replica, and then restore these backups to the secondary replicas by using the RESTORE WITH
NORECOVERY option. The primary replica keeps the secondary replicas up to date with updates to the
availability databases in a process called synchronization. During synchronization, the primary replica
sends the transaction log records of each database to the secondary replicas. The secondary replicas then
write these changes to the local transaction log for each database, a process that is sometimes referred to
as hardening the log.
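The preparation described above can be sketched as follows, using a hypothetical database name and backup share:

```sql
-- On the primary replica: take a full backup of the availability
-- database (SalesDB and the share name are hypothetical).
BACKUP DATABASE SalesDB
TO DISK = N'\\FileServer\Backups\SalesDB.bak';

-- On each secondary replica: restore the backup, leaving the
-- database in the RESTORING state so that it can synchronize.
RESTORE DATABASE SalesDB
FROM DISK = N'\\FileServer\Backups\SalesDB.bak'
WITH NORECOVERY;
```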
Clients can connect to the databases in an availability group by using an availability group listener. An
availability group listener consists of a DNS name, a TCP port number, and one or more IP addresses.
When clients connect to the availability group listener by using its DNS name, the listener redirects the
client request to the primary replica or to a secondary replica that is enabled for read-only access. Using a
listener in this way enables you to spread client workloads across multiple instances, and removes the
need to reconfigure clients to connect to a new primary replica during a failover.
Note: Even though AlwaysOn Availability Groups use Windows Server Failover Cluster
technology, they do not require the use of a dedicated storage device such as a storage area
network (SAN).
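A listener can be created with Transact-SQL along these lines; the availability group name, listener name, and IP address are hypothetical:

```sql
-- Add a listener with a static IP address to an existing
-- availability group.
ALTER AVAILABILITY GROUP SalesAG
ADD LISTENER N'SalesListener' (
    WITH IP ((N'10.10.0.170', N'255.255.0.0')),
    PORT = 1433);
```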
Availability Modes
Ideally, a high availability solution will deliver excellent read/write performance and minimize the amount of data loss that might occur when failover happens. However, it is sometimes not possible or economically feasible to achieve both of these goals. Factors such as poor network performance, high concurrency, or the distance between replicas can mean that you might have to prioritize one goal over the other. Availability modes enable you to balance these competing requirements to achieve the optimal solution. AlwaysOn Availability Groups support two availability modes: synchronous-commit and asynchronous-commit.
Synchronous-commit mode
In synchronous-commit mode, when a client updates a database, the primary replica writes the change to
its local log and passes it to its secondary replicas. The primary replica then waits for the secondary
replicas to confirm that they have hardened their logs before committing the transaction and sending an
acknowledgement to the client. Synchronous-commit mode ensures that the availability databases on the
primary and secondary replicas are synchronized with each other at all times, so that, in the event of
failover, there is no data loss. However, the latency that is introduced because the primary replica must
wait before it commits each transaction can result in reduced performance. You would typically use
synchronous-commit mode to support a local high availability scenario in which both primary and
secondary servers are on the same network; using synchronous-commit mode between replicas that are
on remote networks can result in poor performance. Each AlwaysOn Availability Group can include up to
three replicas that use synchronous-commit mode, including the primary replica. For example, you could
create an AlwaysOn Availability Group with one primary replica and either one or two secondary replicas
that use synchronous-commit mode.
Asynchronous-commit mode
In asynchronous-commit mode, when a client updates a database, the primary replica writes the change
to its log and sends the updates to its secondary replicas. The primary replica does not wait for
confirmation from the secondary replicas that they have hardened their logs before it commits the
transaction and sends an acknowledgement to the client. This configuration results in reduced latency and
better performance, but if failover occurs, there is the risk of data loss because transactions on the primary
replica are not guaranteed to be written to the secondary replicas. You can use asynchronous-commit
mode to support various scenarios, including:
Providing high availability between nodes in separate remote sites, or in sites with poor network
connectivity. In this scenario, synchronous-commit mode would be impractical because of the latency
that would be involved. However, using asynchronous-commit mode in this way would not enable
automatic failover, which requires synchronous-commit mode.
Provisioning a reporting server. Using a secondary replica as a dedicated reporting server helps to
reduce the workload on the primary replica. Using asynchronous-commit mode for the reporting
server replica minimizes the impact of synchronization on the primary server, and is appropriate
because reporting servers do not usually need to be completely up-to-date.
Creating a disaster recovery solution. You can use asynchronous-commit mode to maintain a
secondary replica in a separate disaster recovery site, which you can use to recover your databases in
the event of the complete loss or failure of the primary site.
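The availability mode of a replica can be changed after the availability group is created. This sketch (with hypothetical availability group and server names) switches a secondary to asynchronous-commit mode, which also requires manual failover:

```sql
-- Switch a secondary replica to asynchronous-commit mode.
-- Asynchronous-commit replicas support only manual failover.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'MIA-SQL2'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL);
```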
The primary replica must handle the entire read/write workload for all the databases in the availability
group. This can lead to variable levels of application performance because the replica might
sometimes struggle to cope with the workload, for example at peak times, or when running backup
jobs.
You can improve performance and promote more efficient use of resources by designating one or more
secondary replicas as active secondary replicas. Active secondary replicas reduce the workload on the
primary replica, which helps to ensure consistent application performance, and makes more efficient use
of hardware resources. You can also take backups from an active secondary replica, which further reduces
the workload placed on the primary replica.
Note: The availability databases on an active secondary replica are not static, read-only
databases, because they are constantly updated by synchronization with the primary replica.
Therefore, clients that read from an active secondary replica will have access to data that is either
identical, or almost identical to, that on the primary replica.
When using active secondary replicas, you can create an Availability Group listener, which automatically
routes read-intent client connections to the active secondary replicas. This configuration is called read-only routing. With read-only routing enabled, the workload on the primary replica is minimized because it
does not have to service any read-intent connections. For clients that will use read-only routing, you must
use a connection string that includes the DNS name of the Availability Group listener. You must also
create a read-only routing URL for each replica in the availability group and add these URLs to a routing
list on the primary replica. Client requests are routed to the first available replica on the routing list, and
replicas are tried in the order that they appear on the list. The routing list does not provide a mechanism
for load balancing, and replicas closer to the top of the list will typically have to handle more client
requests than those further down.
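The routing URLs and routing list described above are configured with Transact-SQL along these lines (the availability group and server names are hypothetical):

```sql
-- Define a read-only routing URL for a secondary replica.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'MIA-SQL2'
WITH (SECONDARY_ROLE (
    READ_ONLY_ROUTING_URL = N'TCP://MIA-SQL2.adventureworks.msft:1433'));

-- Define the routing list used when MIA-SQL1 is the primary.
-- Replicas are tried in the order listed.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'MIA-SQL1'
WITH (PRIMARY_ROLE (
    READ_ONLY_ROUTING_LIST = (N'MIA-SQL2', N'MIA-SQL3')));
```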
Reference Links: You will learn more about read-only routing in the next lesson.
As an alternative to read-only routing, clients can simply connect directly to the secondary nodes by
specifying the node name in the connection string.
To create an active secondary replica, you must specify the type of connection that an active secondary
replica will accept by configuring one of the following options:
No connections. The secondary will not accept any client connections. This is the default setting.
Only read-intent connections. The secondary replica will accept Availability Group-aware clients
with the ApplicationIntent value in the connection string set to ReadOnly.
Allow any read-only connection. The secondary replica will accept all client connections, including
read/write connections, but it allows only read access to its databases.
You can specify how the primary replica behaves when active secondary replicas are enabled by
configuring one of the following options:
Allow all connections. The primary replica will accept all connections. This is the default setting.
When this option is configured, some read-intent connections might still be passed to the primary
replica.
Allow read/write connections. The primary replica will only accept read/write connections or
connections for which the ApplicationIntent property is not defined. The primary replica will not
accept read-only connections. You can use this option to prevent read-intent connections from being
directed to the primary replica.
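These connection-access options map to the ALLOW_CONNECTIONS setting of the secondary and primary roles, as in this sketch with hypothetical names:

```sql
-- Secondary role: accept only read-intent connections.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'MIA-SQL2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Primary role: refuse read-only connections so that read-intent
-- workloads are directed to the secondaries.
ALTER AVAILABILITY GROUP SalesAG
MODIFY REPLICA ON N'MIA-SQL1'
WITH (PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE));
```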
You can use an active secondary replica to perform backups on the availability databases that it hosts.
Backup operations can consume significant amounts of a server's resources, particularly if the backup is
compressed. Performing backups on an active secondary replica can free up these resources on the
primary replica, making them available for mission-critical workloads.
Active secondary replicas support two types of backup operations:
Log backups
Copy-only full backups
You can specify how backups are performed in the availability group by configuring the automated
backup preference property of the availability group. The options for automated backup preference are:
Only on the primary replica. This setting helps ensure that all backups occur on the primary replica.
You can use this setting when you need to run a backup job, such as a differential backup, that is not
supported on active secondary replicas.
On secondary replicas. This is the default setting. This runs all backups on an active secondary
replica; if there is no active secondary replica available, backup jobs will run on the primary replica.
Only on secondary replicas. This setting runs backup jobs on active secondary replicas only. If there
are no active secondary replicas available, backup jobs will not run.
No preference. Backups can run on any replica. You can use backup priority values to prioritize
specific replicas over others. Replicas with backup priorities between 1 and 100 are all available to run
backup jobs; 100 is the highest priority, and a priority of 0 means that the replica will not be used for
backup jobs.
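The automated backup preference is a property of the availability group, and backup jobs can test whether the current replica is the preferred one; a sketch with a hypothetical group and database name:

```sql
-- Prefer running backups on a secondary replica.
ALTER AVAILABILITY GROUP SalesAG
SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

-- In a scheduled backup job, honor the preference:
IF sys.fn_hadr_backup_is_preferred_replica(N'SalesDB') = 1
BEGIN
    BACKUP LOG SalesDB
    TO DISK = N'\\FileServer\Backups\SalesDB.trn';
END;
```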
Planned manual failover (without data loss). Planned manual failover is only available when both the
primary and secondary replicas are running in synchronous-commit mode. In planned manual
failover mode, a database administrator must issue a failover command to initiate failover. No data
loss will occur with planned manual failover.
Forced manual failover (with possible data loss). Forced manual failover is the only type that you can
use for replicas that are in asynchronous-commit mode. You must initiate forced manual failover
manually. Any transactions that were committed on the primary replica, but which the secondary
replica has not yet written to its log, will be lost. You can also use forced manual failover for replicas
that are in synchronous-commit mode when the secondary replica is not showing as synchronized
with the primary replica.
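Both forms of manual failover are issued on the target secondary replica; with a hypothetical availability group name:

```sql
-- Planned manual failover (run on a synchronized
-- synchronous-commit secondary; no data loss).
ALTER AVAILABILITY GROUP SalesAG FAILOVER;

-- Forced manual failover (run on a secondary when planned
-- failover is not possible; possible data loss).
ALTER AVAILABILITY GROUP SalesAG FORCE_FAILOVER_ALLOW_DATA_LOSS;
```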
To specify the conditions that will trigger automatic failover, you can configure a flexible failover policy.
Events that can trigger a failover include the SQL Server service stopping or being unresponsive, or a
critical server error.
Reference Links: For more information about failover in AlwaysOn Availability Groups, see
the Failover and Failover Modes (AlwaysOn Availability Groups) topic in SQL Server Books Online.
If you need to change the roles within an AlwaysOn Availability Group, you can use the Start Failover
Wizard to initiate failover of the primary replica to the desired secondary replica. To run the Start Failover
Wizard, in SQL Server Management Studio (SSMS), open the dashboard display for the Availability Group,
and then click Start Failover Wizard.
Note: You should use SSMS to initiate failover for AlwaysOn Availability Groups because it
will ensure the databases are synchronized completely before initiating failover. You should not
use Failover Cluster Manager to initiate failover because it just stops the resources without
synchronizing SQL Server databases first.
A flexible failover policy includes a failover condition level. The failover condition level defines the
conditions that cause failover. There are five failover condition levels, with level one including the fewest
events that could initiate failover, and five including the most events. As the levels increase from one to
five, each level includes the conditions from the previous one.
The table below describes the failover condition levels. The default level is three.
(Table: failover condition levels and names. Level one: On server down.)
Note: Conditions that affect individual databases in an Availability Group, such as data
corruption, do not trigger automatic failover.
Reference Links: For more information about the lease that the availability group uses to
connect to the WSFC, see the How It Works: SQL Server AlwaysOn Lease Time-out article on the
MSDN website.
Reference Links: For more information about configuring flexible failover policies for
AlwaysOn Availability Groups, see the Configure the Flexible Failover Policy to Control Conditions
for Automatic Failover (AlwaysOn Availability Groups) article on the MSDN website.
Lesson 2
To ensure that you implement AlwaysOn Availability Groups correctly, you should be aware of the various
pre-requisites for using AlwaysOn Availability Groups and the pre-installation tasks that you will need to
perform. Additionally, you should understand the configuration options that you can choose during
installation, and how to monitor and manage AlwaysOn Availability Groups.
Lesson Objectives
After completing this lesson, you will be able to:
It enhances the ability of administrators to promote database availability and to share workloads, for
example by using DNS round-robin to distribute client requests across replicas.
It enables you to distribute secondary replicas across more sites, which improves response times by
ensuring that client requests can run against local servers.
It improves disaster recovery options by enabling you to configure additional secondary replicas in
dedicated disaster recovery sites.
In SQL Server 2012, when an active secondary replica in an AlwaysOn Availability Group becomes
disconnected from the primary replica due to network disruption or another issue, the secondary server
will stop servicing read-only requests from clients. The same thing happens if the failover cluster that
supports the AlwaysOn Availability Group loses quorum. In SQL Server 2014, active secondary replicas can
continue to service read-only requests even if they lose connectivity to the primary replica, or if the
failover cluster loses quorum. This enhancement helps to ensure continuity for reporting workloads and
other read-only workloads, particularly in geographically distributed infrastructures that are more likely to
suffer from network disruption.
Create an AlwaysOn Availability Group entirely within Azure. In this scenario, both the primary and
secondary replicas are virtual machines in Azure. Protecting SQL Server databases in Azure is
important because Azure does not itself provide mechanisms for ensuring availability at the database level.
Create an AlwaysOn Availability Group that has an on-site primary replica and at least one on-site
secondary replica, and then add a secondary replica in Azure. This is known as a hybrid-IT
infrastructure, and can be useful for various reasons, including implementing an Azure-based disaster
recovery site.
In SQL Server 2014, when you are using a hybrid-IT infrastructure, the new Add Azure Replica option in
the Add Replica wizard enables you to add a secondary replica in Azure more efficiently. If you are
adding a replica to Azure for the first time, you should ensure that the on-site network that hosts the
primary replica has a virtual private network (VPN) connection to the Azure network, and that you also
have an on-site secondary replica already configured.
Reference Links: For more information about the Add Azure Replica wizard, see the article
Use the Add Azure Replica Wizard in SQL Server Books Online.
SQL Server 2014 includes the ability to create databases that are stored in a server's memory. In-Memory
online transaction processing (OLTP) databases offer excellent performance because of the increased
throughput they enable. You can use AlwaysOn Availability Groups to ensure that In-Memory OLTP
databases remain highly available, in the same way as you can for standard SQL Server databases.
Reference Links: For more information about using AlwaysOn Availability Groups to
provide high availability for In-Memory OLTP databases, see the article In-Memory OLTP: High
Availability for Databases with Memory-Optimized Tables on the SQL Server Blog, on the TechNet
website.
SQL Server 2014 offers improved troubleshooting for AlwaysOn Availability Groups, including simplified
error messages, additional warnings in the New Availability Group Wizard, and a new system function
called sys.fn_hadr_is_primary_replica, which enables you to identify whether a replica in an AlwaysOn
Availability Group is the primary replica.
Reference Links: For more information about the sys.fn_hadr_is_primary_replica system
function, see the article sys.fn_hadr_is_primary_replica (Transact-SQL) in SQL Server Books Online.
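As an illustrative sketch of how this function can be used (DemoDB1 is a database name taken from the demonstrations later in this module; the branching logic is hypothetical), a script can test whether the local instance currently hosts the primary replica before performing primary-only work:

```sql
-- Run primary-only work (for example, index maintenance) only when this
-- instance hosts the primary replica of the DemoDB1 database.
IF sys.fn_hadr_is_primary_replica('DemoDB1') = 1
BEGIN
    PRINT 'This instance hosts the primary replica for DemoDB1.';
    -- Place primary-only maintenance tasks here.
END
ELSE
BEGIN
    PRINT 'This instance does not host the primary replica for DemoDB1.';
END
```

Using this check in the first step of a SQL Server Agent job lets you deploy the same job to every replica without it running against a secondary.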
Active Directory
You should ensure that each of the Windows servers that will host an AlwaysOn Availability Group replica
is a node in a WSFC. An AlwaysOn Availability Group relies on a WSFC to monitor the health of the
replicas and to enable failover.
Number of nodes
Each node in a WSFC can host only one replica for each of the availability groups that you intend to
create. Therefore, when planning replicas, ensure that the cluster has sufficient nodes to support the
number of replicas that you plan to include. For example, if you plan to create an Availability Group with
one primary replica and two secondary replicas, you will require a cluster that has at least three nodes.
Hotfixes
A WSFC that will host AlwaysOn Availability Groups might require the installation of one or more hotfixes
before you configure the Availability Group.
Reference Links: For more information about hotfixes to support AlwaysOn Availability
Groups on WSFCs, see the article Prerequisites, Restrictions, and Recommendations for AlwaysOn
Availability Groups (SQL Server) on the MSDN website.
Pre-Installation Tasks
The easiest way to create an AlwaysOn Availability
Group is to use the New Availability Group
Wizard in SSMS. Alternatively, you can create an
Availability Group by running Transact-SQL
statements, or by running PowerShell cmdlets.
Reference Links: For more information about Transact-SQL statements and PowerShell
cmdlets you can use to configure an Availability Group, see SQL Server Books Online.
Regardless of the tool that you decide to use, you must perform the following tasks before creating an
Availability Group:
1.
Install the WSFC feature on each server that you want to act as a host for a replica in the Availability
Group.
2.
Create a WSFC that includes all the servers in task 1. As part of this process, you must specify a name
and IP address for the cluster. The name and IP address must be unique on the network.
3.
Install a stand-alone instance of SQL Server 2014 on each server in the cluster. You must install the
database engine feature, and optionally, the management tools feature.
Note: Typically, you will install AlwaysOn Availability Groups on stand-alone instances of
SQL Server that are hosted on nodes in a WSFC. However, you can install an AlwaysOn
Availability Group replica on an AlwaysOn Failover Cluster Instance to provide
resilience at the server level, in addition to the database level.
4.
Use SQL Server Configuration Manager to enable AlwaysOn Availability Groups for each instance of
SQL Server that will participate in the AlwaysOn Availability Group. Ensure that you re-start the SQL
Server service after enabling this feature.
5.
Create a file share for the backup files used to synchronize the availability group replicas.
6.
Install the databases that you want to protect on the server that will become the primary replica, and
then perform a full backup of each database. Place the backups in the file share that you created in
task 5.
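Task 6 can also be scripted. The following sketch uses the database names and share path from this module's demonstration; WITH FORMAT, INIT is an illustrative choice that overwrites any existing backup file of the same name:

```sql
-- Full backups of the databases to protect, written to the file share
-- created in task 5 (paths taken from this module's demonstration).
BACKUP DATABASE DemoDB1
TO DISK = 'D:\Demofiles\Mod11\DemoShare\DemoDB1.bak'
WITH FORMAT, INIT;

BACKUP DATABASE DemoDB2
TO DISK = 'D:\Demofiles\Mod11\DemoShare\DemoDB2.bak'
WITH FORMAT, INIT;
```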
After completing the pre-installation tasks, you can install an Availability Group. During the installation,
you must configure the following settings:
The replicas (server instances) to be included in the availability group, including the type of replica
(primary or secondary), the type of failover supported (manual or automatic), the type of
synchronization to be used (synchronous or asynchronous), and the read-only support (none, read-intent only, or full) for each replica.
Optionally, specify a listener configuration for the availability group, including a DNS name, port, and
IP address for the listener.
The file share to be used to set up the AlwaysOn Availability Group. This should be the file share that you
created in task 5 above.
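The same settings map onto the CREATE AVAILABILITY GROUP Transact-SQL statement. The following sketch uses the replica, database, and listener values from this module's demonstration; the endpoint port (5022, the conventional database mirroring endpoint port) and the listener subnet mask are assumptions, not values specified in the demonstration:

```sql
CREATE AVAILABILITY GROUP [Demo-AG]
FOR DATABASE [DemoDB1], [DemoDB2]
REPLICA ON
    'MIA-CLUST1' WITH (
        ENDPOINT_URL = 'TCP://MIA-CLUST1.adventureworks.msft:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    'MIA-CLUST2' WITH (
        ENDPOINT_URL = 'TCP://MIA-CLUST2.adventureworks.msft:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)),  -- read-intent only
    'MIA-CLUST3' WITH (
        ENDPOINT_URL = 'TCP://MIA-CLUST3.adventureworks.msft:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL))         -- full read-only support
LISTENER 'MIA-DEMO-CLUST' (WITH IP (('10.10.0.55', '255.255.0.0')), PORT = 1433);
```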
Demonstration Steps
Create an AlwaysOn Availability Group
1.
2.
In the D:\Demofiles\Mod11 folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes, wait for the script to finish.
4.
Start SQL Server Management Studio and connect to MIA-CLUST1 using Windows authentication.
5.
In Object Explorer, expand Databases, right-click DemoDB1, point to Tasks, and then click Back
Up.
6.
In the Back Up Database DemoDB1 dialog box, in the Destination list, click the existing backup
file path, click Remove, and then click Add.
7.
In the Select Backup Destination dialog box, in the File name box, type
D:\Demofiles\Mod11\DemoShare\DemoDB1.bak, and then click OK.
8.
In the Back Up Database DemoDB1 dialog box, in the Backup type list, ensure that Full is
selected, and then click OK.
9.
In the Microsoft SQL Server Management Studio dialog box, click OK.
10. Repeat steps 5 to 9 to perform a full database backup of the DemoDB2 database, using the backup
path D:\Demofiles\Mod11\DemoShare\DemoDB2.bak.
11. In SQL Server Management Studio, in Object Explorer, expand AlwaysOn High Availability, right-click Availability Groups, and then click New Availability Group Wizard.
12. In the New Availability Group wizard, on the Introduction page, click Next.
13. On the Specify Availability Group Name page, in the Availability group name box, type Demo-AG,
and then click Next.
14. On the Select Databases page, select the DemoDB1 and DemoDB2 database check boxes, and then
click Next.
15. On the Specify Replicas page, on the Replicas tab, click Add Replica.
16. In the Connect to Server dialog box, in the Server name box, type MIA-CLUST2, in the
Authentication list, ensure Windows Authentication is selected, and then click Connect.
17. Repeat steps 15 and 16 to add MIA-CLUST3 as a replica.
18. On the Replicas tab, select the Automatic Failover check box for MIA-CLUST1 and MIA-CLUST2.
This automatically selects the Synchronous Commit check box for these replicas.
19. On the Replicas tab, in the Readable Secondary list for MIA-CLUST2, click Read-intent only.
20. On the Replicas tab, in the Readable Secondary list for MIA-CLUST3, click Yes.
21. Review the default settings on the Endpoints and Backup Preferences tabs, and then click the
Listener tab.
22. Click Create an availability group listener, and then specify the following settings:
o
Port: 1433
23. Click Add, and in the Add IP Address dialog box, in the IPv4 Address box, type 10.10.0.55, and
then click OK.
24. Note: If Add is not visible, you may need to maximize the dialog box or increase the screen
resolution of the virtual machine.
25. On the Specify Replicas page, click Next.
26. On the Select Initial Data Synchronization page, ensure that Full is selected.
27. In the Specify a shared network location accessible by all replicas box, type \\MIA-CLUST1\DemoShare, and then click Next.
28. On the Validation page, review the validation results, and then click Next.
29. On the Summary page, click Finish.
30. On the Results page, you may see a warning about the cluster quorum. This will not affect the lab. If
you do see this warning, review it, and then click Close.
Test Connectivity to the Availability Group
1.
2.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-DEMO-CLUST availability group listener, and then press Enter:
sqlcmd -E -S MIA-DEMO-CLUST
3.
At the command prompt, type the following commands to verify that the SQLCMD session is
connected to the primary replica (MIA-CLUST1):
SELECT @@ServerName
GO
4.
At the command prompt, type the following commands to retrieve rows from the Employee table in
the DemoDB1 database, and then view the results:
SELECT * FROM DemoDB1.dbo.Employee
GO
5.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
6.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST3 replica, and then press Enter:
sqlcmd -E -S MIA-CLUST3
7.
At the command prompt, type the following commands to retrieve rows from the Orders table in the
DemoDB2 database, and then view the results:
SELECT * FROM DemoDB2.dbo.Orders
GO
8.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
9.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST2 replica, and then press Enter:
sqlcmd -E -S MIA-CLUST2
10. At the command prompt, type the following commands to attempt to retrieve rows from the
Employee table in the DemoDB1 database:
SELECT * FROM DemoDB1.dbo.Employee
GO
12. At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
13. At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST2 replica with a read-intent connection, and then press Enter:
sqlcmd -E -S MIA-CLUST2 -K ReadOnly
14. At the command prompt, type the following commands to attempt to retrieve rows from the
Employee table in the DemoDB1 database, and then view the results:
SELECT * FROM DemoDB1.dbo.Employee
GO
15. At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
In SQL Server Management Studio, in Object Explorer, under AlwaysOn High Availability and
Availability Groups, right-click Demo-AG (Primary), and then click Show Dashboard.
2.
3.
In the Fail Over Availability Group: Demo-AG wizard, on the Introduction page, click Next.
4.
On the Select New Primary Replica page, note the warning about data loss if you decide to choose
MIA-CLUST3 as the new primary replica. Select the MIA-CLUST2 check box, and then click Next.
5.
6.
7.
8.
9.
10. In the dashboard, note that the primary instance is now MIA-CLUST2 and that MIA-CLUST1 is a
secondary replica. Note that the dashboard may take a few seconds to update after failover, so if the
new configuration doesn't display immediately, just wait until the dashboard updates.
11. Right-click the Start button and click Command Prompt.
12. At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-DEMO-CLUST availability group listener, and then press Enter:
sqlcmd -E -S MIA-DEMO-CLUST
13. At the command prompt, type the following commands to verify that the SQLCMD session is
connected to the new primary replica (MIA-CLUST2):
SELECT @@ServerName
GO
14. At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
Read-Only Routing
In a SQL Server 2014 AlwaysOn Availability Group
that has one or more active secondary replicas, you
can enable read-only routing. Read-only routing
directs read-intent client connections to a
secondary replica instead of to the primary replica.
Using secondary replicas in this way reduces the
workload that the primary replica has to process,
which helps to ensure consistent application
performance, and makes more efficient use of
hardware resources.
Read-only routing URLs. A read-only routing URL is associated with a specific replica, and is used to
route read-intent requests to that replica. You create a read-only routing URL for each replica that
you want to accept read-intent requests.
Read-only routing lists. For each replica, you create a read-only routing list. A replica's read-only
routing list specifies how to handle read-intent client connections when that replica is the primary
one in the Availability Group. For example, in an availability group called AdventureWorks-AG that
includes a primary replica called AG-Replica1 and two active secondary replicas called AG-Replica2
and AG-Replica3, you could configure read-only routing lists as follows:
To configure read-only routing URLs for the replicas in an Availability Group, you can use the CREATE
AVAILABILITY GROUP Transact-SQL statement, or the ALTER AVAILABILITY GROUP Transact-SQL
statement for existing Availability Groups. Read-only routing only supports TCP, and you must specify a
TCP port number as part of the URL.
This code example alters an availability group to add read-only routing URLs for three replicas. The URLs
use the default SQL Server TCP port number, 1433.
ALTER AVAILABILITY GROUP Transact-SQL statement to configure read-only routing URLs.
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica1'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = 'tcp://AG-Replica1.ADVENTUREWORKS.MSFT:1433'));
GO
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = 'tcp://AG-Replica2.ADVENTUREWORKS.MSFT:1433'));
GO
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica3'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = 'tcp://AG-Replica3.ADVENTUREWORKS.MSFT:1433'));
GO
To configure read-only routing lists, you can again use the CREATE AVAILABILITY GROUP Transact-SQL
statement, or the ALTER AVAILABILITY GROUP Transact-SQL statement for existing Availability Groups.
This code example creates a read-only routing list for each of the three replicas in the Adventureworks-AG
Availability Group.
ALTER AVAILABILITY GROUP Transact-SQL statement to configure read-only routing lists.
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica1'
WITH (PRIMARY_ROLE(READ_ONLY_ROUTING_LIST = ('AG-Replica2', 'AG-Replica3')));
GO
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica2'
WITH (PRIMARY_ROLE(READ_ONLY_ROUTING_LIST = ('AG-Replica1', 'AG-Replica3')));
GO
ALTER AVAILABILITY GROUP [AdventureWorks-AG]
MODIFY REPLICA ON 'AG-Replica3'
WITH (PRIMARY_ROLE(READ_ONLY_ROUTING_LIST = ('AG-Replica2', 'AG-Replica1')));
GO
You can also configure read-only routing by using PowerShell cmdlets. To use PowerShell, you must have
the SQL Server PowerShell Provider installed.
Reference Links: For more information about configuring read-only routing by using
PowerShell, see the article Configure Read-Only Routing for an Availability Group (SQL Server) on
the TechNet website.
Client connections
For a client connection to be redirected to an active secondary replica by read-only routing, the
connection string must specify that the connection is a read-intent connection. The connection string
must also include the name of the database that the client is connecting to.
This code example shows how to use the SQLCMD command-line tool to establish a read-intent
connection to an AlwaysOn Availability Group. The connection string uses the Availability Group Listener
name Adventureworks-AG-Listener, and includes the -K parameter that specifies a read-only connection
to the Sales database.
Using SQLCMD to establish a read-intent connection to an Availability Group.
sqlcmd -E -S Adventureworks-AG-Listener -d Sales -K ReadOnly
Demonstration Steps
Configure read-only routing
1.
Ensure that you have completed the previous demonstration in this module.
2.
In SQL Server Management Studio, open the ReadOnlyRouting.sql script file in the
D:\Demofiles\Mod11 folder.
3.
In the query window, under the comment Alter the Availability Group to add read-only routing
URLs, highlight the Transact-SQL statements, and then click Execute.
4.
Under the comment Configure read-only routing lists, which control how each replica behaves
when it is the primary replica, highlight the Transact-SQL statements, and then click Execute.
2.
At the command prompt, type the following to connect to the listener, and then press Enter:
sqlcmd -E -S MIA-DEMO-CLUST -d DemoDB1 -K ReadOnly
3.
At the command prompt, type the following, and then press Enter:
SELECT @@Servername
GO
Note that the result returned is MIA-CLUST2 (the primary replica is MIA-CLUST1).
4.
At the command prompt, type the following, and then press Enter:
Exit
5.
At the command prompt, type the following to connect to the listener, and then press Enter:
sqlcmd -E -S MIA-DEMO-CLUST -d DemoDB1
6.
At the command prompt, type the following, and then press Enter:
SELECT @@Servername
GO
7.
Note that the result returned is MIA-CLUST1 because the -K parameter was not used in the
connection string.
8.
At the command prompt, type the following, and then press Enter:
Exit
9.
Close the command prompt, and then close SQL Server Management Studio. Do not save any
changes.
AlwaysOn dashboard
SSMS includes a dashboard that you can use to
manage an AlwaysOn Availability Group. The
dashboard shows the status of the availability group
and enables you to:
View cluster quorum information such as the nodes in the cluster that have a quorum vote.
Connect to the instance that hosts an availability replica in an AlwaysOn Availability Group.
2.
In Object Explorer, expand the instance name, and then expand the AlwaysOn High Availability
node.
3.
You should not use the Failover Cluster Manager tool to manage AlwaysOn Availability Groups; you
should use SSMS instead. Specifically, you should not use Failover Cluster Manager to perform the
following actions:
Change the properties of an Availability Group, for example the preferred owner property.
Fail over an Availability Group to another node. You can use the Failover Wizard on the AlwaysOn
Dashboard in SSMS or Transact-SQL to fail over the Availability Group role.
Although you can perform the majority of tasks by using SSMS, you must perform some advanced
configuration tasks, such as specifying routing information for read-only secondary replicas, by using
Transact-SQL or PowerShell.
Reference Links: For more information about monitoring AlwaysOn Availability Groups by
using PowerShell cmdlets, see the article Monitoring AlwaysOn Health with PowerShell on the SQL
AlwaysOn Team Blog website.
1.
Log generation. On the primary replica, cached log data is flushed to the log file on disk and the log
records are prepared for queuing. You can use the SQL Server:Databases Log Bytes Flushed/sec
counter to monitor this process.
2.
Capture. For each database in an Availability Group, there is a separate queue on the primary replica.
The log for each database is captured and sent to the corresponding queue ready to be copied to the
secondary replicas. The SQL Server:Availability Replica Bytes Sent to Replica/Sec counter captures
the sum of the bytes sent to all the queues for an Availability Group.
3.
Send. On the primary replica, the messages in the queues are sent to the secondary replicas. The
counter SQL Server:Availability Replica Bytes Sent to Transport/Sec captures information about
the number of bytes sent per second.
4.
Receive and Cache. The secondary replicas receive the messages from the primary replica and place
them into cache. You can use the SQL Server:Database Replica Log Bytes Received/Sec counter to
monitor this process.
5.
Harden. The log is applied to the secondary database and an acknowledgement is sent to the primary
replica. You can use the SQL Server:Databases Log Bytes Flushed/sec counter on the secondary
database to monitor this process.
6.
Redo. The log pages on the secondary replica are applied to the secondary database data file. You
can use the SQL Server:Database Replica Redone Bytes/Sec counter to monitor this process.
If a page in a data file becomes corrupted and the replica that hosts the file is not able to read the page,
the replica will automatically contact one of the other replicas in the Availability Group and request a
copy of the damaged page. The replica then replaces the damaged page with the one copied from the
other replica. This process occurs for specific types of errors, without the need for administrator
intervention. The error numbers that trigger automatic page repair include 823, 824, and 829.
Reference Links: For more information about the errors that trigger automatic page repair,
see the article Automatic Page Repair (Availability Groups/Database Mirroring) on the SQL Server
Books Online website.
You can use the sys.dm_hadr_auto_page_repair DMV to view the attempts by SQL Server to implement
automatic page repairs, both successful and unsuccessful.
Lesson 3
AlwaysOn Availability Groups provide a great deal of flexibility, and organizations can use them to create
highly effective, high availability solutions. When planning to use AlwaysOn Availability Groups, you
should consider various factors, such as quorum configuration, failover targets, and the need for disaster
recovery capabilities.
Lesson Objectives
After completing this lesson, you will be able to:
Describe how to use AlwaysOn Availability Groups to achieve high availability and disaster recovery
capabilities in various scenarios.
Explain the considerations for the Windows Server Failover quorum configuration for AlwaysOn
Availability Groups.
Explain how to calculate estimations of failover time and potential data loss in an AlwaysOn
Availability Group configuration.
However, a high availability solution will not protect a service against a major event, such as the loss of an
entire data center due to an earthquake, or the failure of network communications between sites because
of power outages. Although these types of events are rare, their effects can be catastrophic, so for
mission-critical services you need to implement a disaster recovery solution to protect against them. A
disaster recovery solution typically includes a dedicated disaster recovery site that is geographically
remote from the main data center site. Ideally, the disaster recovery site is a duplicate of the main site,
with the same servers and other hardware components to enable it to take on the workload of the servers
at the main site if required. In a comprehensive disaster recovery solution, the disaster recovery site should
also have the capability to deliver high availability just as the main site does. Any high availability
solutions that you deploy at the main site should also be configured at the disaster recovery site.
You can use AlwaysOn Availability Groups to provide both high availability for vital services and a disaster
recovery solution.
In this scenario, you can deploy a primary and a secondary replica at the same site. Because local network
connectivity is likely to be very good, you can use synchronous-commit mode and automatic failover to
ensure continuity of service and to minimize data loss. The advantages of this scenario include:
This scenario builds on the previous one and adds an additional secondary replica that is used as a read-only server. The additional replica will typically use asynchronous-commit mode, which reduces the load
on the primary replica. The fact that the databases on the replica may be slightly out of sync with the
primary replica databases is not a problem because the additional replica is not used for high availability
purposes. The additional advantages of this scenario include:
It reduces workload on the primary replica because it does not need to process client read requests.
It enables additional reduction in stress on the primary replica because you can run backups against
the read-only secondary server.
In the event of a failover, the new primary replica and the read-only reporting replica can continue to
operate without administrator intervention.
You can extend this scenario by implementing read-only reporting servers at separate sites. For example,
by placing a read-only replica at a branch office, you enable faster query response times for clients at the
branch office.
This configuration involves maintaining a primary replica and a secondary replica in synchronous-commit
mode at the main site to provide high availability, and an additional secondary replica at a separate site to
provide disaster recovery capabilities. The replica at the disaster recovery site uses asynchronous-commit
mode because the inter-site network connectivity is not as fast as the intra-site connectivity. In the event
of a disaster at the main site, you can force failover to the replica at the secondary site. The advantages of
this scenario include:
Best Practice: After failover, you should configure a high availability
solution at the disaster recovery site so that access to the services can continue
uninterrupted.
To prevent this situation from arising, you can remove the quorum vote from the node that hosts the
reporting server. You can use the Configure Quorum Cluster Wizard in the Failover Cluster Manager tool
to remove the quorum vote from a cluster node.
When you remove the quorum vote from a cluster node, as in the example above, you need to remember
that with an even number of voting cluster nodes, it might not always be possible to achieve quorum. This
is because achieving quorum requires more than half of the available votes; in a situation where exactly
half of the nodes are available, the cluster cannot achieve quorum. If you do have an even number of
nodes remaining, you can add a quorum witness to ensure that the cluster can achieve quorum.
You can use the Configure Quorum Cluster Wizard to set up one of the following two witness
configurations:
Node plus file share majority. This configuration uses a file share as a witness. The file share has a vote
that counts towards quorum, which ensures that it is possible to achieve a majority.
Node plus disk majority. This configuration uses a disk as a witness. A disk witness is a dedicated disk
that stores a copy of the cluster database. You typically use a disk witness when a cluster has shared
storage that is not replicated, such as in a single site cluster.
Like the file share witness above, the disk witness has a vote that counts towards quorum, which ensures
that it is possible to achieve a majority.
Note: If you are using Windows Server 2012 R2, Microsoft recommends always configuring a
witness, regardless of the number of nodes. This is because the Dynamic Quorum feature will
automatically determine when the cluster needs to use the witness.
Note: You can view the nodes that have a quorum vote by using the Availability Group
dashboard in SSMS.
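If you prefer Transact-SQL to the dashboard, a sketch such as the following queries the documented sys.dm_hadr_cluster_members DMV, whose number_of_quorum_votes column shows the vote assigned to each cluster member:

```sql
-- Show each WSFC member and the quorum votes it holds.
SELECT member_name,
       member_type_desc,      -- cluster node, disk witness, or file share witness
       member_state_desc,     -- whether the member is up or down
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;
```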
Demonstration Steps
View Quorum Configuration
1.
In SQL Server Management Studio, in the Demo-AG Availability Group dashboard, click View Cluster
Quorum Information.
2.
In the Cluster Quorum Information dialog box, review the information, and note that each of the
three nodes in the cluster has one vote.
3.
On the Start screen, type Failover and then start Failover Cluster Manager.
2.
Click MIA-CLUSTER.adventureworks.msft, in the Actions pane, click More Actions, and then click
Configure Cluster Quorum Settings.
3.
4.
On the Select Quorum Configuration Option page, click the Advanced quorum configuration and
witness selection radio button, and then click Next.
5.
On the Select Voting Configuration page, click Select Nodes, clear the MIA-CLUST3 check box,
and then click Next.
6.
7.
On the Select Quorum Witness page, click the Configure a file share witness radio button, and
then click Next.
8.
On the Configure File Share Witness page, in the File Share Path field, type \\MIA-DC\DemoWitnessShare, and then click Next.
9.
On the Confirmation page, click Cancel to cancel the change in the configuration.
10. Close SQL Server Management Studio, and then close Failover Cluster Manager.
11. On the taskbar, click File Explorer, browse to D:\Demofiles\Mod11, right-click Cleanup.cmd, click
Run as administrator, in the User Account Control dialog box, click Yes, and then close File
Explorer.
The time it takes to detect a failure. This depends on the health check time-out threshold, which was
described in the previous topic Flexible Failover Policies.
The redo time. This is the time it takes for the redo process to bring the secondary copy of the
database up to date with the log. You can calculate this by using the
sys.dm_hadr_database_replica_states DMV to obtain the redo_queue_size and redo_rate values
and then using the following formula: Redo time = redo_queue_size / redo_rate.
The failover overhead time. This is the time it takes for the process of failover on the WSFC to
complete, so that the SQL Server databases in the Availability Replica are back online.
In addition to the RTO, designers of high availability solutions need to take account of the amount of data
loss that the solution will permit in the event of a failure. This is usually stated as the recovery point
objective (RPO). The RPO is the point in time to which you must be able to recover after a failure. For
example, an SLA might state that you must be able to recover a database to 15 minutes prior to the point
of failure.
You can calculate an estimation of potential data loss by using the sys.dm_hadr_database_replica_states
DMV to obtain the log_send_queue_size value, and then using the following formula:
Estimated data loss = log_send_queue_size / log bytes flushed per second. You can obtain the log bytes
flushed per second value by using the SQL Server:Databases Log Bytes Flushed/Sec performance
counter.
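Both formulas above can be evaluated against the sys.dm_hadr_database_replica_states DMV. This is a sketch, assuming that you run it on the primary replica; the NULLIF guard is an illustrative way to avoid dividing by zero when a replica is idle, and the data loss estimate follows the formula above by exposing log_send_queue_size to be divided by the separately collected Log Bytes Flushed/Sec counter value:

```sql
-- Estimate redo time per database replica, and expose the log send queue size
-- used in the estimated data loss formula (queue sizes in KB, rates in KB/sec).
SELECT database_id,
       redo_queue_size,
       redo_rate,
       1.0 * redo_queue_size / NULLIF(redo_rate, 0) AS estimated_redo_time_sec,
       log_send_queue_size    -- divide by Log Bytes Flushed/Sec (performance counter)
FROM sys.dm_hadr_database_replica_states;
```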
Reference Links: For more information about monitoring and troubleshooting AlwaysOn
Availability Groups, see the Monitor Performance for AlwaysOn Availability Groups webpage on
the MSDN website.
You are a database administrator at Adventure Works. You need to implement a high availability solution
for the HumanResources and ResellerSales databases to ensure that they remain available in the event
of a server failure. Additionally, because these databases are often used for reporting, you want to create
a high availability solution that provides read-only access to redundant copies of the databases to reduce
the overall workload on the read/write copies of the databases.
To meet these requirements, you have decided to implement an AlwaysOn Availability Group. The
Availability Group will consist of a primary and a secondary replica that use synchronous-commit mode
and automatic failover. You will also create a secondary replica that uses asynchronous-commit mode,
and use this replica to provide read-only access to the databases for the reporting applications. You want
to ensure that the failure of the secondary replica that uses asynchronous-commit mode cannot
compromise the high availability configuration provided by the Availability Group. To achieve this, you
plan to remove the quorum vote from the WSFC node that hosts the reporting replica.
Objectives
After completing this lab, you will have:
To meet the requirements for high availability and reporting for the HumanResources and ResellerSales
databases, you will create an AlwaysOn Availability Group that includes three replicas. The Availability
Group will use an existing three-node WSFC.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Perform Pre-Installation Checks
3. Backup Databases
4. Create an Availability Group
5. Check the Status of the AlwaysOn Availability Group
Start only the 20465C-MIA-DC, 20465C-MIA-AG-CLUST1, 20465C-MIA-AG-CLUST2, and 20465C-MIA-AG-CLUST3 virtual machines. Then log on to 20465C-MIA-AG-CLUST1 as
ADVENTUREWORKS\Student with the password Pa$$w0rd.
1. Use the Failover Cluster Manager administrative tool to view the MIA-CLUSTER.AdventureWorks.msft
cluster and verify that the MIA-CLUST1, MIA-CLUST2, and MIA-CLUST3 cluster nodes have a status of
Up.
2. On the MIA-CLUST1 virtual machine, use SQL Server Configuration Manager to view the properties of
the default SQL Server database engine instance, and verify that the AlwaysOn Availability Groups
feature is enabled.
1. On the MIA-CLUST1 virtual machine, start SQL Server Management Studio and connect to the
MIA-CLUST1 database engine instance by using Windows authentication.
2. Perform a full database backup of the ResellerSales database, using the backup path
D:\Labfiles\Lab11\Starter\DataShare\ResellerSales.bak.
3. Perform a full database backup of the HumanResources database, using the backup path
D:\Labfiles\Lab11\Starter\DataShare\HumanResources.bak.
On the MIA-CLUST1 virtual machine, use the New Availability Group Wizard to create a new
availability group named MIA-SQL-AG. Add the ResellerSales and HumanResources databases as
availability databases, and use MIA-CLUST1, MIA-CLUST2, and MIA-CLUST3 as the replicas.
Port: 1433
1. On the MIA-CLUST1 virtual machine, use the Dashboard view in SQL Server Management Studio to
view the status information for the availability group.
2. Verify that the replicas MIA-CLUST1 and MIA-CLUST2 are Synchronized and MIA-CLUST3 is
Synchronizing.
Now that you have created an AlwaysOn Availability Group, you will test the configuration to ensure that
it meets the requirements for both read/write and read-only access.
The main tasks for this exercise are as follows:
1. Connect to the Primary Replica
2. Connect to a Secondary Replica
3. Connect by Using a Read-Intent Connection
2. Enter the following command to open a SQLCMD session and connect to the MIA-SQL-CLUST
availability group listener:
sqlcmd -E -S MIA-SQL-CLUST
3. Enter the following commands to verify that the SQLCMD session is connected to the primary replica
(MIA-CLUST1):
SELECT @@ServerName
GO
4. Enter the following commands to retrieve rows from the Employee table in the HumanResources
database:
SELECT * FROM HumanResources.dbo.Employee;
GO
5. Exit the SQLCMD session, but keep the command prompt open for the next task.
1. Enter the following command to open a SQLCMD session and connect to the MIA-CLUST3 replica:
sqlcmd -E -S MIA-CLUST3
2. Enter the following commands to retrieve rows from the Employee table in the HumanResources
database:
SELECT * FROM HumanResources.dbo.Employee;
GO
3. Exit the SQLCMD session, but keep the command prompt open for the next task.
1. Enter the following command to open a SQLCMD session and connect to the MIA-CLUST2 replica:
sqlcmd -E -S MIA-CLUST2
2. Enter the following commands to attempt to retrieve rows from the Employee table in the
HumanResources database:
SELECT * FROM HumanResources.dbo.Employee;
GO
3. Exit the SQLCMD session.
4. Enter the following command to open a SQLCMD session and connect to the MIA-CLUST2 replica
with a read-intent connection:
sqlcmd -E -S MIA-CLUST2 -K ReadOnly
5. Enter the following commands to retrieve rows from the Employee table in the HumanResources
database:
SELECT * FROM HumanResources.dbo.Employee;
GO
6. Exit the SQLCMD session and minimize the command prompt window.
To ensure that the AlwaysOn Availability Group fails over as expected, you will test the configuration by
first initiating a manual failover, and then by triggering an automatic failover.
The main tasks for this exercise are as follows:
1. Initiate Manual Failover
2. Initiate Automatic Failover
1. On the MIA-CLUST1 virtual machine, use the Failover Wizard in SQL Server Management Studio to
fail over to the MIA-CLUST2 replica.
2. In the command prompt window, enter the following command to open a SQLCMD session and
connect to the MIA-SQL-CLUST availability group listener:
sqlcmd -E -S MIA-SQL-CLUST
3. Enter the following commands to verify that the SQLCMD session is connected to the new primary
replica (MIA-CLUST2):
SELECT @@ServerName
GO
4. Exit the SQLCMD session, but keep the command prompt open for the next task.
1. On MIA-CLUST2, stop the SQL Server service and all dependent services.
2. Enter the following command to open a SQLCMD session and connect to the MIA-SQL-CLUST
availability group listener:
sqlcmd -E -S MIA-SQL-CLUST
3. Enter the following commands to verify that automatic failover has resulted in MIA-CLUST1 resuming
the primary replica role:
SELECT @@ServerName
GO
4. Exit the SQLCMD session, and close the command prompt and SQL Server Management Studio.
MIA-CLUST3 hosts a secondary replica that is intended for use as a reporting server only. To prevent this
server from having any impact on the cluster quorum if it fails, you intend to remove the quorum vote from
MIA-CLUST3. To ensure that the cluster can still achieve quorum, you have decided to implement a file
share witness.
The main tasks for this exercise are as follows:
1. View the Current Quorum Configuration
2. Change the Quorum Configuration
2. Use the MIA-SQL-AG Availability Group dashboard to view the current quorum configuration.
3. Verify that each of the three nodes in the cluster has one vote.
In Failover Cluster Manager, use the Configure Cluster Quorum Wizard to change the quorum
configuration. Use the following settings:
2. In SQL Server Management Studio, use the MIA-SQL-AG Availability Group dashboard to review the
changes to the quorum configuration.
Viewed the quorum configuration by using Failover Cluster Manager and SQL Server Management Studio.
Removed the quorum vote from the MIA-CLUST3 node and configured a file share witness.
Question: What was the purpose of removing the quorum vote from the MIA-CLUST3
cluster node?
In this module, you learned about using AlwaysOn Availability Groups to deliver high availability for SQL
Server 2014 databases.
Review Question(s)
Question: In what scenarios might an organization use AlwaysOn Availability Groups to
implement high availability?
Module 12
Planning High Availability and Disaster Recovery
Contents:
Module Overview 12-1
Lesson 1: High Availability and Disaster Recovery with SQL Server 2014 12-2
12-12
12-17
12-22
Module Overview
SQL Server 2014 includes a range of technologies that you can use to provide high availability for services
and applications. You can also use these technologies to create disaster recovery solutions to ensure
business continuity should a major event affect the primary site of operations. Additionally, the services in
Microsoft Azure make it possible to implement high availability and disaster recovery solutions that exist
either partially or entirely in the cloud. This module describes the planning considerations for high
availability and disaster recovery, and provides common implementation scenarios for on-premises,
hybrid, and Azure environments.
Objectives
After completing this module, you will be able to:
Explain the considerations for implementing high availability and disaster recovery by using SQL
Server 2014, and describe some common scenarios.
Explain the considerations for implementing high availability and disaster recovery by using SQL
Server 2014 and Azure services, and describe some common scenarios.
Lesson 1
To successfully implement high availability for on-premises implementations of SQL Server, you need to
understand the difference between high availability and disaster recovery, and how to plan solutions that
can successfully protect data and services by using SQL Server 2014 technologies. This lesson describes the
concepts of high availability and disaster recovery, and the considerations for implementing them. It also
describes how to use SQL Server technologies to create high availability and disaster recovery solutions.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the concepts of high availability and disaster recovery and explain the difference between
the two.
Describe the considerations for implementing high availability and disaster recovery technologies by
using SQL Server 2014.
Describe a high availability and disaster recovery solution that uses a multi-site Failover Cluster
Instance (FCI).
Describe a high availability and disaster recovery solution that uses an AlwaysOn Availability Group.
Describe a high availability and disaster recovery solution that uses a combination of an FCI and an
Availability Group.
One type of disaster recovery solution that most organizations use in some form is data backups. A
well-designed backup solution will include the storage of off-site backups to enable recovery if the original
data is lost or unavailable. However, backups alone do not provide a comprehensive disaster recovery
solution.
The problems with relying solely on database backups for disaster recovery include:
Backups only preserve data. Backups do not preserve the other elements that are required to deliver a
service, such as the SQL Server instance and server hardware.
Recovery by using backups can be relatively slow. Recovering from a disaster by using only backups is
time consuming, and can result in lengthy and expensive service outages.
By using SQL Server AlwaysOn FCIs and AlwaysOn Availability Groups, you can create disaster recovery
solutions that can rapidly bring services back online after an outage. However, these technologies do not
eliminate the need for backups entirely. Even if you implement an AlwaysOn disaster recovery solution,
you should still carry out a backup schedule to enable recovery from scenarios such as:
Recovery from logical data corruption. AlwaysOn technologies maintain the availability of data, but
they do not have any knowledge of the state of that data. For example, if an employee runs a
Transact-SQL UPDATE statement without including a WHERE clause, they could accidentally update
an entire table instead of just a limited number of rows, and the data in the database will be logically
corrupt. In such a scenario, you could not use an Availability Group or FCI to recover the lost data
because the former would copy the corrupt data to its secondaries, and the latter uses shared
storage, so there is no second copy of the data to use for recovery. In this scenario, you could use a
recent backup to recover the lost data, although some data loss would be inevitable. Correct
application design and limitations on the use of impromptu queries can reduce the risk of this kind of
corruption occurring in the first place.
Widespread disaster. The data recovery site should be geographically remote from the primary data
site so that any disaster that affects one does not affect the other. However, some types of disaster,
such as severe storms and earthquakes, can have far-reaching effects that could affect both sites. In
this scenario, an organization could use backups to recover its data as long as the backups themselves
are stored in a location that is not affected by the disaster.
The process of planning high availability and disaster recovery typically includes calculating RTO and RPO.
The RTO is the period of time within which the service needs to be restored, and the RPO is the period
during which data loss is deemed acceptable in the event of a failure. RPO and RTO are targets dictated
by the needs of the business, and planners of high availability and disaster recovery solutions can use
these targets to guide them in their decision-making. For example, if a business group requires a very
limited RTO for their services, you will probably need to choose a solution that enables automatic failover,
and enables clients to access resources on the failover node without requiring reconfiguration. A solution
that does not provide automatic failover, or which requires reconfiguration of clients' connection strings,
will take longer to re-establish service availability, and it might then not be possible to meet the RTO.
The following table provides general guidelines that you can use to incorporate RTO and RPO into high
availability and disaster recovery plans:
Solution | Potential data loss (RPO) | Potential recovery time (RTO) | Automatic failover
AlwaysOn Failover Cluster Instance | Depends on storage solution | Seconds or minutes | Yes
AlwaysOn Availability Group with Synchronous-Commit | None | Seconds | Yes
AlwaysOn Availability Group with Asynchronous-Commit | Seconds | Minutes | No
Database Mirroring in High-Safety mode | None | Minutes | Yes
Database Mirroring in High-Performance mode | Seconds | Minutes | No
Log Shipping | Minutes | Minutes or hours | No
Backup and restore | Hours or days | Hours or days | No
Note that it will not always be possible to achieve the values indicated in these guidelines, and you should
include detailed calculations in any high availability and disaster recovery plans that you create.
Number of sites
A comprehensive disaster recovery solution will
incorporate a geographically remote disaster
recovery site, so it is important that the solution you
choose can span multiple sites; typically, this means
that the solution can span multiple IP subnets. SQL
Server 2014 FCIs can span multiple subnets, but all
the servers in the underlying Windows Server
Failover Cluster (WSFC) must be members of the
same Active Directory domain.
Opportunity cost
The cost of high availability and disaster recovery solutions can be high, in particular because of the
hardware required. The standby servers in such a solution need to be identical or near-identical to the
primary servers so that they can take on the workload of the primaries if required. However, it might not
always be possible to justify keeping expensive hardware lying idle for most of the time. If this is the case,
you can consider using a solution such as an AlwaysOn Availability Group, which enables you to configure
active secondary servers that can handle read-only workloads.
Quorum considerations
When configuring AlwaysOn technologies, you should give special consideration to the quorum model
and which cluster members will be configured with a quorum vote. In a WSFC, each node has a quorum
vote by default. However, when planning, it is a good idea to start with the assumption that no node will
be given a vote without a justified reason. This is important to prevent a WSFC from being taken offline
unnecessarily. For example, in a multi-site WSFC, the failure of one or more nodes in the disaster recovery
site could cause the Cluster Service to take the entire cluster offline, which would result in a loss of service
at the primary data site. To prevent this kind of scenario, you should follow the guidelines below:
Give a quorum vote to each cluster node that hosts a primary replica in an AlwaysOn Availability
Group.
Give a quorum vote to each cluster node that might automatically become a primary replica in an
AlwaysOn Availability Group, or which might host a SQL Server FCI in the event of failover.
Ensure that there is an odd number of votes. For a WSFC to retain quorum, more than half of the
voting nodes must be available; with an even number of nodes, this is not always possible. If you have
an even number of voting cluster nodes, add an additional voting node to the cluster or use a file
share witness to provide the additional vote.
If failover occurs, reassess the quorum configuration and reconfigure it if necessary to ensure the
continued healthy operation of the cluster.
If failover occurs in a WSFC, you may need to use the forced quorum operation to bring the cluster online
in the disaster recovery site. For example, imagine a cluster that has five nodes, three of which are in the
primary data center and have quorum votes. The remaining two are in the disaster recovery site and do
not have votes. If the primary site goes down, the two nodes in the disaster recovery site will detect that
there are not enough votes for the Cluster Service to run, so it will not start. In this situation, you can force
quorum to start the service in the disaster recovery site. To achieve this, you can use the Force Cluster
Start option in the Failover Cluster Manager tool, the Net.exe command line tool with the /fq option, or
Windows PowerShell.
Reference Links: For more information about forcing quorum, see the article Force a WSFC
Cluster to Start Without a Quorum on the MSDN website.
Depending on the application, the tempdb system database can generate a great deal of input/output
(I/O) activity. For SQL Server instances that host such applications, you should consider whether you can
improve overall I/O performance by hosting the data files of the instance's tempdb database on local
storage instead of on the cluster's shared storage.
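As a sketch of this approach (the E:\TempDB path is hypothetical, and the logical file names assume the defaults), you could relocate the tempdb files by using ALTER DATABASE; the new locations take effect when the SQL Server service next starts:

```sql
-- Sketch: move tempdb files to a local disk on a clustered instance.
-- The logical file names match the defaults; the E:\TempDB path is
-- hypothetical. SQL Server uses the new locations after the next restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf');
```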
You should consider how you will ensure that services remain highly available after a failover. For
example, imagine an FCI with a primary site with two nodes and a disaster recovery site with one node. If
the primary site goes offline and the SQL Server FCI fails over to the node in the disaster recovery site, you
now have one node that is not highly available. To ensure that this doesn't happen, you should configure
high availability for the disaster recovery site, as well as for the primary site.
You can use the following combinations of SQL Server 2014 technologies to create high availability and
disaster recovery solutions:
Reference Links: For general guidelines on the key features and requirements of the
various SQL Server 2014 HA and DR technologies and the differences between them, review the
topic Planning High Availability, in the module Introduction to High Availability in SQL Server
2014, in this course.
You will learn more about these configurations in the next topics.
Degraded availability
When planning a solution to maintain data availability, you should consider the possibility of making
services available at a degraded level. For example, it might be possible to maintain partial service
availability during an update by enabling access to read-only data alone instead of enabling full read-write access. This would allow some of the service users to continue working and also reduce downtime.
It is essential that organizations test their disaster recovery plans to ensure that they work and also meet
the requirements laid out in service level agreements (SLAs). Testing should not be a one-off occurrence,
but a regular scheduled event; a data center is a dynamic environment, and it is possible that even a small
change can have an impact on a disaster recovery plan.
Additional considerations
This module focuses on planning high availability and disaster recovery by using SQL Server 2014
AlwaysOn technologies. To be truly comprehensive, however, your plans should include additional
considerations, such as:
Redundancy solutions for network components such as network cards, switches, and routers.
Implement identical SQL Server logins on each SQL Server instance. Using Active Directory domain
logins reduces complexity in this area because you do not need to ensure that passwords are the
same on all instances.
A mechanism for ensuring that SQL Agent jobs can run on all instances.
Note: SQL Server 2014 Standard Edition and SQL Server 2014 Business Intelligence Edition
support two-node FCIs. An FCI that includes more than two nodes will require the use of SQL
Server 2014 Enterprise Edition.
An example of a high availability and disaster recovery solution based on a multi-site FCI might include
the following components:
A WSFC. All nodes in the cluster are members of the same Active Directory domain.
A primary site in the FCI that includes an active cluster node and a second passive node for local
failover. This provides high availability in the primary data site. You could consider adding a third
node to ensure an odd number of nodes that have a quorum vote, and to ensure that local high
availability remains in place, even if the active node fails.
A disaster recovery site in the FCI that includes two nodes. If the primary site fails, you can use the
forced quorum option to start the Cluster Service and bring up the SQL Server FCI on one of these
nodes. The second node provides high availability in the disaster recovery site.
Consider the following points for high availability and disaster recovery solutions that use a multi-site FCI:
To ensure that data remains available after a disaster, a multi-site FCI requires a storage solution in
each site, and relies on hardware-based replication to copy data from the shared storage in the
primary site to the shared storage in the secondary site. For this reason, the recommended shared
storage solution for a multi-site FCI is two storage area networks (SANs). When you create a multi-site
FCI, you should skip the storage validation step during configuration, and when prompted confirm
that you do not require support from Microsoft for the cluster. In fact, selecting this option does not
disqualify you from obtaining support from Microsoft because a multi-site FCI is a supported
configuration.
You can use a multi-site FCI regardless of the recovery model that your databases use, unlike
AlwaysOn Availability Groups, which require databases to use the FULL recovery model.
When you configure a multi-site FCI, the IP address resource dependency is automatically set to use
an OR dependency.
A WSFC.
A disaster recovery site with a secondary replica. The primary replica in the primary site can copy data
to the secondary replica in the disaster recovery site by using either synchronous-commit or
asynchronous-commit mode. The mode you choose will depend on the RPO (how much data loss is
acceptable) and the performance requirements of the system. With asynchronous-commit mode,
some data loss is likely on failover; the exact amount of data loss will vary between systems, and it
depends on factors such as the size and frequency of transactions, as well as the performance
characteristics of the network. Synchronous-commit mode ensures no data loss, but each transaction
takes longer to commit than with asynchronous-commit mode, so response times are comparatively
longer.
You should consider the following points when planning to use an AlwaysOn Availability Group for high
availability and disaster recovery:
All servers in the Availability Group need to be in the same Active Directory domain.
Like the AlwaysOn FCI solution, using an AlwaysOn Availability Group for high availability and disaster
recovery requires a WSFC to support it. However, a key difference is that an AlwaysOn Availability
Group does not require shared storage, which means that the hardware and implementation costs are
usually lower.
An AlwaysOn Availability Group delivers high availability and disaster recovery for a single database
or group of databases, not for the entire SQL Server instance.
You can use the Availability Group secondary replicas, including the secondary replica at the disaster
recovery site, as active secondaries. This enables organizations to make more efficient use of server
resources.
All databases that participate in an AlwaysOn Availability Group must use the FULL recovery model.
You can use up to eight secondaries if required, enabling you to scale out read-only workloads,
improve response times by adding active secondaries to local sites such as branch offices, and add
resiliency by placing replicas in more sites.
Removing the quorum vote from the secondary replica in the disaster recovery site is recommended
practice. If the primary site does not contain an odd number of WSFC nodes, you can use the Node
and Fileshare Majority quorum model and add a file share witness to ensure that the cluster has an
odd number of quorum votes.
Failover in the primary site is automatic, but if a disaster occurs and you need to fail over to the
disaster recovery site, you must perform a manual failover operation. To do this, you need to carry
out the following actions in the disaster recovery site:
o Force quorum on the cluster node that hosts the secondary replica in the disaster recovery site
and start the Cluster Service.
o Force failover of the availability group. You can use the Transact-SQL ALTER AVAILABILITY
GROUP statement with the FORCE_FAILOVER_ALLOW_DATA_LOSS option to do this. You can also
use SQL Server Management Studio (SSMS) or PowerShell.
o Change the quorum voting configuration by removing votes from the nodes in the primary site
and giving a vote to the node in the disaster recovery site.
Because an Availability Group only protects databases, you will need to transfer logins and SQL
Server Agent jobs separately.
After failover to the disaster recovery site, the service is not highly available. To make it highly
available, you could add a secondary replica in the disaster recovery site.
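The forced failover step described above can be sketched in Transact-SQL as follows; the availability group name is taken from this module's lab and is illustrative only:

```sql
-- Sketch: force failover of an availability group to the local secondary
-- replica, accepting possible data loss. Run this on the disaster recovery
-- replica; the group name [MIA-SQL-AG] is the lab example, not a requirement.
ALTER AVAILABILITY GROUP [MIA-SQL-AG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```

After a forced failover, remember to resume data movement on the former primary replica once it comes back online, because its databases will be in a suspended state.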
In versions of SQL Server prior to SQL Server 2012, Availability Groups are not supported, but you can
create a similar solution by using synchronous Database Mirroring in the primary site to provide high
availability and automatic failover, and using log shipping between the principal server in the primary site
and a log shipping secondary in the disaster recovery site to provide a disaster recovery solution. This
solution does not take advantage of WSFC, and the RPO and RTO for disaster recovery are typically
relatively high values. Log shipping delays the application of the log to the secondary server, and the log
shipping schedules for backing up the log, copying it to the secondary, and restoring it, will strongly
influence RPO and RTO.
A single WSFC, with all cluster nodes belonging to the same Active Directory domain.
Two AlwaysOn FCIs, one in the primary site and one in the disaster recovery site. Each FCI has its own
shared storage.
In the primary site FCI, there is an active node and a passive node. The active node is a primary replica
in an AlwaysOn Availability Group. The FCI, including the Availability Group primary replica, on the
active node can fail over to the passive node to ensure local high availability.
In the disaster recovery site FCI, there is an active and a passive node. The active node has a
secondary replica of the AlwaysOn Availability Group.
The availability set of databases is copied from the primary replica in the primary site FCI to the
secondary replica in the disaster recovery site FCI.
When planning a combined AlwaysOn FCI and AlwaysOn Availability Group solution, you should consider
the following points:
The solution requires a single WSFC and a SQL Server AlwaysOn FCI at the primary site. You can use
either a second SQL Server AlwaysOn FCI or a stand-alone SQL Server instance installed on a WSFC
node in the disaster recovery site. The advantage to using a SQL Server AlwaysOn FCI at the disaster
recovery site is that you can ensure continued high availability at the instance level after a disaster.
Each site has its own storage, which is not visible to the nodes in the other site. When shared cluster
storage is only shared between some of the nodes in a cluster, this is referred to as asymmetric
storage. Windows Server 2012 supports asymmetric storage, as does Windows Server 2008 R2 with
Service Pack 1.
You can use the secondary replica in the disaster recovery site as an active secondary if required.
All databases that participate in the Availability Group availability set must use the FULL recovery
model.
As with the AlwaysOn Availability Group solution described in the previous topic, you should remove
quorum votes from the cluster nodes in the disaster recovery site. If there is an even number of nodes
in the primary site, you can use one of these two configurations:
o Node and disk majority, using the asymmetric storage as a disk witness.
Failover between FCI nodes is automatic, but when you install an AlwaysOn Availability Group on an
FCI, failover between the primary replica and the secondary replica in the Availability Group is a
manual operation. To fail over to the disaster recovery site, perform the following steps:
o Force quorum on the cluster node that hosts the secondary replica in the disaster recovery site
and start the Cluster Service.
o Force failover of the availability group. You can use the Transact-SQL ALTER AVAILABILITY
GROUP statement with the FORCE_FAILOVER_ALLOW_DATA_LOSS option to do this. You can also
use SSMS or PowerShell.
o Change the quorum voting configuration by removing votes from the nodes in the primary site
and giving a vote to the node in the disaster recovery site.
Because an Availability Group only protects databases, you will need to transfer logins and SQL
Server Agent jobs separately.
Lesson 2
Although Azure data stores and virtual machines include built-in fault tolerance, you should still plan to
use high availability and disaster recovery technologies to ensure your services and applications can meet
the RPO and RTO requirements that are expected of them. This lesson explains the options for creating
high availability and disaster recovery solutions in scenarios that include Azure services.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the high availability and disaster recovery options in Azure SQL Database.
Describe the high availability and disaster recovery options for databases in SQL Server instances in
virtual machines in Azure, and explain the considerations for each solution.
Describe the high availability and disaster recovery options for databases in hybrid IT scenarios, and
explain the considerations for each solution.
Every Azure SQL Database instance runs simultaneously on three replicas in the same data center.
Although the replicas are in the same data center, each runs on a separate server rack from the other two
replicas, and uses distinct network routers, and so on. This ensures that the loss of any one replica due to
hardware failure in a server, for example, will still leave two more replicas in place and running. One is
referred to as the primary replica, and the other two as secondary replicas. The primary replica processes
all updates and replicates them asynchronously to the secondary replicas. A transaction is only considered
committed when it has been written to both the primary replica and one of the secondary replicas.
Keeping one secondary replica transactionally consistent with the primary replica ensures that, if the
primary should fail, the secondary can take over from it without losing data or introducing integrity issues.
When the primary replica fails over to a secondary, there may be a brief loss of service. You do not need
to change the connection string that clients use to connect to an Azure SQL Database instance if a failover
occurs, but you should ensure that you design applications to use retry logic so they will automatically
reconnect.
Note that Azure SQL Database does not provide an RTO or RPO for failovers. On failure of any one of the
three replicas, Azure SQL Database automatically creates a new replica to ensure that there are three
available at all times.
Reference Links: For more information about configuring client connections to Azure SQL
Database, see the Azure SQL Database Connection Management article on the TechNet wiki
website.
Azure SQL Database also creates frequent incremental backups, which it stores in the same data center.
The backups are stored for up to 14 days and Azure SQL Database can use them to restore any lost data if
necessary. Note, however, that these backups are for internal use only, and it is not possible for
organizations to use them to restore their data.
As noted in the topic High Availability and Disaster Recovery in the previous lesson, high availability and
disaster recovery solutions do not typically provide a mechanism for recovering from logical data
corruption, such as the accidental deletion or updating of data. You can create copies of databases in
Azure SQL Database and store them in the data center. You can then use these copies to restore the
database to a point in time before the logical corruption was introduced. It is a good idea to take a copy
of a database before performing any major updates so that you can revert the database if required. You
can use the CREATE DATABASE Transact-SQL statement with the AS COPY OF clause to create a copy
database.
Reference Links: For more information about creating copies of databases in Azure SQL
Database, see the CREATE DATABASE (Azure SQL Database) topic on the MSDN website.
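As an illustration, creating a copy might look like the following. The database names here are hypothetical, not from the course; `sys.dm_database_copies` is the Azure SQL Database view for monitoring copy progress.

```sql
-- Create a copy of SalesDb before a major update (runs asynchronously).
CREATE DATABASE SalesDb_PreUpgrade AS COPY OF SalesDb;

-- Monitor the progress of the copy operation from the master database.
SELECT * FROM sys.dm_database_copies;
```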
Consider the following points when planning to use database copies:
A database copy operation can take quite a long time to complete. If you are taking a copy prior to
updating a database, you should ensure that you leave enough time to complete the copy operation.
A finished database copy will reflect the state of the source database at the time when the copy
operation completed. Any changes to the source database between the start and end of the copy
operation are also made to the copy database.
If you use database copies, you will need to plan a schedule for managing them, for example by
defining a retention period for copies. Each database copy incurs the same Azure SQL Database fees
as the source database, but you are only charged from the point when the copy operation completes
until you delete the copy.
To use a copy database to replace a corrupted database, you can simply use the ALTER DATABASE
Transact-SQL statement to rename the corrupted database, and then rename the copy database to
the original database name. Because this is a metadata-only operation, it is faster than performing a
full restore.
You can copy databases on the same server as the source database or to a different server in the
same data center or sub-region. When copying to the same server, you can continue to use the same
logins to enable user access. When copying to a different server, the copy operation moves the
database users, but does not move the logins. You will need to use the ALTER USER Transact-SQL
statement with the WITH LOGIN clause to create an associated login for each user on the server that
hosts the copy database.
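Sketched in Transact-SQL, the rename swap and the login re-mapping described above might look like the following. All object names are invented for illustration.

```sql
-- Set the corrupted database aside, then promote the copy.
-- Both renames are metadata-only operations, so they complete quickly.
ALTER DATABASE SalesDb MODIFY NAME = SalesDb_Corrupt;
ALTER DATABASE SalesDb_PreUpgrade MODIFY NAME = SalesDb;

-- If the copy was made to a different server, re-associate each copied
-- database user with a login created on that server.
ALTER USER SalesAppUser WITH LOGIN = SalesAppLogin;
```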
You can use the SQL Database Import/Export service to copy databases to a new location. The SQL Database
Import/Export service copies a database as a BACPAC file. You can export the database to a server in a
different data center, or import the database to an on-premises storage location, assuming that the
organization's premises are geographically remote from the data center.
Reference Links: For more information about exporting a database by using the SQL
Database Import/Export service, see the article How to: Import and Export a Database (Azure SQL
Database) on the MSDN website.
Geo-replication
By default, Azure uses geo-replication to automatically copy Blob and Table data to a second data center.
Geo-replication enables protection against the failure of the data center. The primary and secondary
locations for data that is geo-replicated are fixed and predictable. For example, if an organization's data is
in the data center in the East US region, the secondary location will be in the West US region. While
geo-replication is a very useful backup to have, it does not form a complete disaster recovery solution because
there is no guaranteed recovery time (RTO) or guarantee against data loss (RPO) with this solution.
You can disable geo-replication if required. For example, if you create a database in an instance of SQL
Server in a virtual machine in Azure, you must disable geo-replication if you want to place the data and
log files on different volumes.
You can use AlwaysOn Availability Groups or Database Mirroring to provide high availability for databases
in SQL Server instances running on virtual machines in Azure. The process of configuring Availability
Groups and database mirroring is very similar to configuring them in an on-premises context.
AlwaysOn Availability Groups require a WSFC to support them, and WSFCs require that all cluster nodes
are members of the same Active Directory domain. Consequently, you can only use AlwaysOn Availability
Groups in Azure if you have an Active Directory domain in place. A basic configuration might include
three virtual machines that are all in the same affinity group, virtual network, subnet, and cloud service.
One is a domain controller, and the other two form a two-node WSFC, with one node hosting the primary
replica in an Availability Group and the other hosting the secondary replica. In this configuration, you
would need to use a file share witness on the domain controller to ensure that there is an odd number of
quorum votes. Alternatively, you could add a third cluster node and use the Node Majority cluster model.
You could then use this node to host a further secondary replica, which you could use as an active
secondary to offload reporting workloads.
Database Mirroring
Database mirroring does not require a WSFC, and you can configure it to run in an Active Directory
domain environment or by using server certificates if there is no domain in place. A typical
implementation with server certificates would require three servers, one to act as the principal server, a
second to act as the mirror, and a third to act as the witness to enable automatic failover. If automatic
failover is not required, you can remove the witness server from the configuration.
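As a sketch, the certificate-based endpoint configuration on one mirroring partner might look like the following. The endpoint name, certificate name, and port are illustrative values, not from the course.

```sql
-- Create a database mirroring endpoint that authenticates partners by
-- certificate rather than by Windows domain credentials.
CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE MirroringCert,
        ROLE = ALL
    );
```

Each partner also needs a database master key, its own certificate, and logins and users mapped to the other partners' certificates; those steps are omitted here.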
Note: Database Mirroring is a deprecated feature in SQL Server 2014 and will be removed
in a future release.
Availability Groups offer many advantages over Database Mirroring, including failover for multiple
databases, up to nine replicas in total, the ability to make secondary replicas active to service read-only
workloads, configurable failover policies, and the ability to use secondary replicas for taking database
backups. Azure Infrastructure Services fully supports the use of Availability Group Listeners, which enables
you to take advantage of all the benefits of Availability Groups in Azure. You should only consider using
Database Mirroring if it is not possible to use an AlwaysOn Availability Group. For example, if there is no
Active Directory domain available, you can implement Database Mirroring with server certificates.
A disaster recovery solution should ensure that if the Azure data center fails, you can recover and re-establish services from another data center. You cannot implement a disaster recovery solution based on
AlwaysOn Availability Groups for SQL Server instances in virtual machines in Azure. This is because it is not
possible to create domains and virtual networks that span multiple Azure data centers. Instead, you can
use Database Mirroring with server certificates, and backups in Azure Blob storage.
Database Mirroring. With database mirroring, you configure the Principal server in one Azure data
center, and the Mirror server in a second data center, using server certificates for authentication.
Backups to Azure Blob storage. By backing up to Blob storage in a different data center, the database
is available for restore if the primary data center fails.
AlwaysOn Availability Groups require a WSFC, so this solution must include a multi-subnet WSFC to
enable the Availability Group to span the two sites. The multi-site cluster should use a virtual private
network (VPN) to provide secure connectivity.
Within the on-premises site, the replicas can use synchronous-commit mode to ensure that there is
no data loss on failover. The on-premises primary and the secondary in Azure would probably use
asynchronous-commit mode to provide better performance.
For disaster recovery purposes, you should also add a domain controller to the Azure site so that the
secondary can successfully service user requests after failover.
The RPO for this solution is the point at which the primary and secondary replicas last successfully
synchronized.
Database Mirroring. Database Mirroring offers a simple disaster recovery solution for an individual
database. In this scenario, the Principal server is hosted on site and the Mirror server is hosted in a virtual
machine in Azure. This solution does not require the servers to be members of the same domain because
you can use server certificates to secure communications. Additionally, you do not need to configure a
WSFC or a VPN to support this configuration. The RPO for this solution is the point at which the Principal
and the Mirror last successfully synchronized.
Log Shipping. Like Database Mirroring, log shipping is a relatively simple way to implement a disaster
recovery solution. In this scenario, you would place a log shipping primary server in the on-premises site
and host a secondary server in a virtual machine in Azure. A VPN secures the connection between the two
servers. Log shipping involves copying database backups from a windows file share, so an Active Directory
domain is required to support this. You should place domain controllers in both sites to support recovery.
With log shipping, you can add multiple secondary servers, so you could create additional disaster
recovery sites. The RPO for this solution is the point at which the last backup was successfully applied to
the secondary server.
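Under the covers, each log shipping cycle amounts to a backup, copy, and restore sequence such as the following. The database name and file share path are hypothetical.

```sql
-- On the primary server: back up the transaction log to the file share.
BACKUP LOG SalesDb TO DISK = '\\fileserver\logship\SalesDb_001.trn';

-- On the secondary server (in Azure): restore the copied backup, leaving
-- the database in a restoring state so later log backups can be applied.
RESTORE LOG SalesDb FROM DISK = '\\fileserver\logship\SalesDb_001.trn'
WITH NORECOVERY;
```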
Backup to Azure Blob storage. In this scenario, you back up on-premises databases to an Azure Blob
store. While this solution protects the database, it does not enable access to a SQL Server instance
that hosts the database, and you must restore the database to a new server before you can access it.
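In SQL Server 2014, a backup to Azure Blob storage uses the TO URL syntax together with a credential that holds the storage account key. The account, container, and credential names below are placeholders.

```sql
-- Credential holding the storage account name and access key (placeholders).
CREATE CREDENTIAL AzureBlobCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage-access-key>';

-- Back up directly to a blob container hosted in another data center.
BACKUP DATABASE SalesDb
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
WITH CREDENTIAL = 'AzureBlobCredential';
```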
You have been charged with reassessing the high availability and disaster recovery systems that are in use
across the Adventure Works company database infrastructure. You will need to examine the current high
availability and disaster recovery solutions where they exist, and suggest how they could be improved to
make them more resilient, and cost-effective. Where there is no high availability or disaster recovery
solution in place, you must supply an appropriate solution. For the purposes of this assessment exercise,
management have not placed any specific financial constraints on your planning. Although there is an
emphasis on discovering solutions that offer the best value for money, the key task at this stage is to
identify the best solution for each scenario.
In this lab, you will examine three different scenarios and plan a solution for each one that includes both
high availability and disaster recovery. Your plans can include both on-premises SQL Server technologies
and Azure services as required.
Objectives
After completing this lab, you will have:
The Online Sales database supports the company's online sales application. Over the last few years, the
online sales channel has grown to become the largest generator of revenue for Adventure Works, so the
Online Sales database is a vital part of the infrastructure. The following high availability and disaster
recovery solution is currently in place:
The database is hosted on a SQL Server 2008 R2 cluster instance on a two-node WSFC. The cluster is
located at headquarters on the 10.1.0.0/16 network.
There is a second identical cluster that is hosted at a disaster recovery site that uses the 10.2.0.0/16
network.
Each cluster uses a SAN for shared storage, and SAN replication keeps the two SANs synchronized.
While this solution is effective and enables both high availability and disaster recovery, there are two key
issues, identified by management, that your new plan should address:
Management do not feel that the investment in the SAN has been cost-effective so far. In particular,
the cost of SAN replication is very high.
There are issues with the quorum configuration that have caused occasional outages in the past, and
which need to be resolved.
In the new solution, local failover must occur automatically. Some data loss is acceptable on failover to the
disaster recovery site.
You and a colleague will individually assess the current setup, and devise a new plan. You will then explain
your plans to each other and attempt to decide on a single course of action.
The main tasks for this exercise are as follows:
1. Plan the Solution
2. Compare Different Solutions
3. Review the Suggested Solution
1. Start the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines, and log on to 20465C-MIA-SQL as
   ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Review the current high availability and disaster recovery setup as described in the exercise scenario.
3. Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
   following in your plan:
   - The details of the configuration. For example, if using a Windows Server Failover Cluster, what
     quorum configuration will you use? If using an Availability Group, how will the replicas
     synchronize? How many sites will there be?
   - The anticipated RPO for disaster recovery. This should not be an actual value; approximations
     such as "the solution enables recovery to the point of last synchronization" are adequate.
1. Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
   you have chosen the solution that you have. As you listen to your partner's solution, make notes, ask
   questions to clarify details, and offer your opinion if you think that you might be able to add to the
   solution.
2. Compare your solutions and decide together which you think would be the best one to present to
   management. Alternatively, you might decide on a third solution that is different to the ones that you
   discussed in step 1.
1. On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
   Exercise1_Suggested_Solution.doc.
2. Review the suggested solution and compare it to the one you discussed in the previous task.
administrators have concerns that performance and storage space might become an issue in the future. In
addition, a manager in the Human Resources department has requested that Human Resources data
should be hosted separately from line-of-business databases to comply with the company's data
protection rules. These rules also state that Human Resources data cannot be stored off-premises. You will
plan a high availability and disaster recovery solution for the Human Resources database. In your plan,
consider the following points:
A small amount of data loss is permissible if failover to the disaster recovery site occurs.
The primary site for the database will be at the company headquarters.
Two branch offices also require access to the database to enable Human Resources personnel to
access information when dealing with queries from employees. In the past, users at the branch offices
have complained about slow response times.
All servers in the sites concerned belong to the same Active Directory domain.
1. Review the current high availability and disaster recovery setup as described in the exercise scenario.
2. Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
   following in your plan:
   - The details of the configuration. For example, if using a Windows Server Failover Cluster, what
     quorum configuration will you use? If using an Availability Group, how will the replicas
     synchronize? How many sites will there be?
   - The anticipated RPO for disaster recovery. This should not be an actual value; approximations
     such as "the solution enables recovery to the point of last synchronization" are adequate.
1. Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
   you have made your choices. As you listen to your partner's solution, make notes, ask questions to
   clarify details, and offer your opinion if you think you might be able to add to the solution.
2. Compare your solutions and decide together which you think would be the best one to present to
   management. Alternatively, you might decide on a third solution that is different to the ones you
   discussed in step 1.
1. On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
   Exercise2_Suggested_Solution.doc.
2. Review the suggested solution and compare it to the one you discussed in the previous task.
As part of its long-term strategy to establish a stronger presence in the European market, Adventure
Works has recently opened a small office in the United Kingdom. This office has its own databases, including
databases for Sales and Human Resources, which it maintains independently of the databases in the United States. None
of these databases are currently very large, but management anticipate a significant growth in sales over
the next few years.
The databases are hosted on a SQL Server 2008 instance, which is the principal server in a Database
Mirroring configuration. The mirror server is located in the same office and there is a witness server to
enable automatic failover. The database administrator has created a backup schedule and copies of the
backups are stored off site for disaster recovery purposes. Management are concerned about the RTO and
RPO that this solution provides, and would like to create a solution that enables faster resumption of
service and minimizes data loss. A further concern is the cost of transporting and storing the backups. You
must devise a high availability and disaster recovery solution for the UK office that addresses these
concerns. Consider the following additional information as you plan your solution:
The UK office infrastructure includes a single Active Directory domain with two domain controllers.
Management will sanction the purchase of some new hardware and the upgrading of existing
software, as long as the solution meets the stated requirements. However, the budget is limited.
The proposed solution needs to be flexible so it can address the current requirements and expand
quickly if the anticipated increase in sales occurs.
1. Review the current high availability and disaster recovery setup as described in the exercise scenario.
2. Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
   following in your plan:
   - The details of the configuration. For example, if using a Windows Server Failover Cluster, what
     quorum configuration will you use? If using an Availability Group, how will the replicas
     synchronize? How many sites will there be?
   - The anticipated RPO for disaster recovery. This should not be an actual value; approximations
     such as "the solution enables recovery to the point of last synchronization" are adequate.
1. Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
   you made your choices. As you listen to your partner's solution, make notes, ask questions to clarify
   details, and offer your opinion if you think that you might be able to add to the solution.
2. Compare your solutions and decide together which you think would be the best one to present to
   management. Alternatively, you might decide on a third solution that is different to the ones that you
   discussed in step 1.
1. On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
   Exercise3_Suggested_Solution.doc.
2. Review the suggested solution and compare it to the one you discussed in the previous task.
Planning high availability and disaster recovery solutions by using SQL Server 2014.
Planning high availability and disaster recovery solutions by using SQL Server 2014 and Azure
services.
Review Question(s)
Question: Think about how high availability and disaster recovery implementations work in
your own organization. Can you think of any ways in which you could improve the solutions
that are currently in place?
Module 13
Replicating Data
Module Overview
Enterprise organizations often need to support multiple sites that require access to the same data. SQL
Server Replication is a technology that you can use to synchronize data across multiple SQL Server
instances, making it possible to distribute data throughout an enterprise organization. Replication can
also improve availability by helping to ensure that if one server becomes unavailable, others can serve
users; it also helps you to improve scalability by enabling multiple servers to share a common workload.
This module provides an overview of SQL Server replication and explains the agents used to implement it.
It also describes some common replication scenarios, how to design an appropriate replication system for
your requirements, and how to monitor and troubleshoot replication.
Objectives
After completing this module, you will be able to:
Lesson 1
There are many choices and considerations to make before you can begin to plan your SQL Server
replication system. These include deciding the type of replication to use, which determines the editions of
SQL Server you can use, and how you should secure your system.
This lesson provides an overview of SQL Server 2014 replication, describes the types of replication that
SQL Server supports, and summarizes the replication agent and security features that you should consider.
Lesson Objectives
After completing this lesson, you will be able to:
Reduced network traffic. If your organization has multiple offices in physically separate locations,
you can use replication to enable each site to work with its own copy of the database and merge the
changes back to the master copy and on to the other sites regularly.
Offline processing. If users need to access data when they are disconnected from the main network,
you can use replication to create an offline copy of data that they work with locally, and then
replicate changes with the central copy when they reconnect to the network.
Replication Terminology
SQL Server replication uses a publishing metaphor
to describe the key components in a replication
system.
Publishing metaphor
Replication uses publisher, subscriber, and
distributor terminology, as described in the
following table. This is based on a magazine
publishing scenario; however, certain types of SQL
Server replication extend this metaphor by allowing
subscribers to update data.
Term           Description
-------------  ------------------------------------------------------------------
Publisher      A SQL Server instance that makes data and database objects
               available to other locations by using replication.
Article        A database object, or a subset of the data in an object, that is
               included in a publication.
Publication    A collection of one or more articles from a database.
Subscriber     A SQL Server instance that receives replicated data.
Subscription   A request for a copy of a publication to be delivered to a
               subscriber.
Distributor    A SQL Server instance that stores replication data for one or more
               publishers while it is being moved to subscribers.
Node           A SQL Server instance that sends or receives data in the
               replication process; for example, a publisher or a subscriber.
When you define a publication, you can specify that a particular article includes only a subset of data from
an object. By reducing the data being replicated, you can reduce synchronization times, subscriber disk
space requirements, and network traffic. You can filter data either by column, by specifying a subset of
columns to include in the article; or by row, using a WHERE clause in the article definition. You can even
create parameterized row filters that enable you to publish different subsets of rows to different
subscribers when using merge replication.
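As a sketch, a merge article with a parameterized row filter might be defined as follows. The publication and table names are invented for illustration; sp_addmergearticle is the documented procedure for adding merge articles, and its @subset_filterclause parameter supplies the WHERE condition.

```sql
-- Publish only the rows that belong to each subscriber: HOST_NAME() is
-- evaluated per subscriber, so each one receives a different partition.
EXEC sp_addmergearticle
    @publication = N'SalesPublication',
    @article = N'Orders',
    @source_owner = N'dbo',
    @source_object = N'Orders',
    @subset_filterclause = N'SalesRegion = HOST_NAME()';
```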
SQL Server supports both push subscriptions and pull subscriptions. For push subscriptions, the articles are
pushed from the distributor to the subscribers. For pull subscriptions, the articles are pulled from the
distributor by the subscribers.
Latency defines the period of time during which copies of the replicated data may not be identical to
each other because the replication process is incomplete. Depending on the type of replication that you
are using, latency can vary from a few seconds to many days.
Autonomy defines the ability to use the replicated database without connecting to other databases in the
replication system. Again, the type of replication that you use determines the autonomy of your
databases.
Types of Replication
There are three key types of replication in SQL
Server 2014: snapshot, transactional, and merge.
Each provides different levels of autonomy and
latency that make each type of replication suitable
for distinct scenarios.
Snapshot replication
Snapshot is the simplest type of replication. When
you initiate this, SQL Server takes a copy of the
publication at that point in time and distributes it to
subscribers. It does not monitor changes made to
the publication after that point in time; however, if
you later opt to synchronize a subscriber, a new
copy of the database is taken and redistributed to the subscriber.
Snapshots can be generated on an impromptu basis, such as when an administrator decides that the
subscribers need updating, or on a schedule that you define when you configure the publication.
Snapshot replication exhibits high latency because the changes made at the publisher are only sent to the
subscriber when a new snapshot is distributed. It is also highly autonomous because the subscriber has a
complete copy of the data and database objects, and can work without a connection to the original
publisher.
Note: Snapshot replication can be used as a replication solution in its own right; however, it
is often used to generate the initial data set for other types of replication.
It recreates the schema on the subscriber and inserts the data into the new schema.
Snapshot replication can take a long time to complete because it copies the entire data set each time;
therefore, it is best used when data is updated infrequently.
Transactional replication
Transactional replication enables you to track and distribute changes, including both data and schema
changes, made to a publication after the initial snapshot is distributed. Changes are tracked at the
publisher, passed to the distributor, and then on to the subscribers as they occur. This results in low
latency because the subscribers are only marginally out of synchronization with the publisher. Data
changes are applied to subscribers in the same order that they occurred at the publisher and with
identical transactional boundaries. This helps to ensure that the data remains consistent across all copies
of the database.
It copies and applies transactions made at the publisher to each of the subscribers.
By default, data at subscribers should be treated as read-only because any changes made to that data will
not be synchronized back to the publisher. However, there is a specialized form of transactional
replication, peer-to-peer (PTP) transactional replication that enables changes to be made at other nodes
in the system.
PTP transactional replication extends the standard transactional replication and enables changes to be
made to data at any node in the system; however, an individual row of data should be changed at only one
node at a time. Each node contains a writeable copy of the data and acts as both a publisher and
subscriber, so that changes made at any node can be published from that node and subscribed at all
others.
During synchronization, the transactions that occurred at a subscriber are applied to the publisher before
the transactions at the publisher are applied to the subscriber.
The replication system itself helps ensure that changes to a row are distributed only once to each node
within the system and detects any conflicts that may occur. In the event of a conflict, SQL Server suspends
processing changes at the affected node to avoid data inconsistency. You should then reinitialize that
node from one that contains consistent data.
Merge replication
Merge replication takes transactional replication one stage further by enabling users to make changes to
data and schema at any subscriber, propagate these changes back to the publisher, and then propagate
them on to the other subscribers. In merge replication, transaction ordering is not maintained; changed
rows are simply merged between databases.
You can either partition data across the subscribers to avoid the same data being changed in two
locations or implement conflict detection and resolution to handle conflicts when they occur. By default,
the publisher wins any conflicts with subscribers. If you want to override this setting, you can write your
own custom resolvers to handle conflicts appropriately.
When you configure merge replication, SQL Server modifies the subscription database to include a unique
identifier for each row in an article (unless the rows already contain a column with the ROWGUIDCOL
property) and to add change tracking tables to the database. These new tables track which rows in an
article are changed at that node so that only the changed rows are merged during synchronization.
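The uniqueidentifier column that merge replication requires can also be added ahead of time, which avoids a schema change at configuration. The table, column, and constraint names here are illustrative.

```sql
-- Add a ROWGUIDCOL column so merge replication can uniquely identify
-- each row in the article across all nodes.
ALTER TABLE dbo.Orders
ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL
    CONSTRAINT DF_Orders_rowguid DEFAULT NEWSEQUENTIALID();
```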
It tracks changes made to the data at the publisher and each subscriber, and then merges data changes made at all nodes with the data at each node.
Because data can be modified at any node, conflicts can occur. When this happens, a conflict resolver
automatically determines which data should be accepted and propagated to the other nodes. You can
view the conflict and outcomes, and even manually change an outcome, if required.
The different editions of SQL Server 2014 offer varying levels of support for replication, as described in the
following table:
Edition                 Snapshot          Transactional     PTP transactional   Merge
                        replication       replication       replication         replication
----------------------  ----------------  ----------------  ------------------  ----------------
Enterprise              Yes               Yes               Yes                 Yes
Business Intelligence   Yes               Yes               No                  Yes
Standard                Yes               Yes               No                  Yes
Web                     Subscriber only   Subscriber only   No                  Subscriber only
Express                 Subscriber only   Subscriber only   No                  Subscriber only
Replication Agents
SQL Server 2014 uses replication agents to actually
perform the replication tasks that a system requires.
Each type of replication uses a different subset of
the available agents to complete its work. SQL
Server Agent schedules the replication agent tasks,
but you can also run them from the command line,
from batch files, or by developing applications that
use the Replication Management Objects (RMO).
Note: By default, the SQL Server Agent
service is set to manual startup when you install SQL
Server, although you can configure this during
installation. If the service is disabled, you must enable it for replication.
Replication agents
Replication agents include:
Snapshot Agent. The Snapshot Agent runs at the distributor. It creates the snapshot schema and
data files for both snapshot and other types of replication. It stores the files in the snapshot folder
and also records the synchronization of jobs in the distribution database. There is one snapshot agent
per publication.
Distribution Agent. The Distribution Agent runs at the distributor for push subscriptions and at the
subscribers for pull subscriptions. It applies the initial snapshot to subscribers in snapshot and
transactional replication. When using transactional replication, it also moves the transactions that are
stored in the distribution database to the subscribers.
Log Reader Agent. The Log Reader Agent runs at the distributor for transactional replication. It
monitors the transaction log of published databases and copies the transactions that occur at the
publisher to the distribution database on the distributor.
Merge Agent. The Merge Agent runs at the distributor for push subscriptions and at the subscriber
for pull subscriptions in merge replication. It applies the initial snapshot to the subscriber, and then
copies the data changes made at the publisher and subscribers to the other nodes in the replication
system. By default, it copies the changes made at the subscriber to the publisher before copying the
changes made at the publisher to the subscriber. It also resolves any conflicts that occur during the
merge process. There is one merge agent per merge subscription.
Scheduling replication
The replication of data can be configured as a continuous, scheduled, or on-demand process.
Continuous replication. By default, in continuous replication, for transactional replication, the log
reader agent and distribution agent poll every five seconds; for merge replication, the merge agent
polls once every minute. When any changes are detected, the agents perform their tasks of
replicating the changes to the appropriate servers. Continuous replication reduces latency between
publishers and subscribers; however, it can lead to an increase in network traffic. It is commonly used
in transactional replication scenarios.
Scheduled replication. You can use the SQL Server Agent scheduling services to schedule replication
agents to poll at predetermined intervals or at preconfigured times. Scheduled replication aids
off-hours replication and is commonly used for snapshot replication to avoid overloading servers during
peak hours.
On-demand replication. In on-demand replication, the replication agents are manually run by a user
or an application. This approach is commonly used by applications that provide a synchronization
facility as part of their core functionality, and is typical of merge replication.
Each replication agent has profile information that SQL Server uses as parameters when the agent runs.
These profiles enable you to set properties for the agent to configure how it works. The profiles are stored
at the distributor and when an agent starts, it queries the distributor for the parameters to use. This profile
system enables you to easily configure and maintain multiple agents in a single profile, while maintaining
the flexibility to customize the profile for an individual instance of an agent. You can also override the
profile settings by using command-line parameters when starting agents from a command prompt.
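For example, the Distribution Agent executable can be started directly from a command prompt, overriding selected profile settings with command-line parameters. The following sketch is illustrative only; it borrows the server, database, and publication names used in the demonstration later in this module, and the parameter values shown are assumptions that you would adjust for your own environment:

```
REM Run the Distribution Agent for a pull subscription, overriding the
REM polling interval from the agent profile (illustrative values only).
distrib.exe -Publisher MIA-SQL -PublisherDB DemoDB -Publication DemoPub -Distributor MIA-SQL -Subscriber MIA-SQL\SQL2 -SubscriberDB DemoDB -SubscriptionType 1 -PollingInterval 5
```

Here -SubscriptionType 1 indicates a pull subscription, and -PollingInterval overrides the profile's polling frequency in seconds.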
When planning a replication strategy, you need to decide where to place the Distribution Agent or Merge
Agent to help ensure that you do not adversely affect performance. When you place the Distribution
Agent or Merge Agent on the distributor, the distributor is responsible for queuing the published
replication data and for propagating it to the subscribers. This arrangement is usually referred to as push
replication. The alternative is pull subscription, in which Distribution Agents or Merge Agents run on the
subscribers. With pull replication, the distributor still hosts the publications, but it is the responsibility of
the subscribers to initiate a connection to the distributor and copy the available data.
Hosting the Merge Agent and Distribution Agent on a server incurs a performance cost, so you should
evaluate the impact of this in the context of the other workloads that each server has to handle. You
should consider the following points when deciding whether to use pull or push replication:
Do you have a separate publisher and distributor? When a server acts as both publisher and
distributor for a publication, it has a greater workload than it would if it held only one of these roles.
In this scenario, placing the Distribution Agent or Merge Agent on the subscriber reduces the overall
workload for a server.
How great is the workload on the subscribers? If the subscribers have a heavy workload, you
should consider placing the Distribution Agent or Merge Agent on the distributor to avoid overburdening the subscribers.
Securing Replication
Because replication often involves a widespread
distribution of data, it is essential that only
authorized users can set up a replication system.
Replicated data should be secured throughout the
process.
Every agent requires specific permissions for each different type of replication, as shown in the following
table:
Agent
Permissions
Snapshot Agent
Accessing publications
Each publication contains a publication access list (PAL) that defines which users can subscribe to it. You
can grant users access to a publication by adding them to this list.
All members of the sysadmin role, as well as the publication creator, are automatically added to the list;
however, members of the sysadmin role and the db_owner role can access the publication even if their
names are removed from the list. The creator of the publication and members of the db_owner role can
add logins to the list.
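The PAL can also be managed by using Transact-SQL system stored procedures. The following is a hedged sketch; the publication name matches the demonstration later in this module, and the login shown is hypothetical:

```sql
-- Run at the publisher, in the publication database.
-- Add a login to the publication access list (PAL).
EXEC sp_grant_publication_access
    @publication = N'DemoPub',
    @login = N'ADVENTUREWORKS\ReportUser';  -- hypothetical login

-- Review the current PAL for the publication.
EXEC sp_help_publication_access
    @publication = N'DemoPub';
```

sp_revoke_publication_access removes a login from the list in the same way.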
In this demonstration, you will see how to:
Create a publication.
Create a subscription.
Test replication.
Demonstration Steps
Create a Publication
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running and then log on
to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
Replicating Data
2.
3. Start SQL Server Management Studio and connect to the MIA-SQL database engine instance using
Windows authentication.
4. In SQL Server Management Studio, in Object Explorer, expand Replication, right-click Local
Publications, and then click New Publication.
5. In the New Publication Wizard, on the New Publication Wizard page, click Next.
6. On the Distributor page, ensure that the MIA-SQL will act as its own Distributor option is
selected, and then click Next.
7. On the Snapshot Folder page, review the default location for the snapshot folder, and then click
Next.
8. On the Publication Database page, click DemoDB, and then click Next.
9. On the Publication Type page, click Transactional publication, and then click Next.
10. On the Articles page, in the Objects to publish box, expand Tables, select the Product (Products)
check box, and then click Next.
11. On the Filter Table Rows page, click Next.
12. On the Snapshot Agent page, select the Create a snapshot immediately and keep the snapshot
available to initialize subscriptions check box, and then click Next.
13. On the Agent Security page, click Security Settings.
14. In the Snapshot Agent Security dialog box, in the Process account box, type
ADVENTUREWORKS\ServiceAcct, in the Password and Confirm Password boxes, type Pa$$w0rd,
click OK, and then on the Agent Security page, click Next.
15. On the Wizard Actions page, ensure that only Create the publication is selected, and then click
Next.
16. On the Complete the Wizard page, in the Publication name box, type DemoPub, and then click
Finish.
17. On the Creating Publication page, wait for the operation to complete, and then click Close.
Create a Subscription
1.
2. In the Connect to Server dialog box, in the Server name box, type MIA-SQL\SQL2, and then click
Connect.
3. In Object Explorer, under MIA-SQL\SQL2, expand Replication, right-click Local Subscriptions, and
then click New Subscriptions.
4. In the New Subscription Wizard, on the New Subscription Wizard page, click Next.
5. On the Publication page, in the Publisher list, click Find SQL Server Publisher.
6. In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click
Connect.
7.
8. On the Distribution Agent Location page, ensure that Run each agent at its Subscriber (pull
subscriptions) is selected, and then click Next.
9. On the Subscribers page, in the Subscription Database column, click New database.
10. In the New Database dialog box, in the Database name box, type DemoDB, and then click OK.
11. On the Subscribers page, click Next.
12. On the Distribution Agent Security page, click the ellipsis button.
13. In the Distribution Agent Security dialog box, in the Process account box, type
ADVENTUREWORKS\ServiceAcct, in the Password and Confirm Password boxes, type Pa$$w0rd,
and then click OK.
14. On the Distribution Agent Security page, click Next.
15. On the Synchronization Schedule page, ensure that Agent Schedule is set to Run continuously,
and then click Next.
16. On the Initialize Subscriptions page, ensure that Immediately is selected, and then click Next.
17. On the Wizard Actions page, ensure that only Create the subscription(s) is selected, and then click
Next.
18. On the Complete the Wizard page, review the configuration steps, and then click Finish.
19. On the Creating Subscription(s) page, wait for the operation to complete, and then click Close.
Test Replication
1. In Object Explorer, under MIA-SQL\SQL2, expand Databases, expand DemoDB, expand Tables,
right-click Products.Product, and then click Select Top 1000 Rows.
2. In the Results pane, review the information, noting that the query returned data for Product 1,
Product 2, and Product 3.
3. In Object Explorer, under MIA-SQL, expand Databases, right-click DemoDB, and click New Query.
4. In the query window, type the following Transact-SQL statement, and then click Execute:
INSERT INTO Products.Product
VALUES
('Product 4');
GO
5. Wait for 30 seconds, and then switch to the query tab you used to select the top 1000 rows from the
Products.Product table in the MIA-SQL\SQL2 instance, and click Execute.
6. In the Results pane, review the information, noting that the table now includes data for Product 4.
7. Close SQL Server Management Studio, and do not save any changes.
Validating replication
You can validate data at the subscribers by using
SQL Server Management Studio (SSMS). When you
mark a subscription or subscriptions for validation,
the next time either the Merge Agent or the
Distribution Agent runs, SQL Server will validate the
data. You can configure how SQL Server performs
validation by using the following row count and
checksum validation options:
Compute a fast row count based on cached table information. This option enables you to
perform relatively low-cost validation to minimize the impact of validation on performance.
Compute an actual row count by querying the tables directly. This option provides more accurate
information, but it can potentially have a greater impact on performance.
Compute a fast row count; if differences are found, compute an actual row count. This option,
which is the default, combines the two previous options to simplify the validation process.
Compare checksums to verify row data. Checksums are values calculated from the data. By
comparing the checksums from the distributor and its subscribers, SQL Server can identify
mismatched rows.
To mark subscriptions for validation in SSMS:
1. Expand the Replication node, expand the Local Publications node, right-click a publication, and
then click Validate Subscriptions.
2. In the Validate Subscriptions dialog box, configure the required validation options, as described
above.
3.
Reference Links: For more information about performing data validation, see the article
Validate Data at the Subscriber in the MSDN library.
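You can also mark subscriptions for validation by using Transact-SQL. The following sketch assumes the transactional publication created in the demonstration earlier in this module; the parameter value shown reflects the documented behavior of sp_publication_validation, where 2 requests a row count plus binary checksum comparison:

```sql
-- Run at the publisher, in the publication database.
-- Marks all subscriptions to the publication for validation the next
-- time the Distribution Agent runs.
EXEC sp_publication_validation
    @publication = N'DemoPub',
    @rowcount_only = 2;  -- 2 = row count and binary checksum
```

For merge publications, sp_validatemergepublication provides equivalent functionality.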
Monitoring Replication
SQL Server includes the Replication Monitor tool to enable you to track and troubleshoot replication. You
can open Replication Monitor by right-clicking a publication in SSMS, and then clicking Launch
Replication Monitor. Replication Monitor displays information about the individual agents, including the
Snapshot Agent, Log Reader Agent, Queue Reader Agent, and the various maintenance jobs, such as the
Expired subscription clean up job, that SQL Server runs to ensure that replication remains efficient. In
addition, Replication Monitor displays performance and latency information that you can use to identify
potential replication issues early on.
You can also configure alerts that will contact an operator whenever a specified event occurs, such as
agent failure, or data validation failures.
Note: When creating an alert, you have the option to use a Net Send command to contact
an operator. Net Send is a Windows utility that is no longer supported in Windows Server 2012.
Troubleshooting Replication
Replication problems generally fall into one of the
following categories, which are described below:
Actions at the subscriber roll back or prevent addition of the data. This can occur when the
subscriber includes a trigger that rolls back transactions, or when the stored procedure that SQL
Server uses with articles in transactional replication includes a condition that is not valid.
Failure of one or more agents. If an agent fails, or is not running, data will not be replicated
successfully.
You can perform the following actions to begin troubleshooting these issues:
Validate data. To verify that data is missing, you should first attempt to validate it. To discover
differences in the number of rows at the publisher and subscriber, use row count validation; to reveal
differences in the content of the rows at the publisher and subscriber, use checksum validation.
Reference Links: You can also use the tablediff command prompt utility to compare
replicated tables and identify mismatching data. For more information about the tablediff utility,
see the article How to: Compare Replicated Tables for Differences (Replication Programming) in
the MSDN library.
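A hedged sketch of a tablediff invocation follows, using the server, database, and table names from the demonstration earlier in this module (the output file path is hypothetical):

```
REM Compare the published table at the publisher with its copy at the
REM subscriber; -f writes a Transact-SQL script that fixes the differences.
tablediff.exe -sourceserver MIA-SQL -sourcedatabase DemoDB -sourceschema Products -sourcetable Product -destinationserver MIA-SQL\SQL2 -destinationdatabase DemoDB -destinationschema Products -destinationtable Product -f D:\Labfiles\ProductDiff.sql
```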
Verify filters. You can verify the filters that might apply to the articles to ensure that they are
replicating the correct data. You can do this by using the sp_helparticle, sp_helpmergearticle, and
sp_helpmergefilter stored procedures, and viewing the filter clause in the result sets.
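For example, for the transactional publication created earlier in this module, you might inspect an article as follows (a sketch; the filter clause appears in the result set returned by the procedure):

```sql
-- Run at the publisher, in the publication database.
EXEC sp_helparticle
    @publication = N'DemoPub',
    @article = N'Product';
```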
Verify agent status. Ensure that the appropriate agents are running and not reporting errors. For
merge replication, check the Merge Agent. For transactional replication check the Distribution Agent
and the Log Reader Agent. You can use Replication Monitor to verify the status of the agents.
Reference Links: For more information about verifying the status of replication agents, see
the articles How to: View Information and Perform Tasks for the Agents Associated with a
Publication (Replication Monitor) and How to: View Information and Perform Tasks for the Agents
Associated with a Subscription (Replication Monitor) in the MSDN library.
Ensure that Snapshot Agent has completed when initializing subscriptions. If the snapshot agent
has not completed, you will not be able to view the replicated data.
Data at the publisher is treated as read-only, but the data at the subscriber is updateable. If
this is the case, and you are using transactional replication, you should consider changing the method
of replication to merge replication, transactional replication with updating subscribers, or PTP
transactional replication. These methods of replication are designed to allow for changes at the
subscribers to be propagated as required.
Triggers are making changes to the data or rolling back transactions. When you need to create
triggers at the publisher, you can use the NOT FOR REPLICATION option of the CREATE TRIGGER
Transact-SQL statement to prevent those triggers from firing when data is replicated to subscribers.
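As an illustrative sketch of this option (the audit table and column names are hypothetical and not part of the demonstration database):

```sql
-- A trigger at the publisher that fires for ordinary user inserts but not
-- when a replication agent applies replicated changes.
CREATE TRIGGER Products.trg_Product_Audit
ON Products.Product
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    INSERT INTO Products.ProductAudit (ProductName, AuditDate)
    SELECT ProductName, GETDATE()
    FROM inserted;
END;
```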
Constraints at the subscriber prevent the addition of rows. In transactional replication, constraint
errors cause the Distribution Agent to stop synchronizing the data. In merge replication, constraint
errors do not stop synchronization, and the differing values that result from the constraint violation
are treated as conflicts that require resolution. To avoid constraint errors, by default, the NOT FOR
REPLICATION option is used to prevent the subscriber copy of the database from enforcing foreign
key and check constraints when replication inserts, updates, or deletes data.
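The same option applies to constraints. An illustrative sketch, with hypothetical table and column names:

```sql
-- A foreign key that user modifications must satisfy, but that is not
-- checked when a replication agent applies replicated changes.
ALTER TABLE Products.Product
ADD CONSTRAINT FK_Product_Category
    FOREIGN KEY (CategoryID)
    REFERENCES Products.Category (CategoryID)
    NOT FOR REPLICATION;
```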
You can use the validation methods described above and the tablediff utility to verify whether there is a
mismatch between the data sets.
Performance issues
Replication performance issues can be caused by a number of factors, including:
Network. If the network is subject to changes in usage load, or if other factors affect latency or
throughput, then replication performance will suffer. You can use various Distribution Agent and
Merge Agent parameters to control how data is replicated over the network. You can also configure
Windows network settings, such as the size of Transmission Control Protocol (TCP) segments, to manage
performance.
Large number of agents. If a large number of agents are running, memory shortages can occur.
Try to minimize the number of agents that run simultaneously on a single
machine, and consider the placement of agents on subscribers to reduce the impact on the publisher
and distributor.
Reference Links: For more information about memory problems due to replication agents,
see the article Running a Large Number of Agents is Causing Memory Problems in the MSDN
library.
To help to avoid performance problems, consider the following points when planning replication:
Minimize the size of the transactions that you will replicate. The less data that needs to be
replicated, the better performance will be.
Use a dedicated server as the distributor. A dedicated distributor reduces resource consumption
on the publisher.
Reference Links: For more information about publishing stored procedure replication, see
the article Publishing Stored Procedure Execution in Transactional Replication in the MSDN library.
Spread articles across multiple publications. This enables updates to different articles to be
performed in parallel rather than serially.
Reference Links: For more information about performance planning for replication, see
the article Enhancing General Replication Performance in the MSDN library.
Security issues
Common causes of security-related replication issues include:
Replication agent accounts do not have the correct rights and permissions.
Reference Links: For more information about security issues and replication, see the article
Security Issues are Preventing Data from Being Replicated in the MSDN library.
Reference Links: For more information about troubleshooting replication in general, see
the Troubleshooting Concepts (Replication) article in the MSDN library.
Lesson 2
Planning Replication
Now that you understand the types of replication and how they work, you can begin to plan your own
systems.
This lesson describes scenarios for each type of replication and the different topologies that you can use.
Lesson Objectives
After completing this lesson, you will be able to:
Scenarios in which these conditions might be true and where snapshot replication can be useful include:
Generating a product list. If your organization updates its products on an infrequent basis, you can
distribute the product list as a snapshot, generating a fresh one whenever new products are added to
the product line. In this scenario, the workload of generating the entire snapshot is outweighed by
the infrequency with which it is done.
Creating a daily copy of data for general reporting purposes. If you are running a reporting
system that is used for general data analysis instead of real-time reporting, you can create an
overnight job that generates a snapshot of the database, and use it on the next day as the source
data for reporting. This approach can reduce the load on your online transaction processing (OLTP)
server during the working day by running report queries against a different copy of the data.
Distributing tax or delivery rates. If you need to distribute tax rates, delivery rates, or other small
sets of data to many servers, the overhead of creating and applying a new snapshot is unlikely to
impact the overall performance of the system; the time taken to complete the process is unlikely to
delay the use of the new data.
Scenarios in which these conditions might be true and where transactional replication can be useful
include:
Updating order status. In a multi-server business-to-customer ordering system, you need to ensure
that, if a customer checks the status of an order, he or she sees up-to-date information. The low
latency of transactional replication systems can propagate status changes within seconds of them
being updated at the central location.
Financial transactions. When working with financial data, you will often find that multiple updates
need to be applied as one atomic transaction. By using transactional replication, you can help ensure
that the transaction is either applied completely at the subscriber or not at all.
Single-site systems. If your organization hosts all the subscribers at the same location as the
publisher, the local connection will likely be capable of supporting the network traffic associated with
transactional replication.
Scenarios in which these conditions might be true and where PTP transactional replication can be useful
include:
Scaling out read/write operations. When working with a large number of subscribers in
branch-based scenarios, you can improve performance by enabling them to work against their own subsets
of data that then synchronize to the central publisher. This is particularly useful when the data is
partitioned on a regional basis and updates to each partition are made in their relevant time zones.
Improving data availability. If one or more nodes require maintenance or suffer hardware failure,
users can continue to work with another of the subscribers until their own server becomes available.
Scenarios in which these conditions might be true and where merge replication can be useful include:
Regional offices maintaining their own data. If your organization has separate regional sites with
staff at each one responsible for that region's data, you can partition your database and enable each
regional office to subscribe to its own subset of the data. This avoids conflicts occurring during the
synchronization process, while enabling you to make the best use of bandwidth and server resources.
Sales personnel visiting client sites. Employees can enter new sales data while out of the office, and
then merge their new records with the existing central data when they reconnect to the network.
Choosing a topology
Different topologies enable you to customize your
replication system to meet your explicit
requirements:
Central publisher and distributor. In this model, you use one server to act as the publisher and
distributor for one or more publications to one or more subscribers. If you find that the load on the
server becomes unmanageable, you can use a separate distributor server located alongside the
publisher. This configuration is easy to set up, maintain, and troubleshoot.
Republisher for slow link. If all the subscribers in your system are connected by a fast network link,
but are separated from the publisher and distributor by a slow link, you can add one or more
republishers to the system that are subscribers to the central publisher and republishers to further
subscribers. This configuration can improve performance because it reduces the network traffic over a
slow link.
For example, you could have a central publisher and distributor in Seattle that propagates data to all the
North American regional offices as well as to one office in Europe and another in Asia. Then the servers in
Europe and Asia can act as republishers for all the regional offices in their own continents.
Republisher for scale-out. If all subscribers in your system subscribe to publications on a central
publisher/distributor, the load on that central server may impede performance at peak times. You can
improve this by distributing subsets of partitioned data to middle-tier subscribers in Europe and Asia,
which republish their region-specific data to other subscribers in their region.
By partitioning the data, you can support many more subscribers at each republisher because the
publication size is smaller than the original table.
Central subscriber with partitioned data. An alternative strategy is to create a central subscriber
with multiple publishers. In this scenario, each autonomous site is responsible for its own data to
ensure data consistency. The data is synchronized to the central subscriber and consolidated with
data from all other sites.
The simplest way to implement this is to add a location-specific key to filter the data on the central
subscriber to help ensure that only rows in that partition are affected during the synchronization process.
You may choose to use several different types and topologies of replication working together to meet
your needs. One database can have many publications, each with its own replication type and topology as
determined by the scenario that it supports.
When designing your replication system, you must also decide where to locate your distribution database.
The location that you choose can impact the efficiency of your configuration, so it is worth spending the
time up front to make a well-informed choice.
Local distributor. When using a local distributor, you place the distribution database on the same
server instance as your publisher database. Typically, local distributors are used for merge replication.
This is because merge replication generally uses pull subscriptions with the replication agents running
at the subscribers, resulting in less overhead on the distributor.
Remote distributor. When using a remote distributor, you place the distribution database on a
separate server to the publisher database. In transactional replication, all replicated transactions are
written to and read from the distribution database, so there is a greater overhead on the distributor.
By separating it from the publisher, you can improve performance by providing dedicated resources
to both the publication and distribution process. However, in a system with a large number of
transactions, you may find that network traffic increases.
Push subscriptions are recommended when the additional workload that they produce can be handled at
the distributor without impacting the overall performance of the system. Pull subscriptions are
recommended for publications with a large number of subscriptions.
Employees at the regional offices of Adventure Works need to be able to update and insert records in the
HumanResources database. Employees in the regional Human Resources departments need the ability to
update staff records and the other employees should be able to enter their own timesheet data. To enable
this functionality, the database administration team plan to implement replication.
Objectives
After completing this lab, you will have:
You have conducted interviews with the key stakeholders and will use this information to plan your
replication strategy. Employees in the central Human Resources department want to maintain a master
copy of all the relevant data and have requested that confidential information, such as pay rates, is not
shared with the regional offices. Updates to this type of confidential data will be made at the company
headquarters.
The main tasks for this exercise are as follows:
1. Prepare the Lab Environment
2. Plan a Replication Strategy
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. Review the statements in the document, discuss with your group the key factors that will influence
your choice of replication strategy for the HumanResources database, and then decide on an
appropriate replication strategy.
3.
Having planned a replication strategy for the HumanResources database, you will now implement it.
The main tasks for this exercise are as follows:
1. Create a Publication
2. Create a Subscription
3. Test Replication Configuration
1. In the MIA-SQL instance, use the New Publication Wizard to create a new Merge publication for the
HumanResources database by using the following configuration settings:
o For all agents, use the account ADVENTUREWORKS\ServiceAcct with the password Pa$$w0rd.
2. In MIA-SQL\SQL2, use the New Subscription Wizard to create a subscription that uses the
publication that you created in the preceding exercise by using the following settings:
o For the Merge Agent, use the account ADVENTUREWORKS\ServiceAcct, with the password
Pa$$w0rd.
1. In the MIA-SQL\SQL2 instance, use the Transact-SQL script in the Test Replication.sql document in
the D:\Labfiles\Lab13\Starter folder to test that the data replicated to the subscriber correctly.
2. Use the Transact-SQL script in the Test Replication.sql document to add a new row, and then
observe the synchronization process.
3. Connect to MIA-SQL and use the following Transact-SQL statement to test that the data merged
successfully:
USE HumanResources;
GO
SELECT * FROM Payment.Timesheet;
GO
4. After you have verified that replication is working as expected, in the D:\Labfiles\Lab13\Starter folder,
run Cleanup.cmd as Administrator.
In this module, you learned about SQL Server replication and the agents used to implement it. You also
learned about some common replication scenarios and how to design an appropriate replication system
for your needs.
Review Question(s)
Question: What are the main reasons you can think of for implementing SQL Server
replication?
Course Evaluation
Your evaluation of this course will help Microsoft understand the quality of your learning experience.
Please work with your training provider to access the course evaluation form.
Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
1. Ensure that the MSL-TMG, 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running, and
then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. In the D:\Labfiles\Lab01\Starter folder, right-click Setup.cmd and then click Run as administrator.
3. Click Yes when prompted to confirm that you want to run the command file, and then wait for the
script to finish.
2. Click the link to download the latest version of the MAP Toolkit.
3. Follow the instructions to download and install the MAP Toolkit on the MIA-SQL server.
1. On the Start screen, type MAP and then click Microsoft Assessment and Planning Toolkit. When
prompted, click Yes.
2. In the Microsoft Assessment and Planning Toolkit dialog box, in the Create or select a database
area, in the Name field, type MAPData, and then click OK.
3. Under Scenarios Available, click the Database tab, and then review the information.
4.
5. In the Inventory and Assessment Wizard dialog box, on the Inventory Scenarios page, click the
SQL Server with Database Details check box, and then click Next.
6. On the Discovery Methods page, ensure that only the Use Active Directory Domain Services (AD
DS) check box is selected, and then click Next.
7. On the Active Directory Credentials page, enter the following details and then click Next:
o Domain: adventureworks.msft
o Password: Pa$$w0rd
8. On the Active Directory Options page, ensure that Find all computers in all domains, containers,
and organizational units is selected, and then click Next.
9. On the All Computers Credentials page, click Create, in the Account Entry dialog box, in the
Account Name field, type ADVENTUREWORKS\Student, in the Password field type Pa$$w0rd, in
the Confirm password field, type Pa$$w0rd. In the Applies to box, ensure that only the WMI and
SQL Windows check boxes are selected, click Save, and then click Next.
Click Database, and review the information on the SQL Server Products tile.
2.
Click the SQL Server Products tile, and view the summary details that are displayed.
3.
Results: At the end of this exercise, you will have used the MAP Toolkit to discover details of SQL Server
instances in the domain.
Start Excel and open the SqlServerAssessment.xlsx workbook in the D:\Labfiles\Lab01\Starter folder.
2.
View the information on the Summary worksheet, noting the number of instances of each major SQL
Server component that was found in the Adventure Works infrastructure.
3.
View the information on the DatabaseInstances worksheet, noting the various versions, service pack
levels, and editions that were found.
4.
View the information on the Components worksheet, noting the various versions, service pack levels,
and editions of other SQL Server components that were found.
2.
View the information on the Overview worksheet, noting the number of instances and databases that
were found in the Adventure Works infrastructure.
3.
View the information on the SQLServerSummary worksheet, noting the number of instances and
databases that were found in each server.
4.
View the information on the DatabaseSummary worksheet, noting the details of the individual
databases that were found.
5.
View the information on the DBInstanceSummary worksheet, noting the details of the database
engine that were found.
2.
View the information on the Overview worksheet, noting the number of instances of each licensed
SQL Server product that were found in the Adventure Works infrastructure.
3.
View the information on the SQL Server License Tracking worksheet, noting the license details that
were found for each server.
4.
View the information on the SQL Server Summary worksheet, noting the details for each SQL Server
product that was found.
5.
View the information on the SQL Server Instance Details worksheet, noting the details of the SQL
Server instances that were found.
6.
View the information on the Client Access Summary worksheet, noting the details of SQL Server
access by users and devices that were found.
7.
Results: At the end of this exercise, you will have examined MAP Toolkit SQL Server reports.
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the D:\Labfiles\Lab02\Starter folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes, and then wait for the script to finish.
Start SQL Server Management Studio and connect to the MIA-SQL database engine instance using
Windows authentication.
2.
In Object Explorer, expand Management, expand Policy Management, right-click Policies, and then
click Import Policy.
3.
In the Import dialog box, next to the Files to import box, click the ellipsis button.
4.
In the Select Policy dialog box, double-click SQL Server Best Practices, double-click
DatabaseEngine, and then double-click 1033.
5.
In the Select Policy dialog box, click Data and Log File Location.xml, press and hold the Ctrl key,
click Database Auto Shrink.xml, click Guest Permissions.xml, click Trustworthy Database.xml,
release the Ctrl key, and then click Open.
6.
7.
In Object Explorer, expand Policies to view the policies that you just imported.
In Object Explorer, under Policy Management, right-click Conditions, and then click New
Condition.
2.
In the Create New Condition - dialog box, enter the following information, and then press Enter:

Field name    Value
Name          HR Database
Facet         Database
Field         @Name
Operator      =
Value         HumanResources
3.
4.
5.
In the Create New Condition - dialog box, enter the following information, and then press Enter:

Field name    Value
Name          Stored Procedure Name
Facet         Stored Procedure
Field         @Name
Operator      LIKE
Value         usp%
6.
In the Create New Condition Stored Procedure Name dialog box, click OK.
7.
In Object Explorer, expand Conditions to view the conditions that you just created. Note that you
can also see the conditions that were added when you imported the best-practice policies in the
previous task.
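The intent of the two conditions above can also be checked with a plain query. The following sketch is illustrative only (it is not part of the lab steps): it lists any stored procedures in the HumanResources database whose names do not match the usp% pattern, which is exactly what a policy built from these conditions will flag.

```sql
-- Illustrative check only: find procedures in HumanResources whose names
-- do not start with 'usp' (the pattern the Stored Procedure Name
-- condition enforces with the LIKE operator).
USE HumanResources;
GO
SELECT SCHEMA_NAME(schema_id) AS schema_name, name
FROM sys.procedures
WHERE name NOT LIKE 'usp%';
GO
```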
1.
In Object Explorer, under Policy Management, right-click Policies, and then click New Policy.
2.
In the Create New Policy - dialog box, in the Name box, type Stored Procedure Names in Human
Resources DB, and then in the Check condition list, click Stored Procedure Name.
3.
In the Against targets box, under Every StoredProcedure, next to Every, click the down arrow, and
then click HR Database.
4.
In the Evaluation Mode list, click On demand, and then click OK.
2.
In Object Explorer, under Policies, right-click Stored Procedure Names in Human Resources DB,
and then click Evaluate.
3.
4.
In the Evaluate Policies Stored Procedure Names in Human Resources DB dialog box, in the
Target details box, in the Server column, note the name of the SQL Server instance, and then in the
Details column, click View.
5.
In the Results Detailed View window, in the Actual Value column, note the name of the existing
stored procedure that violates the policy, and then click Close.
6.
In the Evaluate Policies Stored Procedure Names in Human Resources DB dialog box, click
Close.
In Object Explorer, right-click Stored Procedure Names in Human Resources DB, and then click
Properties.
2.
In the Open Policy Stored Procedure Names in Human Resources DB dialog box, in the
Evaluation Mode list, click On change: prevent, and then click OK.
3.
In Object Explorer, right-click Stored Procedure Names in Human Resources DB, and then click
Enable.
4.
5.
In the Open File dialog box, navigate to the D:\Labfiles\Lab02\Starter folder, click
CreateStoredProcedure.sql, and then click Open.
6.
7.
In the Results pane, review the message that explains why the CREATE PROCEDURE Transact-SQL
statement failed.
1.
2.
Expand Database Engine, right-click Local Server Groups, and then click New Server Group.
3.
In the New Server Group properties dialog box, enter the group name Adventure Works DB
Servers, and click OK.
4.
Right-click the Adventure Works DB Servers server group and click New Server Registration. Then
in the Server name box, type MIA-SQL, and click Save.
5.
Repeat the previous step to add the MIA-SQL\SQL2 server to the Adventure Works DB Servers
group.
6.
Right-click the Adventure Works DB Servers server group, and click Evaluate Policies.
7.
In the Evaluate Policies Adventure Works DB Servers dialog box, next to the Source box, click
the ellipsis button.
8.
In the Select Source dialog box, click Server, in the Server name box, type MIA-SQL, and then click
OK.
9.
In the Evaluate Policies Adventure Works DB Servers dialog box, in the Policies list, select the
following policy check boxes, and then click Evaluate:
o Data and Log File Location
o Guest Permissions
o Trustworthy Database
10. In the Results box, click Data and Log File Location, and then in the Target details box, review the
results.
11. In the Target Details box, in the Target column, locate the row for the HumanResources database,
and then in the Details column, click View.
12. In the Results Detailed View dialog box, review the results. In the first row, note that the database
violates the policy because it does not have its data and log files on separate drives. In the second
row, note that the evaluation did not report the violation because the size of the database is below
the threshold value of 5120. Click Close.
13. Repeat the previous three steps to view the detailed results for each of the other policies listed in the
Results field.
14. In the Evaluate Policies Adventure Works DB Servers window, click Close, and then close SQL
Server Management Studio. Do not save any changes.
In Object Explorer, in the Policies folder, right-click Backup and Data File Location and click Delete.
Then in the Delete Object dialog box, click OK.
2.
Guest Permissions
Trustworthy Database
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
3.
In the User Account Control dialog box, click Yes, and then wait for the script to finish.
4.
Start SQL Server Management Studio and connect to the MIA-SQL database engine using Windows
authentication.
2.
In Object Explorer, expand Management, right-click Data Collection, click Tasks, and then click
Configure Management Data Warehouse.
3.
4.
On the Configure Management Data Warehouse Storage page, in the Server name field, ensure
that MIA-SQL is displayed, click New, in the New Database dialog box, in the Database name field,
type MDW, click OK, and then click Next.
5.
On the Map Logins and Users page, click Next, and then on the Complete the Wizard page, click
Finish.
6.
7.
In Object Explorer, expand Databases and verify that a database named MDW has been created.
In SQL Server Management Studio, in Object Explorer, expand Management, right-click Data
Collection, click Tasks, and then click Configure Data Collection.
2.
3.
On the Setup Data Collection Sets page, to the right of the Server name field, click the ellipsis
button, in the Connect to Server dialog box, type MIA-SQL, click Connect, and then in the
Database name field, click MDW.
4.
In the Select data collector sets you want to enable field, click the System Data Collection Sets
check box, and then click Next.
5.
On the Complete the wizard page, click Finish, and then when configuration is complete, click
Close.
In SQL Server Management Studio, in Object Explorer, under MIA-SQL, under Management, expand
Data Collection, expand System Data Collection Sets, and view the available collection sets.
2.
Right-click Disk Usage, click Collect and Upload Now, and then when the Collect and Upload
Data Set job completes, click Close.
3.
Repeat the previous step for the Query Statistics and Server Activity collection sets.
1.
In SQL Server Management Studio, in Object Explorer, right-click the MDW database, click Reports,
point to Management Data Warehouse, and then click Management Data Warehouse Overview.
2.
On the Management Data Warehouse Overview: MDW page, in the MIA-SQL row, click the link in
the Disk Usage column.
3.
On the Disk Usage Collection Set page, click AdventureWorks, and then on the Disk Usage for
database: AdventureWorks page, review the disk usage statistics.
4.
5.
Keep SQL Server Management Studio open for the next exercise.
Results: At the end of this lab, you will have configured data collection on the MIA-SQL instance of SQL
Server.
In SQL Server Management Studio, on the View menu, click Utility Explorer.
2.
In Utility Explorer, on the Getting Started tab, click Create a Utility Control Point (UCP).
3.
In the Create a Utility Control Point wizard, on the Introduction page, review the information, and
then click Next.
4.
5.
In the Connect to Server dialog box, in the Server name box, type MIA-SQL\SQL2, and then click
Connect.
6.
On the Specify the Instance of SQL Server page, in the Utility Control Point Name box, note that
the default name is Utility, and then click Next.
7.
On the Utility Collection Set Account page, select Use the SQL Server Agent service account and
click Next.
8.
On the SQL Server Instance Validation page, wait for the validation operations to complete, and
then click Next.
9.
On the Summary of UCP Creation page, review the information, and then click Next.
10. On the Utility Control Point Creation page, wait for the creation operations to complete, and then
click Finish.
11. In the Utility Explorer Content tab, note that there is 1 managed instance, but no data has been
collected yet.
12. On the Getting Started tab, note the Enroll instance of SQL Server with a UCP link; you can use
this to enroll additional servers in the UCP.
In SQL Server Management Studio, in Object Explorer, in the Connect drop-down list, click Database
Engine.
2.
3.
In Object Explorer, under MIA-SQL\SQL2, expand SQL Server Agent, and then expand Jobs.
4.
5.
In the Start Jobs MIA-SQL\SQL2 dialog box, wait until the job completes, and then click Close.
6.
7.
In the Start Jobs on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
8.
9.
In the Start Job on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
In the Utility Explorer pane, right-click Utility (MIA-SQL\SQL2) and click Refresh.
2.
In the Utility Explorer pane, note that the Managed Instance Health chart now shows a single
instance that is well utilized.
2.
3.
Under Specify the file space utilization policies for all managed instances of SQL Server, in the
Disk space of a data file is overutilized when it is greater than box, type 50, and then click Apply.
In the D:\Labfiles\Lab03\Starter folder, right-click Fill DB.cmd and click Run as administrator. Click
Yes when prompted to confirm, and wait for the script to complete.
2.
In SQL Server Management Studio, in Object Explorer, under SQL Server Agent, under Jobs, right-click sysutility_mi_collect_performance, and then click Start Job at Step.
3.
In the Start Jobs MIA-SQL\SQL2 dialog box, wait until the job completes, and then click Close.
4.
5.
In the Start Jobs on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
6.
7.
In the Start Job on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
8.
9.
In the Start Jobs - MIA-SQL\SQL2 dialog box, wait until the job completes, and then click Close.
In the Start Job on MIA-SQL\SQL2 dialog box, click Start, wait until the job completes, and then
click Close.
11. In the Utility Explorer pane, right-click Utility (MIA-SQL\SQL2) and click Refresh.
In the Utility Explorer pane, note that the Managed Instance Health chart now shows a single
instance that is overutilized.
In the Managed Instances with Overutilized Resources section, click Overutilized Database Files.
In the top pane of the Utility Explorer Content window, note the File Space icon for the MIA-SQL\SQL2 instance. Then, with the MIA-SQL\SQL2 instance selected, in the bottom pane, click the
Storage Utilization tab.
Expand the Sales database and the PRIMARY filegroup, and note that the SalesDB.mdf file is full.
12. Close SQL Server Management Studio.
Results: After completing this exercise, you will have created a UCP on the MIA-SQL\SQL2 instance of SQL
Server.
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the D:\Labfiles\Lab04\Starter folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes to confirm that you want to run the command file,
and then wait a few minutes for the script to finish.
Start SQL Server Management Studio, and connect to the MIA-SQL database engine using Windows
authentication.
2.
3.
In the query window, select the code under the comment Create resource pools and click Execute.
This code creates two resource pools, named Low Priority and High Priority.
4.
In the query window, select the code under the comment Create workload groups and click
Execute. This code creates a workload group named ResellerSalesWG that uses the Low Priority
resource pool, and a workload group named InternetSalesWG that uses the High Priority resource
pool.
5.
Select the code under the comment Reconfigure resource governor and click Execute. This
reconfigures Resource Governor, enabling the resource pools and workload groups you have created.
6.
In the query window, select the code under the comment Create classifier function and click
Execute. This code creates a function that returns the appropriate workload group name for the
current session, based on the name of the database to which the connection has been made.
7.
Select the code under the comment Add classifier function to resource governor and click
Execute. This reconfigures Resource Governor so that the function you created is used as the classifier
function for all future connections.
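The lab script performs these steps for you, but as a hedged sketch, Transact-SQL along the following lines would produce an equivalent configuration. The MAX_CPU_PERCENT values and the database names in the classifier are illustrative assumptions, not the values used by the lab script:

```sql
-- Sketch: two resource pools and their workload groups.
CREATE RESOURCE POOL [Low Priority] WITH (MAX_CPU_PERCENT = 30);   -- assumed value
CREATE RESOURCE POOL [High Priority] WITH (MAX_CPU_PERCENT = 70);  -- assumed value
CREATE WORKLOAD GROUP ResellerSalesWG USING [Low Priority];
CREATE WORKLOAD GROUP InternetSalesWG USING [High Priority];
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Sketch: a classifier function that maps each new session to a workload
-- group based on the database it connected to (database names assumed).
-- Classifier functions must live in master and be schema-bound.
USE master;
GO
CREATE FUNCTION dbo.fn_classify_workload()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE ORIGINAL_DB_NAME()
        WHEN 'ResellerSales' THEN N'ResellerSalesWG'
        WHEN 'InternetSales' THEN N'InternetSalesWG'
        ELSE N'default'
    END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classify_workload);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```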
8.
2.
In Computer Management, in the pane on the left, expand Performance, expand Monitoring Tools,
and then click Performance Monitor.
3.
If any counters are listed under the chart, select them and press Delete so that the chart is blank.
4.
5.
In the Add Counters dialog box, in the list of objects, expand the SQLServer: Resource Pool Stats
object, and then click Avg Disk Write IO (ms). Hold the CTRL key and click the following counters:
o
CPU usage %
6.
In the Instances of selected object list click High Priority, hold the Ctrl key and click Low Priority,
and then click Add. This adds the counters you selected for both resource pool instances.
7.
In the Add Counters dialog box, click OK. Note that Performance Monitor displays the counter
values. Click any of the counters under the chart and press Ctrl+H, and note that this highlights the
currently selected counter in the graph.
8.
Wait for the red line (which indicates the current time) to return to the beginning of the chart, and
then in the D:\Labfiles\Lab04\Starter folder, double-click ResellerWorkload.cmd to start a user query
workload.
9.
Observe the values of the counters in Performance Monitor until the red bar is approximately a third
of the way across the chart.
10. With the Reseller workload still running, in the D:\Labfiles\Lab04\Starter folder, double-click
InternetWorkload.cmd to start the help desk workload. Observe the values of the counters in
Performance Monitor until the red bar is approximately two thirds of the way across the chart.
11. Close the console window for the Internet workload and observe the values of the counters in
Performance Monitor until the red bar is almost all the way across the chart. Then in Performance
Monitor, click the Pause button before the red line reaches the end of the chart.
12. Close the console window for the Reseller workload.
13. View the counters in the chart, and note the following:
o
The CPU control effect % counter shows the extent to which Resource Governor influenced CPU
utilization.
The CPU target and actual usage for the Reseller workload were noticeably reduced during the
period when the Internet workload was running.
Disk write IO was throttled for the low priority workload group, but not for the high
priority group.
Neither resource pool required its full allocation of memory; the workloads were CPU-intensive and required considerable IO to write data, but they were not memory-intensive.
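The utilization figures that Performance Monitor exposes through the SQLServer: Resource Pool Stats counters can also be read from a dynamic management view. A quick, illustrative check (not part of the lab steps):

```sql
-- Per-pool CPU usage as tracked by Resource Governor since the last
-- statistics reset; one row per resource pool, including the built-in
-- internal and default pools.
SELECT name, total_cpu_usage_ms
FROM sys.dm_resource_governor_resource_pools;
```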
In SQL Server Management Studio, in Object Explorer, click Connect Object Explorer.
2.
In the Connect to Server dialog box, in the Server type list, click Database Engine, in the Server
name list, type MIA-SQL\SQL2, in the Authentication list, click Windows Authentication, and then
click Connect.
3.
4.
In Object Explorer, click MIA-SQL, and then on the toolbar, click New Query.
5.
In the query window, type the following Transact-SQL statement, and then click Execute:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
6.
In the query window, under the Transact-SQL statement that you typed in the previous step, type the
following Transact-SQL statement, highlight the statement, and then click Execute:
EXEC sp_configure 'affinity mask', 12;
RECONFIGURE;
GO
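The affinity mask value is a bitmap in which bit n binds the instance to CPU n, so 12 (binary 1100) selects CPU2 and CPU3, 1 selects CPU0, and 2 selects CPU1. As an optional check, not part of the lab steps, the setting now in effect can be confirmed from sys.configurations:

```sql
-- Confirm the affinity mask currently in effect.
-- 12 = binary 1100 -> CPU2 and CPU3.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'affinity mask';
```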
7.
8.
9.
In the Server Properties - MIA-SQL dialog box, on the Processors page, in the Enabled processors
section, expand the ALL node, expand NumaNode0, and note that MIA-SQL has an affinity with
CPU2 and CPU3. Then click Cancel.
10. In Object Explorer, click MIA-SQL\SQL2, and then on the toolbar, click New Query.
11. In the query window, type the following Transact-SQL statement, and then click Execute:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
12. In the query window, under the Transact-SQL statement that you typed in the previous step, type the
following Transact-SQL statement, highlight the statement, and then click Execute:
EXEC sp_configure 'affinity mask', 1;
RECONFIGURE;
GO
15. In the Server Properties - MIA-SQL\SQL2 dialog box, on the Processors page, in the Enabled processors
section, expand the ALL node, expand NumaNode0, and note that MIA-SQL\SQL2 has an affinity
with CPU0. Then click Cancel.
16. In Object Explorer, click MIA-SQL\SQL3, and then on the toolbar, click New Query.
17. In the query window, type the following Transact-SQL statement, and then click Execute:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
18. In the query window, under the Transact-SQL statement that you typed in the previous step, type the
following Transact-SQL statement, highlight the statement, and then click Execute:
EXEC sp_configure 'affinity mask', 2;
RECONFIGURE;
GO
21. In the Server Properties - MIA-SQL\SQL3 dialog box, on the Processors page, in the Enabled processors
section, expand the ALL node, expand NumaNode0, and note that MIA-SQL\SQL3 has an affinity
with CPU1. Then click Cancel.
22. Leave SQL Server Management Studio open for the next task.
In the Configure MIA-SQL\SQL2 query window, under the existing Transact-SQL statements, type
the following Transact-SQL statement, highlight it, and then click Execute:
EXEC sp_configure 'max server memory', 1024;
RECONFIGURE;
GO
2.
3.
4.
In the Server Properties - MIA-SQL\SQL2 dialog box, on the Memory page, verify that the maximum
server memory has been set to 1024 MB. Then click Cancel.
5.
In the Configure MIA-SQL\SQL3 query window, under the existing Transact-SQL statements, type
the following Transact-SQL statement, highlight it, and then click Execute:
EXEC sp_configure 'max server memory', 1024;
RECONFIGURE;
GO
6.
7.
8.
In the Server Properties - MIA-SQL\SQL3 dialog box, on the Memory page, verify that the maximum
server memory has been set to 1024 MB. Then click Cancel.
9.
Maximize Computer Management, and in the Performance Monitor pane, under the chart, click the
first counter, hold SHIFT and click the last counter, and press Delete.
2.
3.
In the Add Counters dialog box, in the list of objects, expand the Processor object, and then click %
Processor Time. If the Instances of selected object list is empty, click % Processor Time again. In
the Instances of selected object list click 0, then hold CTRL and click 1, 2, and 3 to select them all.
Then click Add.
4.
In the Add Counters dialog box, in the list of objects, expand the SQLServer: Memory Manager
object, click Total Server Memory (KB), and then click Add.
5.
Repeat the previous steps twice to add the Total Server Memory (KB) counter for the
MSSQL$SQL2: Memory Manager and the MSSQL$SQL3: Memory Manager objects.
6.
7.
In the D:\Labfiles\Lab04\Starter folder, double-click MIA-SQL.cmd, and then view the counters in
Performance Monitor. Note the following, using the Highlight button to help you to view counters if
necessary:
o
The Total Server Memory (KB) counter for the SQLServer: Memory Manager object rises and
falls as memory is required. Note that this may not happen immediately.
8.
9.
In the D:\Labfiles\Lab04\Starter folder, double-click MIA-SQL2.cmd, and then view the counters in
Performance Monitor. Note the following, using the Highlight button to help you to view counters if
necessary:
o
The % Processor Time counter for instance 0 rises. The MIA-SQL\SQL2 instance
has an affinity with CPU0, so the server uses only that core to service the
query.
The Total Server Memory (KB) counter for the MSSQL$SQL2: Memory Manager object rises
and falls as memory is required. Note that this may not happen immediately.
11. In the D:\Labfiles\Lab04\Starter folder, double-click MIA-SQL3.cmd, and then view the counters in
Performance Monitor. Note the following, using the Highlight button to help you to view counters if
necessary:
o
The Total Server Memory (KB) counter for the MSSQL$SQL3: Memory Manager object rises
and falls as memory is required. Note that this may not happen immediately.
12. Close the command window for the MIA-SQL3 query, and then close Computer Management.
13. In the D:\Labfiles\Lab04\Starter folder, right-click CleanUp.cmd, and then click Run as
administrator. Click Yes when prompted to confirm that you want to run the command file, and
then wait for the script to finish.
Results: After completing this exercise, you will have configured processor affinity and memory settings.
Start the 20465C-SQL-VM-Template virtual machines, and then log on as Administrator with the
password Pa$$w0rd.
Run Setup.exe from C:\SQLServer2014-x64-ENU. When prompted to allow the program to make
changes, click Yes.
2.
In the SQL Server Installation Center, on the Advanced page, click Image preparation of a stand-alone instance of SQL Server. The setup program may take a few minutes to start.
3.
On the Microsoft Updates and Product Updates pages, clear any checkboxes and click Next.
4.
On the Prepare Image Rules page, review the report and click Next.
5.
On the License Terms page, select I accept the license terms and click Next.
6.
On the Feature Selection page, select Database Engine Services and Management Tools -
Complete. Then click Next.
7.
8.
On the Instance Configuration page, in the Instance ID box, type SQLTemplate, and click Next.
9.
Results: At the end of this exercise, you will have installed a prepared instance of SQL Server.
In the SQL Server Installation Center, on the Advanced page, click Image completion of a prepared
stand-alone instance of SQL Server.
2.
On the Product Key page, select Evaluation edition and click Next.
3.
On the License Terms page, select I accept the license terms, and click Next.
4.
5.
On the Select Prepared Features page, ensure that SQLTEMPLATE is selected, and click Next.
6.
7.
On the Instance Configuration page, select Default instance, and click Next.
8.
On the Server Configuration page, review the default service accounts and startup types, and click
Next.
9.
On the Database Engine Configuration page, click Add Current User, and click Next.
10. On the Ready to Complete Image page, note the Configuration file path, which indicates the
location where the ConfigurationFile.ini file has been generated, and click Cancel. When prompted
to confirm the cancellation, click Yes.
11. Close the SQL Server Installation Center.
12. Copy the ConfigurationFile.ini file generated by the wizard to C:\.
2.
Make the following modifications to the configuration file, then save it and close Notepad:
o
Add a semi-colon character (;) in front of the UIMODE="Normal" statement so that it resembles
the following:
;UIMODE="Normal"
3.
Open a command prompt and enter the following command to run SQL Server Setup from the
command line with the configuration file generated by the wizard:
C:\SQLServer2014-x64-ENU\Setup.exe /ConfigurationFile=C:\ConfigurationFile.ini
/IAcceptSQLServerLicenseTerms
4.
Wait for setup to complete then close the command prompt window.
5.
On the Start screen, type SQL Server Configuration Manager and start the SQL Server 2014
Configuration Manager app.
6.
In SQL Server Configuration Manager, click the SQL Server Services node and verify that a default
instance of SQL Server (MSSQLSERVER) has been installed and is running.
7.
Results: After completing this exercise, you will have a configuration file that completes the installation of
the prepared SQL Server instance.
Ensure that the MSL-TMG1, 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are running, and
then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
3.
Click Yes when prompted to confirm you want to run the command file, and wait for the script to
finish.
4.
If you have not already created a Microsoft Azure trial subscription, follow the instructions in
D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
2.
Sign in using the Microsoft account that is associated with your Azure subscription.
3.
Note the list of services available in Azure. You can provision and manage each service on its own
page, or you can manage all services from the All Items page.
Click the Storage page, and then click the New icon.
2.
Use the Quick Create option to create a storage account with a unique name. Ensure that the Geo-Redundant Replication option is selected.
3.
Wait for the status of the new storage account to indicate that it is online.
4.
Select the new storage account and click Manage Access Keys to view the access keys that have
been generated for your storage account. Note that you can copy a key to the clipboard from here.
5.
Click the arrow in the Name column for your storage account.
2.
3.
Click the Add icon and create a container named backups with Private access.
4.
Click the arrow next to the container name to verify that it contains no blobs.
1.
2.
When prompted, connect to the MIA-SQL instance of the database engine using Windows
authentication.
3.
Click New Query and enter the following Transact-SQL code to create a credential named
AzureStore. Alternatively, you can open Create Credential.sql in the D:\Labfiles\Lab06\Solution
folder. Replace Storage-Account-Name with the unique name you specified when creating your Azure
storage account, and replace XXXXX-access-key-XXXXX with the primary access key for your Azure
storage account (remember that you can copy this to the clipboard from the Azure management
portal).
USE [master]
GO
CREATE CREDENTIAL AzureStore
WITH IDENTITY = 'Storage-Account-Name',
SECRET = 'XXXXX-access-key-XXXXX';
GO
4.
Click New Query and enter the following Transact-SQL code to back up the Products database to
Azure. Alternatively, you can open Back Up Database.sql in the D:\Labfiles\Lab06\Solution folder.
Replace Storage-Account-Name with the unique name you specified when creating your Azure
storage account.
USE [master]
GO
BACKUP DATABASE Products
TO URL = 'http://Storage-Account-Name.blob.core.windows.net/backups/Products.bak'
WITH CREDENTIAL = 'AzureStore';
GO
2.
3.
In Object Explorer, in the Connect drop-down list, click Azure Storage. Then enter your storage
account name, paste the access key you copied to the clipboard previously, and click Connect.
4.
In Object Explorer, under your storage account, expand Containers, expand the backups container,
and verify that a file named Products.bak has been created there.
2.
Right-click the Products database and click Delete. Then, in the Delete Object dialog box, select
Close existing connections and click OK.
3.
Click New Query and enter the following Transact-SQL code to restore the Products database from
the backup in Azure. Alternatively, you can open Restore Database.sql in the
D:\Labfiles\Lab06\Solution folder. Replace Storage-Account-Name with the unique name you
specified when creating your Azure storage account.
USE [master]
GO
RESTORE DATABASE Products
FROM URL = 'http://Storage-Account-Name.blob.core.windows.net/backups/Products.bak'
WITH CREDENTIAL = 'AzureStore';
GO
4.
5.
In Object Explorer, right-click the Databases folder and click Refresh to verify that the Products
database has been restored.
On the task bar, right-click Windows PowerShell and click Windows PowerShell ISE.
2.
In the PowerShell command-line pane at the bottom of the editor, enter the following command to
identify the Microsoft Azure accounts currently associated with PowerShell:
Get-AzureAccount
3.
If your accounts are listed, in the PowerShell command line pane, enter the following command to
remove each one (replacing <microsoft_account> with the Microsoft account name):
Remove-AzureAccount <microsoft_account> -Force
4.
In the PowerShell command-line pane at the bottom of the editor, enter the following command to
identify the Microsoft Azure subscriptions currently associated with PowerShell:
Get-AzureSubscription
5. If any Microsoft Azure subscriptions are listed, in the PowerShell command-line pane, enter the following command to remove each one (replacing <subscription_name> with the Microsoft Azure subscription name):
Remove-AzureSubscription "<subscription_name>" -Force
6. In the PowerShell command-line pane at the bottom of the editor, enter the following command to obtain a new credentials certificate for your Microsoft Azure subscription:
Get-AzurePublishSettingsFile
7. If you are prompted to sign in to Microsoft Azure, do so using your Microsoft account. Then, when Internet Explorer opens a new tab, in the message bar, in the Save drop-down list, click Save As and save the file as credentials.publishsettings in the D:\Labfiles\Lab06\Starter folder.
8. In the PowerShell editor, in the PowerShell command line, enter the following command to associate your Microsoft Azure account with the PowerShell session:
Import-AzurePublishSettingsFile D:\Labfiles\Lab06\Starter\credentials.publishsettings
9. In the PowerShell command-line pane, enter the following command to verify that your Microsoft Azure subscription is now associated with PowerShell:
Get-AzureSubscription
10. In the PowerShell command-line pane, enter the following command to set the default subscription
and storage account for your PowerShell environment (replacing <your_subscription_name> with the
name of your subscription and <your_storage_account_name> with the name of the storage account
you created in the first exercise of this lab):
Set-AzureSubscription "<your_subscription_name>" -CurrentStorageAccount
"<your_storage_account_name>"
11. In the PowerShell command-line pane, enter the following command to verify that the default
subscription and storage account have been set:
Get-AzureSubscription
12. Keep the Windows PowerShell interactive scripting environment open for the next task.
1. In the PowerShell command-line pane, enter the following command to create a new container named datafiles:
New-AzureStorageContainer datafiles
2. In the PowerShell command-line pane, enter the following command to create a shared access signature for the datafiles container:
New-AzureStorageContainerSASToken -Name datafiles -Permission rw
3. Copy the returned value (which starts after the question mark at the beginning) to the clipboard.
1. In SQL Server Management Studio, click New Query and enter the following Transact-SQL code. Alternatively, you can open SAS Credential.sql in the D:\Labfiles\Lab06\Solution folder. Replace storage_account_name with the name of your storage account, and XXXXX_SAS_Key_XXXXX with the shared access signature token you copied to the clipboard in the previous task (note that the shared access signature token must be on a single line):
USE master
CREATE CREDENTIAL [https://storage_account_name.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'XXXXX_SAS_Key_XXXXX'
2. In SQL Server Management Studio, click New Query and enter the following Transact-SQL code. Alternatively, you can open Create Database.sql in the D:\Labfiles\Lab06\Solution folder. Replace storage_account_name with the name of your storage account:
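The body of Create Database.sql is not reproduced above. A minimal sketch of what such a script would resemble for SQL Server 2014 data files in Azure storage, using a hypothetical database name and illustrative file names:

```sql
-- Hypothetical sketch: the database and file names are illustrative.
-- Replace storage_account_name with the name of your storage account.
CREATE DATABASE CloudData
ON (NAME = CloudData_data,
    FILENAME = 'https://storage_account_name.blob.core.windows.net/datafiles/CloudData_data.mdf')
LOG ON (NAME = CloudData_log,
    FILENAME = 'https://storage_account_name.blob.core.windows.net/datafiles/CloudData_log.ldf');
GO
```

Because the container URL matches the credential created in the previous task, SQL Server uses the shared access signature to create the files in the datafiles container.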
2.
3. In Object Explorer, under your Microsoft Azure storage account, right-click Containers and click Refresh. Then expand the datafiles container and verify that the database files have been created there.
1. Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2. If you have not already created a Microsoft Azure trial subscription, follow the instructions in D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
1. Start Internet Explorer, browse to http://azure.microsoft.com, click Portal, and sign in using the Microsoft account that is associated with your Azure subscription.
2.
3. Create a new SQL Database named CloudDB. Use the default edition, size, and collation to create the database on a new SQL Database server with the login name Student and the password Pa$$w0rd. Specify a suitable region, and allow Azure services to access the server.
4. When the new database is online, on the Servers tab, click the server name that has been generated.
5. On the Configure tab, add the current client IP address to the allowed IP addresses and save the changes to the configuration.
In the Azure management portal, on the SQL Databases page, on the Servers tab, select your server
and click the Manage icon. This opens a new browser tab. If Internet Explorer blocks a pop-up, in the
Options for this site menu, click Always Allow. Then click Manage again.
2.
In the new browser tab, enter the username Student and the password Pa$$w0rd, and log on to the
server.
3.
On the Administration tab, click the Design icon for the CloudDB database.
4.
On the Tables tab, create a new table named Products with the following columns:
o ProductName (nvarchar(50))
o Price (money)
5.
6. Save the table, then on the Data tab, add the following rows to the table:

   ID   ProductName   Price
        Product 1     1.99
        Product 2     2.99
7. Close the Management Portal - SQL Database tab, but keep the original Azure management portal open. When prompted, click Leave this page.
Start SQL Server Management Studio and connect to your Azure SQL database server using SQL
Server authentication. Enter the server name in the format <server_name>.database.windows.net,
enter the login name in the format Student@<server_name>, and enter the password Pa$$w0rd.
2.
In Object Explorer, expand Databases, expand System Databases, right-click master, and click New
Query.
3.
Enter the following Transact-SQL code and click Execute to create a login named AWLogin:
CREATE LOGIN AWLogin
WITH PASSWORD = 'Pa$$w0rd';
GO
4.
In Object Explorer, expand the server-level Security folder and its Logins subfolder, and verify that
AWLogin has been created.
In Object Explorer, right-click the CloudDB database and click New Query.
2.
Enter the following Transact-SQL code and click Execute to create a user named AWUser from the
AWLogin login:
CREATE USER AWUser
FROM LOGIN AWLogin;
GO
3.
In Object Explorer, expand the CloudDB database, expand its Security folder, and expand the Users
subfolder to verify that AWUser has been created.
4.
In the query editor, under the existing code, add the following Transact-SQL code to add AWUser to
the db_datareader role:
EXEC sp_addrolemember 'db_datareader', 'AWUser';
GO
5.
Select the code you entered in the previous step and click Execute.
In the query window that is connected to the master database, enter the following Transact-SQL
code:
SELECT * FROM sys.firewall_rules;
GO
2.
Select the code you entered in the previous step and click Execute to view the server-level firewall
settings.
3.
In the query window that is connected to the CloudDB database, enter the following Transact-SQL
code:
SELECT * FROM sys.database_firewall_rules;
GO
4.
Select the code you entered in the previous step and click Execute to verify that there are no explicit
database-level firewall settings for the CloudDB database.
In Internet Explorer, in the SQL Databases page, on the Servers tab, select your server and click the
Manage icon. This opens a new browser tab.
2.
In the new browser tab, leave the Database text box empty, enter the username AWLogin, and enter
the password Pa$$w0rd. Then try to log on. The connection should fail because AWLogin does not
have an associated user in the master database, and no database was specified.
3.
Close the Management Portal - SQL database tab containing the error, clicking Leave this page if
prompted, and then in the Database box, enter CloudDB and log on as AWLogin with the password
Pa$$w0rd. This time the connection succeeds, because there is a valid user for the AWLogin login in
the CloudDB database.
4.
5.
Click Run and view the results. The user can select data in the table through membership of the
db_datareader role.
6.
7.
Click Run and view the results. The statement fails because the user does not have INSERT permission
on the table.
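The queries executed in steps 5 and 7 are not shown above; assuming the Products table created earlier in the lab, they would resemble:

```sql
-- Step 5: succeeds, because AWUser is a member of db_datareader.
SELECT * FROM Products;

-- Step 7: fails, because AWUser has no INSERT permission on the table.
INSERT INTO Products (ProductName, Price) VALUES ('Product 3', 3.99);
```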
Ensure that the MSL-TMG1, 20465C-MIA-DC, and 20465C-MIA-SQL virtual machines are running, and
then log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
If you have not already created a Microsoft Azure trial subscription, follow the instructions in
D:\Creating a Microsoft Azure Trial Subscription.htm to do so.
Start Internet Explorer and browse to www.microsoft.com/azure, click Portal, and sign in using the
Microsoft account that is associated with your Azure subscription.
2.
On the Storage page, if you have an existing storage account, note which region it is in.
3.
4.
Use the From Gallery option and select an image that includes SQL Server 2014 Standard on
Windows Server 2012 R2.
5.
Specify the virtual machine name MIA-SQLVM, select the Standard tier and a size that includes 2
cores, and specify the user name Student and the password VMPa$$w0rd.
6.
Select the option to create a new cloud service, changing the default cloud service DNS name to
something unique if necessary.
7.
If you have an existing storage account, create the virtual machine in the same region and use the
existing storage account. Otherwise you can create the virtual machine in any geographical region
and use an automatically generated storage account. Do not create an availability set.
8.
Review the default endpoint configuration, noting that remote desktop and PowerShell access are enabled. Complete the wizard, adding the VM Agent but no additional extensions, to create the virtual machine.
9.
Wait for the virtual machine status to change to running. Note that the provisioning process can take
a long time.
In the Azure management portal, view the Cloud Services page and note that a cloud service has
been created for the virtual machine.
2.
View the Storage page and note that a storage account has been generated for your virtual machine.
3.
Click the name of the storage account to view its details, and then click the Containers tab, noting
that a container named vhds has been created.
4.
Click the name of the vhds container to view its contents. Note that it contains a virtual hard disk file
for the virtual machine.
5.
On the Virtual Machines tab, click the name of your virtual machine to view its details, and then click
the Dashboard tab. Note that this page provides status information for the virtual machine.
1.
In the Azure management portal, on the dashboard page for your virtual machine, click Attach, and
then click Attach empty disk.
2.
Change the file name of the new disk to Data1, set the size in GB to 10, and set the host cache
preference to Read Only.
3.
When the disk has been attached, attach a second empty disk named Data2. Set the size in GB to 10,
and set the host cache preference to Read Only.
4.
When the second disk has been attached, add a third disk named Logs. Set the size to 10 GB and the
host cache preference to None.
In the Azure management portal, on the Virtual Machines page, select your virtual machine and
click Connect.
2.
In the message informing you that the portal is retrieving the .rdp file, click OK. In the prompt to
open or save the .rdp file, click Open.
3.
If a message box informs you that the publisher of the remote connection can't be identified, click Connect.
4.
When prompted to enter your credentials, use the MIA-SQLVM\Student account with the password
VMPa$$w0rd.
5.
If a message box informs you that the identity of the remote computer can't be verified, click Yes.
6.
If you are prompted to find PCs, devices, and content, click Yes.
7.
Wait for Server Manager to start, and view the information on the Local Server page. Then minimize
Server Manager.
8.
Right-click the Start button and click Disk Management. If you are prompted to initialize disks, click
OK.
9.
In Disk Management, right-click Disk 2 and click New Simple Volume. Then use the wizard to create
a volume that uses all the space on the disk. Assign the drive letter M and format the disk as NTFS
with a 64K allocation unit size and the label Data 1.
10. Repeat the previous step for Disk 3, creating a volume that is assigned to drive letter N with a 64K
allocation unit size and the label Data 2.
11. Repeat the previous step for Disk 4, creating a volume that is assigned to drive letter L with a 64K
allocation unit size and the label Logs.
12. Close Disk Management.
13. If any dialog boxes prompting you to format disks are displayed, click Cancel in them.
14. Maximize Server Manager for the next exercise.
In the remote desktop session to the Azure virtual machine, in Server Manager, on the Local Server
page, click the status for Windows Firewall.
2.
3.
In Windows Firewall with Advanced Security, select the Inbound Rules page and then in the Actions
pane, click New Rule.
4.
In the New Inbound Rule Wizard window, select Port and click Next.
5.
On the Protocols and Ports page, ensure that TCP and Specific local ports are selected, and enter
the port number 1433. Then click Next.
6.
On the Action page, ensure that Allow the connection is selected, and click Next.
7.
On the Profile page, ensure that all profiles are selected and click Next.
8.
On the Name page, enter the name SQL Server Port and click Finish.
In the remote desktop session to the Azure virtual machine, use Windows Explorer to view the disks
on the PC.
2.
Right-click drive L: and click Properties. Then on the Security tab, click Edit.
3.
Click Add. Then enter the name NT Service\MSSQLSERVER and click Check Names. If multiple
users are found, select MSSQLSERVER and click OK. Then click OK.
4.
Select the Allow check box for the Full Control permission and click OK. Then click OK again to close
the disk properties dialog box.
5.
In the remote desktop session to the Azure virtual machine, on the Start screen, type SQL Server
Management and then start the SQL Server Management Studio app.
2.
When prompted, connect to the default database engine on the virtual machine by using Windows
authentication.
3.
4.
In the Server Properties dialog box, on the Security page, select SQL Server and Windows
Authentication mode. Then click OK.
5.
When notified that you must restart the service, click OK.
6.
In Object Explorer, right-click the server and click Restart. When prompted to confirm the restart,
click Yes.
7.
In Object Explorer, expand Security. Then right-click Logins and click New Login.
8.
In the Login - New dialog box, on the General page, enter the login name Student and select SQL Server authentication.
9.
Enter and confirm the password Pa$$w0rd, and clear the Enforce password expiration and User
must change password at next login check boxes.
10. On the Server Roles page, select sysadmin. Then click OK.
11. Close all open windows and sign out of the remote desktop session.
1.
In the Azure management portal, ensure that you are viewing the Dashboard page for your virtual
machine.
2.
3.
Use the wizard to add a stand-alone endpoint. In the Name drop-down list, select the predefined
MSSQL endpoint, which uses the TCP protocol on the public port 1433 and the private port 1433.
4.
When the endpoint has been added, repeat the previous two steps to add another stand-alone
endpoint named SQLCloudSvc with public and private TCP ports 11435. There is no predefined
option for this endpoint.
When the endpoint has been created, start SQL Server Management Studio in the MIA-SQL local
virtual machine.
2.
When prompted, connect to the default instance of SQL Server in your Azure virtual machine by
specifying the following settings:
o Login: Student
o Password: Pa$$w0rd
3. Keep SQL Server Management Studio open for the next exercise.
In SQL Server Management Studio, in Object Explorer, right-click Databases and click New
Database.
2.
In the New Database dialog box, on the General page, enter the name Marketing and change the
path of the database log file to L:\.
3.
Under the list of files, click Add. Then add a new file named MarketingData1. In the Filegroup
column, select <new filegroup> and create a new filegroup named MarketingData with the
Default option selected. Change the path for the new file to M:\.
4.
Repeat the previous step to add a new file named MarketingData2 on the MarketingData filegroup
with the path N:\.
5.
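The same database could be created with Transact-SQL instead of the New Database dialog box; a sketch, with illustrative physical file names and default sizes:

```sql
-- Sketch only: the lab performs this through the New Database dialog box,
-- and the physical file names here are illustrative.
CREATE DATABASE Marketing
ON PRIMARY
    (NAME = Marketing, FILENAME = 'M:\Marketing.mdf'),
FILEGROUP MarketingData DEFAULT
    (NAME = MarketingData1, FILENAME = 'M:\MarketingData1.ndf'),
    (NAME = MarketingData2, FILENAME = 'N:\MarketingData2.ndf')
LOG ON
    (NAME = Marketing_log, FILENAME = 'L:\Marketing_log.ldf');
GO
```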
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
3.
In the User Account Control dialog box, click Yes, and then wait for the script to finish.
Start SQL Server Management Studio, and connect to the MIA-SQL database engine instance using
Windows authentication.
2.
In Object Explorer, expand Databases, right-click Products, click Tasks, and then click Ship
Transaction Logs.
3.
In the Database Properties window, on the Transaction Log Shipping page, click the Enable this
as a primary database in a log shipping configuration check box.
4.
Click Backup Settings. In the Network path to backup folder field, type \\MIA-SQL\ProductsBackupFolder.
5.
In the If the backup is located on the primary server, type a local path to the folder field, type
D:\LabFiles\Lab09\Starter\ProductsBackupFolder.
6.
Click Schedule, in the Occurs every field, type 3, and then click OK.
7.
8.
9.
In the Secondary Database Settings window, click Connect, in the Connect to Server dialog box, in
the Server name field, type MIA-SQL\SQL2, and then click Connect.
10. On the Initialize Secondary Database tab, ensure that Yes, generate a full backup of the primary
database and restore it into the secondary database is selected.
11. Click the Copy Files tab. In the Destination folder for copied files field, type \\MIA-SQL\ProductsRestoreFolder.
12. Click Schedule, in the Occurs every field, type 3, and then click OK.
13. On the Restore Transaction Log tab, click Schedule. In the Occurs every field, type 3, and then click
OK.
14. Click Standby mode, and then click the Disconnect users in the database when restoring
backups check box.
15. In the Secondary Database Settings window, click OK, and then in the Database Properties - Products window, click OK.
16. In the Save Log Shipping Configuration dialog box, wait for the configuration to complete, and
then click Close.
1.
In Object Explorer, expand SQL Server Agent, expand Jobs, and note the two log shipping jobs,
which are called LSAlert_MIA-SQL and LSBackup_Products.
2.
In Object Explorer, click Connect, click Database Engine, in the Connect To Server dialog box, in
the Server name field, type MIA-SQL\SQL2, and then click Connect.
3.
In Object Explorer, under MIA-SQL\SQL2, expand SQL Server Agent, expand Jobs, and note the
three log shipping jobs, which are called LSAlert_MIA-SQL\SQL2, LSCopy_MIA-SQL_Products, and
LSRestore_MIA-SQL_Products.
4.
Under the MIA-SQL\SQL2 instance, expand Databases, and note that the Products database shows
as Standby / Read-Only.
5.
6.
In SQL Server Management Studio, in Object Explorer, click MIA-SQL. Then open the
TestPrimary.sql script file in the D:\Labfiles\Lab09\Starter folder.
2.
In the TestPrimary.sql query window, under the comment Query the Product table to identify
products that have a NULL standard cost value, highlight the Transact-SQL statements, and then
click Execute.
3.
Review the results, and note there are two products returned that have no standard cost value.
4.
Under the comment Update the standard cost for products with a NULL standard cost, highlight
the Transact-SQL statement, and then click Execute.
5.
Review the results, and note that two rows were updated.
6.
Under the comment Check that all products with the Subcategory key 14 have a non-NULL
standard cost, highlight the Transact-SQL statement, and then click Execute.
7.
Wait three minutes for the log backups to be shipped to the secondary server.
8.
In Object Explorer, click MIA-SQL\SQL2. Then open the TestSecondary.sql script file in the
D:\Labfiles\Lab09\Starter folder.
9.
In the TestSecondary.sql query window, under the comment View products with a product
subcategory value of 14, highlight the Transact-SQL statements, and then click Execute.
10. Review the results, and note that all rows have a non-NULL value in the StandardCost column,
indicating that the change you made on the primary server has been restored on the secondary
server.
11. Under the comment Update the color value to Red for ProductKey 210, highlight the Transact-SQL statement, and then click Execute.
12. Review the message that states it is not possible to update the read-only database.
13. Close the TestPrimary.sql and TestSecondary.sql query windows, and do not save any changes.
In Object Explorer, click MIA-SQL. Then open the BackupLog.sql script file in the
D:\Labfiles\Lab09\Starter folder.
2.
Review the Transact-SQL code, which backs up the transaction log of the Products database with NORECOVERY, and then click Execute.
3.
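The key statement in BackupLog.sql takes a tail-log backup that leaves the Products database in a restoring state; it would resemble the following, with an illustrative backup file name:

```sql
-- Illustrative file name; the lab script defines the actual path.
BACKUP LOG Products
TO DISK = 'D:\Labfiles\Lab09\Starter\ProductsBackupFolder\Products_tail.trn'
WITH NORECOVERY;
GO
```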
In Object Explorer, click MIA-SQL\SQL2. Then open the RestoreLog.sql script file in the
D:\Labfiles\Lab09\Starter folder.
2.
In the RestoreLog.sql query window, under the comment Restore the final log backup, highlight
the Transact-SQL statement, and then click Execute.
3.
If an error occurs because another backup job has run since you backed up the log manually, perform
the following actions:
o In Object Explorer, under MIA-SQL\SQL2, under SQL Server Agent, right-click LSCopy_MIA-SQL_Products, and then click Start Job at Step.
o In the Start Jobs - MIA-SQL\SQL2 dialog box, wait for the steps to complete, and then click Close.
4.
In Object Explorer, under MIA-SQL, right-click Products, and then click Refresh. Note that the
Products database is now in a Restoring state.
5.
Under the comment Recover the database, highlight the Transact-SQL statement, and then click
Execute.
6.
In Object Explorer, under MIA-SQL\SQL2, right-click Products, and then click Refresh. Note that the
Products database is now online.
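The statement behind the Recover the database comment brings the secondary online without restoring any further backups; it likely resembles:

```sql
RESTORE DATABASE Products WITH RECOVERY;
GO
```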
In Object Explorer, click MIA-SQL\SQL2. Then open the TestFailover.sql script file in the
D:\Labfiles\Lab09\Starter folder.
2.
In the TestFailover.sql query window, under the comment View a record, highlight the Transact-SQL statement, and then click Execute.
3.
In the TestFailover.sql query window, under the comment Update a record, highlight the Transact-SQL statement, and then click Execute.
4.
Close SQL Server Management Studio, and do not save any changes.
5.
2.
In the D:\Labfiles\Lab10\Starter folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes, and wait for the script to finish.
4.
In the D:\Labfiles\Lab10\Starter folder, right-click Setup2.cmd, and then click Run as administrator.
5.
In the User Account Control dialog box, click Yes, and wait for the script to finish.
Task 2: View the Windows Server Failover Cluster configuration in Failover Cluster
Manager
1.
On the Start screen, type Failover, and then start Failover Cluster Manager.
2.
Click MIA-CLUSTER.adventureworks.msft, and then review the information about the Windows
Server Failover Cluster, noting the following points:
o
3.
4.
In the Nodes pane, in the Status column, note that the status of each of the three cluster nodes is
Up.
5.
Expand Storage, and then click Disks. In the Disks pane, in the Status column, note that the status
of Cluster Disk 1 is Online.
6.
7.
In the Cluster Disk 1 Properties dialog box, on the General tab, review the information about the
cluster disk.
8.
Click the Policies tab, and review the failure responses for the disk resource.
9.
Click the Advanced Policies tab, and review the possible owners for the disk resource.
11. Click Networks, and then in the Networks pane, in the Status column, note that the status of the
Cluster Network 1 resource is Up.
1.
2.
In the Validate a Configuration Wizard, on the Before you Begin page, click Next.
3.
On the Testing Options page, click Run all tests (recommended), and then click Next.
4.
On the Review Storage Status page, select the Cluster Disk 1 check box, and then click Next.
5.
6.
Wait for the validation to complete, and then click View Report.
7.
In the Failover Cluster Validation Report, in the Cluster Configuration section, click Validate
Quorum Configuration, review the information, and then click Back to Failover Cluster Validation
Report.
8.
In the Network section, click Validate Network Communication, review the information, and then
click Back to Failover Cluster Validation Report.
9.
Close the Failover Cluster Validation Report, and then in the Validate a Configuration Wizard,
click Finish.
In the C:\SQLServer2014-x64-ENU folder, double-click Setup.exe, and then in the User Account
Control dialog box, click Yes.
2.
Wait a few moments for the SQL Server Installation Center to start, click Installation, and then click
New SQL Server failover cluster installation.
3.
Wait a few moments for the Install a SQL Server Failover Cluster wizard to start. On the Global
Rules page, wait for the rule check to complete. Then on the Microsoft Updates and Product
Updates pages, clear any checkboxes and click Next.
4.
After the Install Setup Files page completes, on the Install Failover Cluster Rules page, wait for the
rules check to complete, review the results, and then click Next.
5.
On the Product Key page, click Next, on the License Terms page click the I accept the license
terms check box, clear the option to turn on Customer Experience Improvement Program (CEIP)
and Error Reporting, and then click Next.
6.
On the Setup Role page, click the SQL Server Feature Installation radio button, and then click
Next.
7.
On the Feature Selection page, click the Database Engine Services check box, and then click Next.
8.
On the Instance Configuration page, in the SQL Server Network Name field, type SQLCLUSTER,
click Named instance, in the Named instance field, type SQL1, and then click Next.
9.
On the Cluster Resource Group page, review the information, and then click Next.
10. On the Cluster Disk Selection page, ensure that the Cluster Disk 1 check box is selected, and then
click Next.
11. On the Cluster Network Configuration page, click the IP Type check box, in the Address column,
type 10.10.0.160, and then click Next.
12. On the Server Configuration page, in the SQL Server Agent row, in the Account Name column,
type ADVENTUREWORKS\ServiceAcct, in the Password column, type Pa$$w0rd, in the SQL Server
Database Engine row, in the Account Name column, type ADVENTUREWORKS\ServiceAcct, in the
Password column, type Pa$$w0rd, and then click Next.
13. On the Database Engine Configuration page, on the Server Configuration tab, click Add Current
User, and then click Next.
14. On the Ready to Install page, click Install. Wait for the installation to complete, on the Complete
page, click Close, and then close the SQL Server Installation Center.
15. Close File Explorer.
2.
In the C:\SQLServer2014-x64-ENU folder, double-click Setup.exe, and then in the User Account
Control dialog box, click Yes.
3.
In SQL Server Installation Center, click Installation, and then click Add node to a SQL Server
failover cluster.
4.
On the Global Rules page, wait for the rule check to complete. Then on the Microsoft Updates and
Product Updates pages, clear any checkboxes and click Next.
5.
After the Install Setup Files page completes, on the Add Node Rules page, wait for the rules check
to complete, review the results, and then click Next.
6.
On the Product Key page, click Next, on the License Terms page click the I accept the license
terms check box, clear the Turn on Customer Experience Improvement Program and Error
Reporting check box, and then click Next.
7.
On the Cluster Node Configuration page, click Next, and then on the Cluster Network
Configuration page, click Next.
8.
On the Service Accounts page, in the SQL Server Agent row, in the Password column, type
Pa$$w0rd, in the SQL Server Database Engine row, in the Password column, type Pa$$w0rd, and
then click Next.
9.
On the Ready to Add Node page, click Install. Wait for the installation to complete, on the
Complete page, click Close, close the SQL Server Installation Center, and then close File Explorer.
10. Log on to the MIA-CLUST3 virtual machine as ADVENTUREWORKS\Student with the password
Pa$$w0rd.
11. Repeat steps 2 to 9 to add MIA-CLUST3 as a node in the failover cluster instance.
2.
3.
In the Roles, pane, click SQL Server (SQL1), and then review the information in the Status and
Owner Node columns.
4.
If the value in the Owner Node column is not MIA-CLUST1, in the Actions pane, under SQL Server
(SQL1), click Move, click Select Node, in the Move Clustered Role dialog box, click MIA-CLUST1,
click OK, and then wait for the role to finish moving to the MIA-CLUST1 node.
5.
In the Actions pane, click Properties. In the SQL Server (SQL1) Properties dialog box, on the
General tab, in the Preferred Owners section, note that MIA-CLUST1, MIA-CLUST2, and MIA-CLUST3 are all possible owners of the resource.
6.
Click the Failover tab, review the failover settings, and then click Cancel.
7.
In Failover Cluster Manager, at the bottom of the SQL Server (SQL1) pane, click the Resources tab,
and then review the information about the resources associated with SQL Server (SQL1).
8.
Minimize Failover Cluster Manager, and then on the taskbar, click SQL Server Management Studio.
9.
In the Connect to Server dialog box, in the Server Name field, type SQLCLUSTER\SQL1, and then
click Connect.
10. In Object Explorer, right-click SQLCLUSTER\SQL1, click Properties, and then in the Server Properties - SQLCLUSTER\SQL1 dialog box, on the General tab, in the IsClustered row, note that the value is True.
11. Click Cancel, and then minimize SQL Server Management Studio.
2.
Start SQL Server Management Studio, and then click New Query.
3.
In the SQLQuery1.sql window, type the following Transact-SQL statements, and then click Execute:
ALTER DATABASE tempdb MODIFY FILE
(NAME=tempdev, FILENAME='C:\tempdb\tempdb.mdf');
GO
ALTER DATABASE tempdb MODIFY FILE
(NAME=templog, FILENAME='C:\tempdb\templog.ldf');
GO
4.
In the Results pane, review the messages and note that the folder must exist on every node in the
cluster, as well as the permissions requirements.
5.
Close the SQLQuery1.sql window, and do not save any changes. Minimize SQL Server Management
Studio.
6.
On the MIA-CLUST2 virtual machine, on the taskbar, click File Explorer, browse to C:\tempdb, and
note that the folder is empty. This is because the service must restart for SQL Server to move the
tempdb files.
2.
On the MIA-CLUST1 virtual machine, on the taskbar, click Failover Cluster Manager, under MIA-CLUSTER.adventureworks.msft, click Roles, and then in the Roles (1) window, click SQL Server (SQL1).
3.
In the Actions pane, under SQL Server (SQL1), click Move, click Select Node, in the Move
Clustered Role dialog box, click MIA-CLUST2, and then click OK.
4.
Wait for the move to complete, and then minimize Failover Cluster Manager.
5.
On the MIA-CLUST2 virtual machine in File Explorer, in C:\tempdb, note that the folder now
contains the tempdb database files. This is because moving the role to the MIA-CLUST2 node forced
the recreation of the tempdb database on that node.
2.
In the User Account Control dialog box, click Yes, and wait for the script to finish.
3.
Start SQL Server Management Studio, and in Object Explorer, expand Databases, expand
HumanResources, expand Tables, right-click dbo.Employee, and then click Select Top 1000 Rows.
4.
Check that the query returns the rows without any errors.
5.
On the host computer, in Hyper-V Manager, right-click 20465C-MIA-FCI-CLUST2, and click Turn
Off. Then if the Turn Off Machine dialog box is displayed, click Turn Off. This action simulates
failure of the node that is the owner of the SQL Server (SQL1) role.
6.
On the MIA-CLUST1 virtual machine, on the taskbar, click Failover Cluster Manager, click Roles, and
then watch the SQL Server (SQL1) role fail over from the MIA-CLUST2 node to either the MIA-CLUST3
node or the MIA-CLUST1 node.
7.
When failover has finished, in the lower part of the window, click the Resources tab, and review the
available resources, checking that they all report a status of Online.
8.
In SQL Server Management Studio, in the query window, highlight the Transact-SQL statement that
returns the top 1000 rows from the dbo.Employee table, and then click Execute. If the server
reports a transport error, click Execute again.
9.
1.
On the taskbar, click Failover Cluster Manager, right-click the SQL Server (SQL1) role, and then
click Properties.
2.
In the SQL Server (SQL1) Properties dialog box, on the General tab, in the Preferred Owners box,
click the MIA-CLUST2 check box, click the Failover tab, click the Allow Failback radio button, ensure
that the Immediately radio button is selected, and then click OK.
3.
On the host computer, in Hyper-V Manager, right-click 20465C-MIA-FCI-CLUST2, and then click
Start.
4.
On the MIA-CLUST1 virtual machine, in Failover Cluster Manager, click Roles, and then wait for the
SQL Server (SQL1) role to fail back to the MIA-CLUST2 node.
5.
Close Failover Cluster Manager, and then close SQL Server Management Studio. Do not save any
changes if prompted to do so.
6.
On the host machine, in Hyper-V Manager, right-click 20465C-MIA-FCI-CLUST1, and then click Shut
Down.
7.
o 20465C-MIA-FCI-CLUST2
o 20465C-MIA-FCI-CLUST3
Start only the 20465C-MIA-DC, 20465C-MIA-AG-CLUST1, 20465C-MIA-AG-CLUST2, and 20465C-MIA-AG-CLUST3 virtual machines. Then log on to 20465C-MIA-AG-CLUST1 as
ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the D:\Labfiles\Lab11\Starter folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes, and wait for the script to finish.
On the Start screen, type Failover. Then start the Failover Cluster Manager app.
2.
In Failover Cluster Manager, in the left pane, expand MIA-CLUSTER.adventureworks.msft, and then
click Nodes. Verify that the following cluster nodes are listed with a status value of Up:
o MIA-CLUST1
o MIA-CLUST2
o MIA-CLUST3
3.
4.
On the Start screen, type SQL Server. Then start the SQL Server 2014 Configuration Manager app.
5.
6.
In the left pane, click SQL Server Services, and then in the right pane, double-click SQL Server
(MSSQLSERVER).
7.
In the SQL Server (MSSQLSERVER) Properties dialog box, on the AlwaysOn High Availability tab,
verify that AlwaysOn Availability Groups are enabled for the MIA-CLUSTER failover cluster, and then
click Cancel. This setting has been applied to all the servers in the cluster.
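You can also confirm the setting from a query window; SERVERPROPERTY reports whether the AlwaysOn feature is enabled on an instance:

```sql
-- Returns 1 if AlwaysOn Availability Groups is enabled on the instance, 0 if not.
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;
```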
8.
Start SQL Server Management Studio and connect to the MIA-CLUST1 database engine instance
using Windows authentication.
2.
In Object Explorer, expand Databases, right-click the HumanResources database, point to Tasks,
and then click Back Up.
3.
In the Back Up Database HumanResources dialog box, in the Destination list, click the existing
backup file path, click Remove, and then click Add.
4.
In the Select Backup Destination dialog box, in the File name box, type
D:\Labfiles\Lab11\Starter\DataShare\HumanResources.bak, and then click OK.
5.
In the Back Up Database HumanResources dialog box, in the Backup type list, ensure that Full is
selected, and then click OK.
6.
In the Microsoft SQL Server Management Studio dialog box, click OK.
7.
Repeat steps 3 to 7 to perform a full database backup of the ResellerSales database, using the
backup path D:\Labfiles\Lab11\Starter\DataShare\ResellerSales.bak.
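The same full backups can be taken with Transact-SQL; a sketch equivalent to the steps above (WITH INIT overwrites any existing backup sets in the destination file):

```sql
-- Full database backups to the shared folder used for data synchronization.
BACKUP DATABASE HumanResources
TO DISK = 'D:\Labfiles\Lab11\Starter\DataShare\HumanResources.bak'
WITH INIT;

BACKUP DATABASE ResellerSales
TO DISK = 'D:\Labfiles\Lab11\Starter\DataShare\ResellerSales.bak'
WITH INIT;
```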
In SQL Server Management Studio, in Object Explorer, expand AlwaysOn High Availability, right-click
Availability Groups, and then click New Availability Group Wizard.
2.
In the New Availability Group wizard, on the Introduction page, click Next.
3.
On the Specify Availability Group Name page, in the Availability group name box, type
MIA-SQL-AG, and then click Next.
4.
On the Select Databases page, select the HumanResources and ResellerSales database check
boxes, and then click Next.
5.
On the Specify Replicas page, on the Replicas tab, click Add Replica.
6.
In the Connect to Server dialog box, in the Server name box, type MIA-CLUST2, in the
Authentication list, ensure Windows Authentication is selected, and then click Connect.
7.
8.
On the Replicas tab, select the Automatic Failover check box for MIA-CLUST1 and MIA-CLUST2.
This automatically selects the Synchronous Commit check box for these replicas.
9.
On the Replicas tab, in the Readable Secondary list for MIA-CLUST2, click Read-intent only.
10. On the Replicas tab, in the Readable Secondary list for MIA-CLUST3, click Yes.
11. Review the default settings on the Endpoints and Backup Preferences tabs, and then click the
Listener tab.
12. Click Create an availability group listener, and then specify the following settings:
o Port: 1433
13. Click Add, and in the Add IP Address dialog box, in the IPv4 Address box, type 10.10.0.40, and
then click OK.
Note: If Add is not visible, you may need to maximize the dialog box or increase the screen resolution of
the virtual machine.
14. On the Specify Replicas page, click Next.
15. On the Select Initial Data Synchronization page, ensure that Full is selected.
In the Specify a shared network location accessible by all replicas box, type
\\MIA-CLUST1\DataShare, and then click Next.
17. On the Validation page, review the validation results, and then click Next.
18. On the Summary page, click Finish.
19. On the Results page, you may see a warning about the cluster quorum. This will not affect the lab. If
you do see this warning, review it, and then click Close.
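Behind the scenes, the wizard issues a CREATE AVAILABILITY GROUP statement on the primary replica. A simplified sketch of the group configured above (the endpoint URLs and port 5022 are assumptions based on the default database mirroring endpoint):

```sql
CREATE AVAILABILITY GROUP [MIA-SQL-AG]
FOR DATABASE HumanResources, ResellerSales
REPLICA ON
    N'MIA-CLUST1' WITH (
        ENDPOINT_URL = N'TCP://MIA-CLUST1.adventureworks.msft:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'MIA-CLUST2' WITH (
        ENDPOINT_URL = N'TCP://MIA-CLUST2.adventureworks.msft:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)),  -- read-intent only
    N'MIA-CLUST3' WITH (
        ENDPOINT_URL = N'TCP://MIA-CLUST3.adventureworks.msft:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));        -- fully readable secondary
```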
In Object Explorer, expand Availability Groups, expand MIA-SQL-AG (Primary) and all its
subfolders.
2.
3.
Wait for the dashboard to connect to the availability group, and then review the status information.
In the Availability replica table, note that the replicas MIA-CLUST1 and MIA-CLUST2 should show
as Synchronized and MIA-CLUST3 should show as Synchronizing.
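The same synchronization states are available from the dynamic management views, if you prefer a query to the dashboard:

```sql
-- One row per database per replica, with its current synchronization state.
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
```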
4.
Minimize SQL Server Management Studio. You will use it again in a later exercise.
In the lower-left corner of the taskbar, click Start, type cmd, and then click Command Prompt.
2.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-SQL-CLUST availability group listener, and then press Enter:
sqlcmd -E -S MIA-SQL-CLUST
3.
At the command prompt, type the following commands to verify that the SQLCMD session is
connected to the primary replica (MIA-CLUST1):
SELECT @@ServerName
GO
4.
At the command prompt, type the following commands to retrieve rows from the Employee table in
the HumanResources database, and then view the results:
SELECT * FROM HumanResources.dbo.Employee
GO
5.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
6.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST3 replica, and then press Enter:
sqlcmd -E -S MIA-CLUST3
2.
At the command prompt, type the following commands to retrieve rows from the Employee table in
the HumanResources database, and then view the results:
SELECT * FROM HumanResources.dbo.Employee
GO
3.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
4.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST2 replica, and then press Enter:
sqlcmd -E -S MIA-CLUST2
2.
At the command prompt, type the following commands to attempt to retrieve rows from the
Employee table in the HumanResources database:
SELECT * FROM HumanResources.dbo.Employee
GO
3.
4.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
5.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-CLUST2 replica with a read-intent connection, and then press Enter:
sqlcmd -E -S MIA-CLUST2 -K ReadOnly
6.
At the command prompt, type the following commands to attempt to retrieve rows from the
Employee table in the HumanResources database, and then view the results:
SELECT * FROM HumanResources.dbo.Employee
GO
7.
At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
8.
Minimize the command prompt window. You will use it again in the next exercise.
Maximize SQL Server Management Studio and view the dashboard for the MIA-SQL-AG availability
group.
2.
3.
In the Fail Over Availability Group: MIA-SQL-AG wizard, on the Introduction page, click Next.
4.
On the Select New Primary Replica page, select the MIA-CLUST2 check box, and then click Next.
5.
6.
7.
8.
9.
10. In the dashboard, note that the primary instance is now MIA-CLUST2 and that MIA-CLUST1 is a
secondary replica.
11. Minimize SQL Server Management Studio.
12. At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-SQL-CLUST availability group listener, and then press Enter:
sqlcmd -E -S MIA-SQL-CLUST
13. At the command prompt, type the following commands to verify that the SQLCMD session is
connected to the new primary replica (MIA-CLUST2):
SELECT @@ServerName
GO
14. At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
15. Keep the command prompt open for the next task.
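The wizard performs a planned manual failover, which can also be scripted; the statement must be run on the replica that is to become the new primary (MIA-CLUST2 in this case):

```sql
-- Run on the target synchronous secondary replica to make it the new primary.
ALTER AVAILABILITY GROUP [MIA-SQL-AG] FAILOVER;
```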
On the taskbar, click SQL Server Management Studio, and then in Object Explorer, click Connect
Object Explorer.
2.
In the Connect to Server dialog box, in the Server name box, type MIA-CLUST2, in the
Authentication list, ensure Windows Authentication is selected, and then click Connect.
3.
4.
In the User Account Control dialog box, click Yes, and then in the Microsoft SQL Server
Management Studio dialog box, click Yes.
5.
If a second Microsoft SQL Server Management Studio dialog box appears, click Yes.
6.
7.
8.
At the command prompt, type the following command to open a SQLCMD session and connect to
the MIA-SQL-CLUST availability group listener, and then press Enter:
sqlcmd -E -S MIA-SQL-CLUST
9.
At the command prompt, type the following commands to verify that automatic failover has resulted
in MIA-CLUST1 resuming the primary replica role:
SELECT @@ServerName
GO
10. At the command prompt, type the following command to exit the SQLCMD session, and then press
Enter:
Exit
11. Close the command prompt, but leave SQL Server Management Studio open.
In SQL Server Management Studio, in Object Explorer, right-click MIA-CLUST2, and then click Start.
2.
3.
In the MIA-SQL-AG Availability Group dashboard, click View Cluster Quorum Information.
4.
In the Cluster Quorum Information dialog box, review the information, and note that each of the
three nodes in the cluster has one vote.
5.
2.
Click MIA-CLUSTER.adventureworks.msft, in the Actions pane, click More Actions, and then click
Configure Cluster Quorum Settings.
3.
4.
On the Select Quorum Configuration Option page, click the Advanced quorum configuration and
witness selection radio button, and then click Next.
5.
On the Select Voting Configuration page, click Select Nodes, clear the MIA-CLUST3 check box,
and then click Next.
6.
7.
On the Select Quorum Witness page, click the Configure a file share witness radio button, and
then click Next.
8.
On the Configure File Share Witness page, in the File Share Path field, type \\MIADC\WitnessShare, and then click Next.
9.
On the Confirmation page, click Next, wait for the configuration to complete, and then click Finish.
10. In SQL Server Management Studio, on the MIA-SQL-AG dashboard, click View Cluster Quorum
Configuration Information.
11. In the Cluster Quorum Information dialog box, review the information, noting that MIA-CLUST3
does not have a vote and that the file share witness has one vote. Click Close.
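The vote assignments are also exposed to SQL Server through a DMV, so you can verify the new quorum configuration with a query:

```sql
-- Cluster members and their quorum votes; after this change, MIA-CLUST3
-- should show 0 votes and the file share witness 1 vote.
SELECT member_name, member_type_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;
```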
12. Close SQL Server Management Studio, and then close Failover Cluster Manager.
13. In the D:\Labfiles\Lab11\Starter folder, right-click Cleanup.cmd, and then click Run as
administrator. In the User Account Control dialog box, click Yes.
Viewed the quorum configuration by using Failover Cluster Manager and SQL Server Management Studio.
Removed the quorum vote from the MIA-CLUST3 node and configured a file share witness.
Start the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines, and log on to 20465C-MIA-SQL as
ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
Review the current high availability and disaster recovery setup as described in the exercise scenario.
3.
Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
following in your plan:
4.
The details of the configuration. For example, if using a Windows Server Failover Cluster, what
quorum configuration will you use? If using an Availability Group, how will the replicas
synchronize? How many sites will there be?
The anticipated RPO for disaster recovery. This should not be an actual value; approximations
such as "the solution enables recovery to the point of last synchronization" are adequate.
Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
you have chosen the solution that you have. As you listen to your partner's solution, make notes, ask
questions to clarify details, and offer your opinion if you think that you might be able to add to the
solution.
2.
Compare your solutions and decide together which you think would be the best one to present to
management. Alternatively, you might decide on a third solution that is different to the ones that you
discussed in step 1.
On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
Exercise1_Suggested_Solution.doc.
2.
Review the suggested solution and compare it to the one you discussed in the previous task.
1.
Review the current high availability and disaster recovery setup as described in the exercise scenario.
2.
Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
following in your plan:
3.
The details of the configuration. For example, if using a Windows Server Failover Cluster, what
quorum configuration will you use? If using an Availability Group, how will the replicas
synchronize? How many sites will there be?
The anticipated RPO for disaster recovery. This should not be an actual value; approximations
such as "the solution enables recovery to the point of last synchronization" are adequate.
Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
you have made your choices. As you listen to your partner's solution, make notes, ask questions to
clarify details, and offer your opinion if you think you might be able to add to the solution.
2.
Compare your solutions and decide together which you think would be the best one to present to
management. Alternatively, you might decide on a third solution that is different to the ones you
discussed in step 1.
On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
Exercise2_Suggested_Solution.doc.
2.
Review the suggested solution and compare it to the one you discussed in the previous task.
Review the current high availability and disaster recovery setup as described in the exercise scenario.
2.
Devise a plan to update or replace the existing solution to meet the stated requirements. Include the
following in your plan:
o The details of the configuration. For example, if using a Windows Server Failover Cluster, what
quorum configuration will you use? If using an Availability Group, how will the replicas
synchronize? How many sites will there be?
The anticipated RPO for disaster recovery. This should not be an actual value; approximations
such as "the solution enables recovery to the point of last synchronization" are adequate.
3.
Work with a partner. Take turns to describe your proposed solutions to each other, and explain why
you made your choices. As you listen to your partner's solution, make notes, ask questions to clarify
details, and offer your opinion if you think that you might be able to add to the solution.
2.
Compare your solutions and decide together which you think would be the best one to present to
management. Alternatively, you might decide on a third solution that is different to the ones that you
discussed in step 1.
On the MIA-SQL virtual machine, browse to the D:\Labfiles\Lab12\Solution folder, and then double-click
Exercise3_Suggested_Solution.doc.
2.
Review the suggested solution and compare it to the one you discussed in the previous task.
Ensure that the 20465C-MIA-DC and 20465C-MIA-SQL virtual machines are both running, and then
log on to 20465C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa$$w0rd.
2.
In the D:\Labfiles\Lab13\Starter folder, right-click Setup.cmd, and then click Run as administrator.
3.
In the User Account Control dialog box, click Yes, and then wait for the script to finish.
2.
3.
Discuss with your group the key information that will enable you to choose a strategy.
4.
5.
6.
Start SQL Server Management Studio and connect to the MIA-SQL database engine instance using
Windows authentication.
2.
In SQL Server Management Studio, in Object Explorer, expand Replication, right-click Local
Publications, and then click New Publication.
3.
In the New Publication Wizard, on the New Publication Wizard page, click Next.
4.
On the Distributor page, ensure that MIA-SQL will act as its own Distributor is selected, and then
click Next.
5.
On the Snapshot Folder page, review the default location for the snapshot folder, and then click
Next.
6.
On the Publication Database page, click HumanResources, and then click Next.
7.
On the Publication Type page, click Merge publication, and then click Next.
8.
On the Subscriber Types page, ensure that only SQL Server 2008 or later is selected, and then click
Next.
9.
On the Articles page, in the Objects to publish box, select the Tables check box, expand Tables,
clear the following tables, and then click Next:
o EmployeePayHistory (Payment)
10. On the Article Issues page, review the information about uniqueidentifier columns, and then click
Next.
11. On the Filter Table Rows page, click Next.
12. On the Snapshot Agent page, review the settings, and then click Next.
13. On the Agent Security page, click Security Settings.
14. In the Snapshot Agent Security dialog box, in the Process account box, type
ADVENTUREWORKS\ServiceAcct, in the Password and Confirm Password boxes, type Pa$$w0rd,
and then click OK.
15. On the Agent Security page, click Next.
16. On the Wizard Actions page, ensure that only Create the publication is selected, and then click
Next.
17. On the Complete the Wizard page, in the Publication name box, type HumanResourcesPub, and
then click Finish.
18. On the Creating Publication page, wait for the operation to complete, and then click Close.
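The wizard's Create the publication action corresponds to a set of replication stored procedures. A minimal sketch of the core calls (the wizard also adds the articles and creates the Snapshot Agent job, both omitted here):

```sql
-- Enable the database for merge publishing, then create the publication.
EXEC sp_replicationdboption
    @dbname  = N'HumanResources',
    @optname = N'merge publish',
    @value   = N'true';

USE HumanResources;
EXEC sp_addmergepublication
    @publication = N'HumanResourcesPub';
```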
2.
In the Connect to Server dialog box, in the Server name box, type MIA-SQL\SQL2, and then click
Connect.
3.
In Object Explorer, under MIA-SQL\SQL2, expand Replication, right-click Local Subscriptions, and
then click New Subscriptions.
4.
In the New Subscription Wizard, on the New Subscription Wizard page, click Next.
5.
On the Publication page, in the Publisher list, click Find SQL Server Publisher.
6.
In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click
Connect.
7.
8.
On the Merge Agent Location page, ensure that Run each agent at its Subscriber (pull
subscriptions) is selected, and then click Next.
9.
On the Subscribers page, in the Subscription Database column, click New database.
10. In the New Database dialog box, in the Database name box, type HumanResources, and then click
OK.
11. On the Subscribers page, click Next.
12. On the Merge Agent Security page, click the ellipsis button.
13. In the Merge Agent Security dialog box, in the Process account box, type
ADVENTUREWORKS\ServiceAcct, in the Password and Confirm Password boxes, type Pa$$w0rd,
and then click OK.
14. On the Merge Agent Security page, click Next.
15. On the Synchronization Schedule page, in the Agent Schedule column, click Define Schedule.
16. In the New Job Schedule dialog box, in the Frequency area, in the Occurs drop-down list, click
Daily, in the Daily Frequency section, click Occurs every, in the Occurs every drop-down list, click
minute(s), click OK, and then click Next.
17. On the Initialize Subscriptions page, ensure that Immediately is selected, and then click Next.
18. On the Subscription Type page, in the Subscription Type column, ensure that Client is selected,
and then click Next.
19. On the Wizard Actions page, ensure that only Create the subscription(s) is selected, and then click
Next.
20. On the Complete the Wizard page, review the configuration steps, and then click Finish.
21. On the Creating Subscription(s) page, wait for the operation to complete, and then click Close.
In Object Explorer, click the MIA-SQL\SQL2 instance. Then open TestReplication.sql in the
D:\Labfiles\Lab13\Starter folder.
2.
Select the Transact-SQL statement under the comment Check that data replicated, click Execute,
and then review the results, noting that the query returned four rows.
3.
Select the Transact-SQL statement under the comment Insert a new timesheet row, click Execute,
and then repeat step 2 to view the new row.
4.
In Object Explorer, under MIA-SQL\SQL2, under Replication, expand Local Subscriptions, right-click
[HumanResources] [MIA-SQL].[HumanResources]:HumanResourcesPub, and then click
View Synchronization Status.
5.
In the View Synchronization Status dialog box, in the Last status message field, review the
message, which should read Merge completed after processing 1 data change(s) (1 insert(s), 0
update(s), 0 delete(s), 0 conflict(s)). If this message is not displayed, wait until it appears. Then click
Close.
6.
In Object Explorer, under MIA-SQL, expand Databases, right-click HumanResources and click New
Query.
7.
In the query window, type the following Transact-SQL statement, and then click Execute:
USE HumanResources;
GO
SELECT * FROM Payment.Timesheet;
GO
8.
Review the results, noting that the row you added on MIA-SQL\SQL2 in step 3 has been replicated to
the HumanResources database on the MIA-SQL instance.
9.
Close SQL Server Management Studio, and do not save any changes.
10. In the D:\Labfiles\Lab13\Starter folder, right-click Cleanup.cmd, and then click Run as
administrator.
11. In the User Account Control dialog box, click Yes, and then wait for the script to finish.