Best Practices: SQL Server for OpenText Content Server 16 and 16.2


Product: OpenText Content Server

Version: 16.x, 16.2.x


Task/Topic: Deployment, Administration, Performance
Audience: Administrators
Platform: SQL Server 2012, 2014, 2016 and Azure
Document ID: 500287
Updated: May 29, 2020

Best Practices
Microsoft® SQL Server for OpenText™
Content Server™ 16.x
John Postma, Director, Engineering Services

Rose Liang, Senior Performance Analyst

Feng Xiao, Senior Performance Engineering Analyst

Amarendra Kondra, Performance Architect

Scott Tindal, Performance Architect

Michelle Schwan, Database Architect

OpenText Performance Engineering


Best Practices – SQL Server for Content Server 16.x

Contents
Audience
Executive Summary
General Overview of Content Server Database Connections
Configuration Recommendations
    Installation Options
SQL Server Setup Best Practices
    Antivirus Software
    Backup Compression
    Instant Database File Initialization
    Lock Pages in Memory
    Maximum Degree of Parallelism (MaxDOP)
    Min and Max Memory
    Perform Volume Maintenance Task
    Storage Best Practices
    tempDB Configuration
SQL Server Configuration Settings
    Allocate Full Extent
    AlwaysOn Availability Groups
    Cost Threshold for Parallelism
    Optimize for Ad hoc Workloads
Content Server Database Settings
    Clustered Indexes
    Collation
    Compatibility Level
    Data Compression
    Database Data, Log File Size, and AutoGrowth
    Recovery Model
    Statistics
    Table and Index Fragmentation, Fill Factor
Monitoring and Benchmarking
    Benchmark
    SQL Server Performance Monitoring Tools
        Activity Monitor
        Azure Monitoring
        Management Data Warehouse
Identifying Worst-Performing SQL
    Content Server Connect Logs
        Performance Analyzer
    SQL Server DMVs
Locking
The Information Company™ 2



    Lock Escalation
    Transaction Isolation
Appendices
    Appendix A – References
        Tools
        For additional guidance and help, please join the community of experts
    Appendix B – Dynamic Management Views (DMVs)
        Blocking Sessions
        Cached Query Plans (sys.dm_exec_cached_plans)
        Index Usage (sys.dm_db_index_usage_stats)
        Lock Escalations (sys.dm_db_index_operational_stats)
        PAGELATCH_XX Waits
        Performance of Cached Query Plans (sys.dm_exec_query_stats)
        Queries Using Parallelism
        SQL Server and Database Information Queries
        Table and Index Size and Fragmentation (sys.dm_db_index_physical_stats, sys.dm_db_stats_properties)
        Virtual File Latency (sys.dm_io_virtual_file_stats)
        Waits (sys.dm_os_wait_stats)
    Appendix C – Collation Fix
    Appendix D – When to Update Statistics and Rebuild Indexes


Audience
This document is intended for a technical audience that is planning an
implementation of OpenText™ products. OpenText recommends consulting with
OpenText Professional Services, who can assist with the specific details of
individual implementation architectures.

Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText Corporation for more information.
All tests described in this document were run on equipment located in the
OpenText Performance Laboratory and were performed by the OpenText
Performance Engineering Group. Note that using a configuration similar to that
described in this document, or any other certified configuration, does not
guarantee the results documented herein. There may be parameters or variables
that were not contemplated during these performance tests that could affect
results in other test environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to ensure
that there are no configuration or custom development bottlenecks present that
hinder overall performance.

NOTE: This document is specific to OpenText Content Server 16.


If your environment has OpenText Content Server 10.5 deployed,
refer to:
Best Practices – SQL Server for OpenText Content Server 10.5


Executive Summary
This white paper explores aspects of Microsoft® SQL Server that may be of value
when configuring and scaling OpenText Content Server™ 16.x. It is relevant to
SQL Server 2012, 2014, 2016, and Azure, and is based on customer experiences,
performance lab tests with a typical document management workload, and technical
advisements from Microsoft.
Most common performance issues can be solved by ensuring that the hardware used
to deploy SQL Server has sufficient CPU, RAM and fast I/O devices, properly
balanced.
This document explores non-default options for situations where simply expanding
resources is ineffective, and discusses best practices for administering the Content
Server database. It concentrates on non-default options because, as a
recommended starting point, Content Server on SQL Server installations use
Microsoft's default deployment options. Usage profiles vary widely, so any actions
taken based on topics discussed in this paper must be verified in your own
environment before production deployment, and a rollback plan must be available in
case adverse effects are detected.
These recommendations are not intended to replace the services of an experienced
and trained SQL Server database administrator (DBA), and do not cover standard
operational procedures for SQL Server database maintenance, but rather offer
advice specific to Content Server on the SQL Server platform.

DISCLAIMER: This Best Practices document contains
procedures and sample SQL code. All procedures and sample
code are specific to the scenario presented in the document, are
delivered as is, and are for educational purposes only. They are
presented as a guide to supplement official OpenText product
documentation. Procedures on deployments using other
operating systems and database application platforms may differ
in some areas. Deployments with additional optional modules
may also add complexity.


General Overview of Content Server Database Connections
Content Server 16.x creates three persistent database connections per thread.
These are threads that handle user requests to the application. All user requests are
pooled in this manner, and use the database sessions established using the
database schema owner account. For example, perhaps your Content Server site
supports 10,000 users. Each of these application users has their own account in
most cases, but at the database level all requests are coming in on perhaps a few
dozen connections, all owned by the schema owner. This is a typical web application
architecture.
In addition to the front-end threads, there is also a database connection for certain
background threads, such as the notifications/agents thread.
In Content Server 10.5 and above, there is a framework called "Distributed Agents".
This is a framework of threads used to parallelize large batch operations in the
background, such as the calculation of facet counts for the new faceted browse
functionality. Distributed Agents will also create database connections.
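To observe this connection pattern, one hedged approach is to query the sessions DMV for connections made by the schema owner account ('dbuser' is a placeholder; substitute your actual schema owner login):

```sql
-- Count sessions established by the Content Server schema owner login,
-- grouped by the host and program that opened them.
SELECT login_name,
       host_name,
       program_name,
       COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE login_name = 'dbuser'
GROUP BY login_name, host_name, program_name;
```

Even on a busy site, this typically shows a modest number of pooled connections rather than one per application user.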


Configuration Recommendations
This section provides recommendations for configuring an SQL Server instance to
host a Content Server ECM repository.

Installation Options
Consider the following tips and guidelines when initially installing Content Server for
use with SQL Server:
• Use the configuration options in SQL Server Configuration Settings and Content
Server Database Settings
• For Microsoft Azure SQL Databases
o The database must be created in the Microsoft Azure Portal (see article
Azure Create a SQL Database)
o The database can be managed in SQL Server Management Studio (SSMS)
2016 and in the Microsoft Azure Portal
The user account for the Azure database should be created using SQL Server
Management Studio (SSMS). Replace the database user, database name, and
password with your own values.
CREATE USER dbuser WITH PASSWORD = 'pwd',
DEFAULT_SCHEMA = dbuser
CREATE SCHEMA dbuser AUTHORIZATION dbuser
ALTER ROLE db_owner ADD MEMBER dbuser
GRANT CONNECT TO dbuser
Apply database configurations.
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 110
The core Content Server database is installed using the Create Tables In Existing
Database option in Content Server, with the Azure user account.
• For SQL Server 2016 and prior
The core Content Server database is installed using the sa SQL Server account.
DBAs who are uncomfortable allowing an application to log into the SQL Server
database with the sa account can instead create the database and user account
themselves.
To create the database, the DBA can run the SQL commands below or use the
SQL Server Management Studio (SSMS) GUI. Replace the database name, logical
names, file names, sizes, and password with your own values.
CREATE DATABASE dbname
ON PRIMARY (NAME = 'livelink_Data',
FILENAME = 'C:\databases\dbname.mdf', SIZE = 5MB,
MAXSIZE = UNLIMITED)
LOG ON (NAME = 'livelink_Log',
FILENAME = 'C:\databases\dbname.ldf', SIZE = 1MB,
MAXSIZE = UNLIMITED)


Apply database configurations.


ALTER DATABASE dbname SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE dbname SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE dbname SET RECOVERY FULL
-- SIMPLE is the Content Server default setting; FULL requires
-- that backup locations be configured
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 110
Create user account.
CREATE LOGIN dbuser WITH PASSWORD = 'pwd',
DEFAULT_DATABASE=dbname
The core Content Server database will need to be installed using the Create Tables
In Existing Database option in Content Server.


SQL Server Setup Best Practices


OpenText recommends that you install and configure SQL Server following
Microsoft’s recommendations for best performance. This section covers many SQL
Server settings, and refers to Microsoft documentation where applicable.
In addition to configuring the settings described in this section, OpenText
recommends that you install the latest SQL Server Service Pack that is supported by
your Content Server Update level (see Content Server Release Notes).
Some of these settings may be version specific.

Antivirus Software
Description Antivirus software scans files and monitors activity to prevent,
detect, and remove malicious software. Guidelines for antivirus
software configuration are provided in the Microsoft support article:
File Locations for Default and Named Instances of SQL Server.

Default Depends on vendor.

Recommendation Exclude all database data and log files from scanning (including
tempDB). Exclude SQL Server engine process from active
monitoring.

Notes Follow the Microsoft support article for SQL Server version-specific
details.

Backup Compression
Description Compressing backups reduces their size and the I/O required to
write them, which increases backup speed. See Backup Compression.

Default Compression is not enabled.

Recommendation Microsoft recommends configuring compression on individual
backups. SQL Server editions from 2008 and higher can
restore a compressed backup.
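As a sketch, compression can be requested for a single backup with the WITH COMPRESSION clause, or made the instance default via sp_configure (dbname and the backup path below are placeholders):

```sql
-- Compress this backup only.
BACKUP DATABASE dbname
TO DISK = 'C:\backups\dbname.bak'
WITH COMPRESSION;

-- Or make compression the default for all backups on this instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```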


Instant Database File Initialization


Versions SQL Server 2012, 2014 and 2016.

Description Allows faster creation and auto-growth of database and log files by
not filling reclaimed disk space with zeroes before use. See the
article, Database Instant File Initialization.

Default Not enabled.

Recommendation Enable this feature by assigning the SE_MANAGE_VOLUME_NAME
user right to the SQL Server service account. (It appears as
Perform Volume Maintenance Tasks in the Local Security Policy
tool's User Rights Assignment list.)

Notes Microsoft states that, because deleted disk data is overwritten only
when data is written to files, an unauthorized principal who gains
access to data files or backups may be able to access the deleted
content. Ensure that access to these files is secured, or disable this
setting when potential security concerns outweigh the performance
benefit.
If the database has Transparent Data Encryption enabled, it
cannot use instant initialization.
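On SQL Server 2012 SP4, 2016 SP1, and later builds, you can verify whether instant file initialization is in effect for the service account (earlier builds do not expose this column):

```sql
-- 'Y' indicates the service account holds the
-- Perform Volume Maintenance Tasks right.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```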

Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.


Lock Pages in Memory


Versions SQL Server 2012, 2014 and 2016.

Description Memory for the buffer pool is allocated in a way that makes it
non-pageable, avoiding delays that can occur when information has to
be loaded from the page file. See the article: Enable the Lock
Pages in Memory Option.

Default Not set.

Recommendation Enable this feature by assigning the Lock pages in memory to the
SQL Server Service account in the User Rights Assignment folder
in Windows Security Local Policies.
If you enable this setting, be sure to also set max memory
appropriately to leave sufficient memory for the operating system
and other background services.
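On SQL Server 2016 SP1 and later, one way to confirm the setting took effect is to check the memory model the engine reports (LOCK_PAGES indicates the right was granted; CONVENTIONAL indicates it was not):

```sql
-- Returns CONVENTIONAL, LOCK_PAGES, or LARGE_PAGES.
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;
```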

Notes If SQL Server is running in a virtual environment, be aware that, if
max memory is set too high, there is a potential for memory
overcommitment, which can decrease available memory,
leading to memory pressure and potential problems.

Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.


Maximum Degree of Parallelism (MaxDOP)


Description MaxDOP controls the maximum number of processors that are
used for the execution of a query in a parallel plan. See the article,
Configure the max degree of parallelism.
Parallelism is often beneficial for longer-running queries or for
queries that have complicated execution plans. However, OLTP-
centric application performance can suffer, especially on higher-end
servers, when the time that it takes SQL Server to coordinate a
parallel plan outweighs the advantages of using one.

Default 0 (unlimited)

Recommendation Consider modifying the default value when SQL Server experiences
excessive CXPACKET wait types.
For non-NUMA servers, set MaxDOP no higher than the number of
physical cores, to a maximum of 8.
For NUMA servers, set MaxDOP to the number of physical cores
per NUMA node, to a maximum of 8.
Note: Non-uniform memory access (NUMA) is a processor
architecture that divides system memory into sections that are
associated with sets of processors (called NUMA nodes). It is
meant to alleviate the memory-access bottlenecks that are
associated with SMP designs. A side effect of this approach is that
each node can access its local memory more quickly than it can
access memory on remote nodes, so you can improve performance
by ensuring that threads run on the same NUMA node.
Also see the Cost Threshold for Parallelism section for related
settings that restrict when parallelism is used, to allow best
performance with Content Server.
Note: Any value that you consider using should be thoroughly
tested against the specific application activity or pattern of queries
before you implement that value on a production server.
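As a sketch, MaxDOP can be changed at the instance level with sp_configure (the value 8 here is illustrative; derive yours from the physical core or NUMA node counts above, and test before deploying):

```sql
-- MaxDOP is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Limit any single parallel query to 8 schedulers.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```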

Notes Several factors can limit the number of processors that SQL Server
will utilize, including:
• licensing limits related to the SQL Server edition
• custom processor affinity settings and limits defined in a
Resource Governor pool
These factors may require you to adjust the recommended
MaxDOP setting. See related reference items in Appendix A –
References for background information.
See Appendix B – Dynamic Management Views (DMVs) for
examples of monitoring SQL Server wait types.

Permissions Changing this setting requires the ALTER SETTINGS server-level
permission.


Min and Max Memory


Versions SQL Server 2012, 2014 and 2016

Description The min server memory and max server memory settings
configure the amount of memory that is managed by the SQL
Server Memory Manager. SQL Server will not release memory
below the min server memory setting, and will not allocate more
than the max server memory setting while it runs. See the article,
Server Memory Configuration Options.

Default The default setting for min server memory is 0, and the default
setting for max server memory is 2,147,483,647 MB. SQL
Server dynamically determines how much memory it will use, based
on current activity and available memory.

Recommendation On a server dedicated to a single SQL Server instance, leaving
SQL Server to dynamically manage its memory usage can provide
the best results over time, but min and max memory should be set
when:
• The Lock pages in memory setting is enabled. You should
set the max memory to a value that leaves sufficient
memory for the parts of SQL Server that are not included
in the max server memory setting (thread stacks,
extended SPs, and so on), the operating system, and other
services.
• More than one SQL Server instance, or other services, are
hosted on the server. Setting max memory for each
instance will ensure balanced memory use.
• Memory pressure could cause SQL Server memory use to
drop to a level that affects performance. Set minimum
memory to a value that maintains stable performance.
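A minimal sketch of setting explicit bounds with sp_configure; the values are illustrative for a server with 64 GB of RAM, so size yours per the guidance above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Values are in MB; leave headroom for the OS and other services.
EXEC sp_configure 'min server memory (MB)', 8192;
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```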

Notes Monitor available memory and adjust as needed.


SQL Server is not guaranteed to allocate the amount of memory
specified in min server memory. If the load on the server never
requires allocating the amount of memory specified in min server
memory, SQL Server will run with less memory.
For more information about the amount of RAM and how to set max
server memory, see Max Server Memory Matrix.

Permissions Changing this setting requires the ALTER SETTINGS server-level
permission.


Perform Volume Maintenance Task


Version SQL Server 2016.

Description This replaces the manual grant used for Instant Database File
Initialization: the right can now be granted during the installation
of SQL Server. For more information see the MSDN article
Instant File Initialization.

Default Not enabled.

Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.

Storage Best Practices


Description SQL Server is an I/O-intensive application. Proper configuration of
I/O subsystems is critical to achieve optimal performance.
With the wide variety of storage types available, it is difficult to make
specific recommendations. Microsoft provides some guidelines in
Hardware and Software Requirements and Azure Performance Best
Practices.
For the purposes of characterizing expected I/O patterns, Content
Server is primarily an OLTP-type application.
This section covers a few specific topics related to storage, but it is
not meant to be a comprehensive guide for storage planning.

Default Windows NTFS default cluster size is 4 KB.

Recommendation Microsoft recommends a cluster size of 64 KB for partitions that
house SQL Server data, log, and tempDB files.
Transaction log and tempDB data files have the most impact on
query performance, so Microsoft recommends placing them on
RAID 10 storage. (This provides the best performance compared to
other RAID levels that provide data protection.)
If you use a SAN, increase the host bus adapter (HBA) queue depth
as needed to support the amount of IOPS generated by SQL
Server.
Use a tool such as the DiskSpd Utility to benchmark and
understand the I/O characteristics of available storage, and to aid in
planning the location of the Content Server database, transaction
log, and tempDB files.

Notes Also see the sections tempDB Configuration and Database Data,
Log File Size, and AutoGrowth for other recommendations related
to data and log files.


tempDB Configuration
Versions SQL Server 2012, 2014 and 2016.

Description The tempDB is a global resource that stores user objects (such as
temp tables) and internal objects (such as work tables, work files,
and intermediate results for large sorts and index builds). When
snapshot isolation is used, the tempDB also stores the before images
of rows that are being modified, to allow for row versioning and
consistent committed-read access.

Default Single data file.

Recommendation The tempDB has a large impact on Content Server performance.
Follow these guidelines for best results:
• Create one data file per physical core, up to a maximum of
eight. If the tempDB continues to experience allocation
contention, add four files at a time (up to the total number
of logical processors).
• Make each data file the same size. Each one should be
large enough to accommodate a typical workload. (As a
general rule, set it to one-and-a-half times the size of the
largest single table in any database used by the instance.)
Allow auto-growth to accommodate usage spikes.
• Place these files on your fastest available storage.
• Ensure that the tempDB has the same collation as the
Content Server database.
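The guidelines above can be applied with ALTER DATABASE. This sketch assumes four physical cores, a 4 GB working size per file, and a placeholder path (T:\tempdb) on fast storage:

```sql
-- Resize the default primary tempDB data file (logical name tempdev)
-- and add three more equally sized files, one per remaining core.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
     SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdev3.ndf',
     SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb\tempdev4.ndf',
     SIZE = 4096MB, FILEGROWTH = 512MB);
```

Keeping every file the same size matters because SQL Server's proportional-fill algorithm only balances allocations across files of equal size.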

Notes Be mindful of factors that can limit the number of processors SQL
Server will utilize, and set the number of tempDB files accordingly.

Monitoring Monitor latch waits related to pages in tempDB. PAGELATCH_XX
wait types can indicate tempDB contention. Appendix B – Dynamic
Management Views (DMVs) provides sample queries that you can
use to monitor waits and identify active tasks that are blocked on
tempDB.
Monitor the space used in and the growth of tempDB, and adjust
tempDB size as needed.

Permissions Adding or modifying tempDB data files requires the ALTER
permission on the tempDB database.


SQL Server Configuration Settings


Global server settings that affect all databases on an instance.

Allocate Full Extent


Description Trace flag 1118 causes SQL Server to allocate a full extent to each
database object rather than one page at a time, which can reduce
contention on SGAM pages.
In SQL Server 2016 and Azure, this behavior is controlled per
database via the following ALTER commands (uniform extents are
already the default for user databases in SQL Server 2016). See the
article, Changes in default behavior:
ALTER DATABASE <dbname> SET MIXED_PAGE_ALLOCATION OFF
ALTER DATABASE <dbname> MODIFY FILEGROUP
<filegroupname> AUTOGROW_ALL_FILES
See the article: Recommendations to reduce allocation contention
in SQL Server tempDB database.

Default Not enabled

Recommendation Consider enabling this flag if latch waits on pages in tempDB cause
long delays that are not resolved by the recommendations in the
tempDB section.
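On versions prior to SQL Server 2016, the trace flag can be enabled globally; a -T1118 startup parameter is the persistent option, while DBCC TRACEON applies it only until the next service restart:

```sql
-- Enable trace flag 1118 globally until the next restart.
DBCC TRACEON (1118, -1);
-- Confirm the flag is active.
DBCC TRACESTATUS (1118);
```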

AlwaysOn Availability Groups


Description AlwaysOn Availability Groups are a high-availability and
disaster-recovery solution that supports a failover environment for a
set of user databases. See the article, AlwaysOn Availability
Groups.

Default Disabled

Recommendation Content Server supports the SQL Server 2014 and 2016 AlwaysOn
solution with the ODBC SQL Server Native Client 11.0 driver.
Follow the documentation from Microsoft to configure
AlwaysOn: Overview Always On Availability Groups; Step By Step
Creating AlwaysOn. The Content Server system should connect to
the availability group listener (virtual IP).

Notes When using the SQL Server Native Client driver, an entry in the
opentext.ini file is required.
With this INI entry, the SQL Server Native Client driver behaves
differently: it appears to leave a transaction open, but in fact it closes
the transaction and opens a new one.
Workloads with a high rate of workspace creation will be less
performant with AlwaysOn synchronous commit.


Cost Threshold for Parallelism


Description The Cost Threshold for Parallelism is the threshold at which SQL
Server will create and run a parallel plan for a query. If the
estimated cost for a serial plan is higher than this value, SQL Server
uses a parallel plan. This setting is ignored and a serial plan is
always used if:
• the server has only one processor
• affinity settings limit SQL Server to one processor
• MaxDOP is set to 1

Default 5

Recommendation Content Server mainly issues small OLTP-type queries where the
overhead of parallelism outweighs the benefit, but it does issue a
small number of longer queries that may run faster with parallelism.
OpenText recommends that you increase the cost threshold setting
in combination with configuring the Maximum Degree of Parallelism
(MaxDOP) setting. This reduces the overhead for smaller queries,
while still allowing longer queries to benefit from parallelism. See
the article, Configure the cost threshold.
The optimal value depends on a variety of factors including
hardware capability and load level. Load tests in the OpenText
performance lab achieved improved results with a Cost Threshold
of 50, which is a reasonable starting point. Monitor the following and
adjust the cost threshold as needed:
• CXPACKET wait type: When a parallel plan is used for a
query there is some overhead coordinating the threads
that are tracked under the CXPACKET wait. It is normal to
have some CXPACKET waits when parallel plans are used.
However, if it is one of the highest wait types, further
changes to this setting may be warranted. See Appendix
B – Dynamic Management Views (DMVs) for examples of
querying DMVs for wait info.
• See Appendix B – Dynamic Management Views (DMVs)
for examples of querying DMVs for queries using
Parallelism.
• THREADPOOL wait type: If many queries are using a
parallel plan, there can be periods when SQL Server uses
all of its available worker threads. Time spent by a query
waiting for an available worker thread is tracked under the
THREADPOOL wait type. If this is one of the highest wait
types, it may be an indication that too many queries are
using parallel plans. In this case, the cost threshold for
parallelism should be increased, or maximum worker
threads increased if the system is not experiencing CPU
pressure. However, there can be other causes for an
increase in this wait type (blocked or long running queries),
so it should only be considered in combination with a more
comprehensive view of query performance and locking.
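The lab-tested starting point above can be applied with sp_configure (50 is only a starting value; monitor the wait types described and adjust):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 50 was effective in OpenText performance lab tests;
-- tune against your own workload.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```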

Permissions Changing this setting requires the ALTER SETTINGS server-level
permission.

Optimize for Ad hoc Workloads


Description The ad hoc caching mechanism can reduce stress on memory-
bound systems. It caches a stub of the query plan, and stores the
full plan only if a query is issued more than once. This prevents the
cache from being dominated by plans that are not reused, freeing
space for more frequently accessed plans.
Turning this on does not affect plans already in the cache, only new
plans created after enabling the setting. For more details, see
article Optimize for ad hoc workloads.

Default Off

Recommendation When there is memory pressure, and the plan cache contains a
significant number of single-use plans, enable this setting.

Monitoring Check the portion of the plan cache used by single use queries:
see Appendix B – Dynamic Management Views (DMVs) Cached
Query Plans.
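The setting can be enabled with sp_configure, and the share of the plan cache held by single-use plans can be estimated from sys.dm_exec_cached_plans. This is a sketch of the check described above, simplified from the Appendix B examples.

```sql
-- Enable Optimize for Ad hoc Workloads (advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Estimate how much of the plan cache is held by single-use plans
SELECT objtype,
       COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype;
```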


Content Server Database Settings


These settings are specific to the Content Server database.
Pay particular attention to the section on Compatibility Level; it affects the
performance of Content Server on SQL Server 2014 and later.

Clustered Indexes
Description Clustered indexes store data rows for the index columns in sorted
order. In general, the primary key or the most frequently used index
on each table is a good candidate for a clustered index. This is
especially important for key highly-active core tables. Only one
clustered index can be defined per table.

Default In Content Server 10.5 and later, many tables in the Content Server
database have a clustered index.

Recommendation OpenText does not recommend making schema changes such as


adding clustered indexes to tables in the Content Server database.
Additional clustered indexes may be added in future releases of
Content Server.

Notes One benefit of clustered indexes is to avoid potential blocking by the


ghost record cleanup process when there is a high volume of
deletes (such as with Records Management Disposition). Without
clustered indexes, the cleanup process may require a table lock to
scan for ghost records, blocking other operations.
There are other situations where the ghost record cleanup process
might fall behind the rate of row deletions. You can monitor this
using the sys.dm_db_index_physical_stats (see Appendix B
– Dynamic Management Views (DMVs)). Review columns
Ghost_Record_Count (ghost records ready for cleanup) and
Version_Ghost_Record_Count (ghost records retained by an
outstanding snapshot isolation transaction).
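The ghost record monitoring described above can be sketched with a direct query against the DMV. Note that the ghost record columns are only populated in SAMPLED or DETAILED mode, and a DETAILED scan of a large database can be I/O intensive.

```sql
-- Ghost record counts per index for the current database (DETAILED mode
-- is required to populate the ghost record columns)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
WHERE ips.ghost_record_count > 0
ORDER BY ips.ghost_record_count DESC;
```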


Collation
Description The collation for a database defines the language and character set
used to store data, and sets the rules for sorting and comparing
characters. It also determines case sensitivity, accent sensitivity,
and kana sensitivity.

Default When SQL Server is installed, it derives its default server-level


collation from the Windows system locale. See article, Collation
and Unicode Support.
The default collation for a new database is the same as the SQL
Server default setting. It is inherited from the MODEL database,
where it cannot be changed.
Databases restored from a backup originating from another server
will retain their original collation.

Recommendation To avoid potential issues, provide best performance, and ensure


compatibility with other products on the same server, OpenText
recommends the following:
• For new SQL Server installations, select a collation that is
case-sensitive and accent-sensitive for compatibility with
other suite products (such as Archive Server).
• Ensure that the Content Server database has the same
collation as the server (and hence the same as system
databases like tempDB).
• Ensure that the collation for all objects in the Content
Server database are the same as the database collation.
The exception to this is the WebNodesMeta_XX tables,
which may derive a different collation from settings on the
Configure Multilingual Metadata administration page.
• Contact customer support for assistance if you have an
existing deployment that has a database collation that is
different from the server’s, or that has tables or columns
(other than WebNodesMeta_XX tables) that have a
collation that is different from the database collation.

Notes Refer to the scripts in SQL Server and Database Information
Queries for database collation, and for column collations that differ
from the database collation.
Refer to Appendix C – Collation Fix for suggestions on fixing
collation issues.
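A quick check of server, database, and column collations can be sketched as below. Remember that the WebNodesMeta_XX tables may legitimately differ, as noted above.

```sql
-- Collation of the server and the current database
SELECT SERVERPROPERTY('Collation') AS server_collation,
       DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation;

-- Columns whose collation differs from the database collation
SELECT t.name AS table_name, c.name AS column_name, c.collation_name
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> CONVERT(sysname,
      DATABASEPROPERTYEX(DB_NAME(), 'Collation'))
ORDER BY t.name, c.name;
```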


Compatibility Level
Description The database compatibility level sets certain database behaviors to
be compatible with the specified version of SQL Server.

Default The compatibility level for newly created databases is the same as
the MODEL database which, by default, is the same as the installed
version of SQL Server.
When upgrading the database engine, compatibility level for user
databases is not altered, unless it is lower than the minimum
supported. Restoring a database backup to a newer version also
does not change its compatibility level.

Recommendation In general, using the latest compatibility mode allows the Content
Server database to benefit from all performance improvements in
the installed SQL Server version.
However, because of performance issues with SQL Server 2014
and later, set the compatibility level to SQL Server 2012 (110); see
article Alter Database Compatibility Level:
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 110
When you change the compatibility level of the Content Server
database, be sure to update statistics on the database after making
the change.
As an alternative, you can use trace flag 9481 and leave the
compatibility level unchanged. Using trace flag 9481 affects all of
the databases on the SQL Server instance. For more details, see
Technical Alert.
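The check-change-refresh sequence described above can be sketched as follows; replace <dbname> with the Content Server database name.

```sql
-- Check the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = '<dbname>';

-- Set SQL Server 2012 behavior, then refresh statistics
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 110;
USE <dbname>;
EXEC sp_updatestats;
```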


Data Compression
Description SQL Server offers data compression at the row and page level, but
only in the Enterprise Edition. Compression reduces I/O and the
amount of storage and memory used by SQL Server. It only adds a
small amount of overhead in the form of additional CPU usage.

Default Not compressed.

Recommendation When storage space, available memory, or disk I/O are under
pressure, and the database server is not CPU-bound, consider
using compression on selected tables and indexes.
Microsoft recommends compressing large objects that have either a
low ratio of update operations, or a high ratio of scan operations.
See articles Data Compression, Enable Compression on a Table
or Index, and sp_estimate_data_compression_savings.
You can automate the process using a script. (An example of this
type of approach and a sample script, which was used for internal
testing, is covered in the Practical Data Compression article.) The
script analyzes the usage of Content Server tables and indexes that
have more than 100 pages and selects candidates for compression.
It estimates the savings from row or page compression, and
generates a command to implement the recommended
compression. The script relies on usage data from the DMVs, so it
should be run after a period of representative usage.
Overall impact from compression on performance, storage,
memory, and CPU will depend on many factors related to the
environment and product usage. Testing in the OpenText
performance lab has demonstrated the following:
Performance: For load tests involving a mix of document-
management operations, with a small set of indexes compressed
based on only high-read-ratio indexes, there was minimal
performance impact. When a larger set of tables and indexes was
compressed, performance was less consistent, and degraded by up
to 20%. For high-volume ingestion of documents with metadata,
there was no impact on ingestion throughput.
CPU: CPU usage increased by up to 8% in relative terms.
MDF File Storage: Reduced by up to 40% depending on what was
compressed. Specific large tables like LLAttrData were reduced by
as much as 82%.
I/O: Read I/O on MDF files reduced up to 30%; write I/O up to 18%.
Memory Usage: SQL Buffer memory usage reduced up to 25%. As
with any configuration change, test the performance impact of any
compression changes on a test system prior to deploying on
production systems.
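The estimate-then-compress flow can be sketched as below. LLAttrData is used only because it is cited above as a large Content Server table; run the estimate for each candidate before compressing, and test on a non-production system first.

```sql
-- Estimate savings before compressing
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'LLAttrData',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression to the table's indexes if the estimate is favorable
ALTER INDEX ALL ON dbo.LLAttrData
REBUILD WITH (DATA_COMPRESSION = PAGE);
```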

Notes It can take longer to rebuild indexes when they are compressed.


Database Data, Log File Size, and AutoGrowth


Versions SQL Server 2012, 2014 and 2016.

Description Data files contain data and objects such as tables and indexes. Log
files contain the information needed to recover transactions. These
files can grow automatically from their original size. This growth
adds to the size of the database (see article Database Files and
Filegroups).
Autogrowth of log files can cause delays. Frequent growth of
data or log files can cause them to become fragmented, which may
lead to performance issues.

Default SQL Server 2014 and later:


Data File: Initial size: 4 MB; Autogrowth: By 1 MB, unlimited
Log File: Initial size: 2 MB; Autogrowth: By 10 percent, unlimited

Recommendation Optimal data and log file sizes depend on the specific
environment. In general, it is preferable to size the data and log
files to accommodate expected growth, so that you avoid frequent
autogrowth events.
Leave Autogrowth enabled to accommodate unexpected growth.
A general rule is to set autogrow increments to about one-eighth the
size of the file. See article Defining Auto-Growth Settings.
Leave the autoshrink parameter set to False for the Content
Server database.
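Pre-sizing and fixed growth increments can be set as sketched below. The logical file names and sizes are placeholders; look up the actual names in sys.database_files and size to your expected data volume.

```sql
-- Pre-size files and use fixed-size growth increments (values are illustrative)
ALTER DATABASE <dbname>
MODIFY FILE (NAME = N'<logical_data_file_name>', SIZE = 40GB, FILEGROWTH = 5GB);
ALTER DATABASE <dbname>
MODIFY FILE (NAME = N'<logical_log_file_name>', SIZE = 8GB, FILEGROWTH = 1GB);
```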

Monitoring Monitor autogrowth events to understand when the database grows.


The database_file_size_change extended event can be used
to track growth events, or use the PerfMon SQL Server Databases -
Data File Size counter.

Notes Refer to script Database File Size in Appendix B – Dynamic


Management Views (DMVs).


Recovery Model
Versions SQL Server 2012, 2014 and 2016.

Description The recovery model controls how SQL Server maintains transaction
logs for each database. For more information, see article
Recovery Models.

Default SIMPLE is the default model. It does not support backups of the
transaction logs.

Recommendation Content Server supports both SIMPLE and FULL recovery models.
SIMPLE recovery is initially set when creating the database with
Content Server. FULL recovery requires setting the recovery
directories and can only be configured manually by a SQL Server
DBA. The DBA can change the recovery setting after the database
has been created, or can create the database using the steps in
Installation Options.
The Content Server database should be configured to FULL. This
requires the DBA to set the transaction log backups to prevent the
log file from growing too large. BULK LOGGED is not
recommended as it is not compatible with many of the operations
that Content Server can perform.
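Switching to FULL and starting regular log backups can be sketched as below; the backup path is a placeholder, and in practice the log backup would be scheduled by the DBA (for example as a SQL Server Agent job).

```sql
-- Switch the Content Server database to the FULL recovery model
ALTER DATABASE <dbname> SET RECOVERY FULL;

-- With FULL recovery, back up the log regularly to keep it from
-- growing unchecked (path is illustrative)
BACKUP LOG <dbname> TO DISK = N'<backup_path>\<dbname>_log.trn';
```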

Notes SIMPLE Recovery Model is typically used in test systems or in


databases that are stagnant. The transaction log space is
reclaimed once the transaction has completed.
FULL Recovery Model is typically used for critical data. Transaction
logs will need to be backed up on a regular basis - controlled by the
DBA.
BULK LOGGED Recovery Model is typically used to avoid logging
irregular bulk insert, create index, and select into statements. It
works the same as Full with that exception.


Statistics
Description The query optimizer uses statistics to aid in creating high-quality
query plans that improve performance. Statistics contain
information about the distribution of values in one or more columns
of a table or view. They are used to estimate the number of rows in
a query result. See articles, Statistics.
The following settings control the automation of the statistics:
AUTO_CREATE_STATISTICS: When set to TRUE, automatically
build any missing statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS: When set to TRUE, automatically
build any out-of-date statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS_ASYNC: When set to TRUE and
AUTO_UPDATE_STATISTICS is set to TRUE, queries that initiate
an automatic update of out-of-date statistics will not wait for the
statistics to be updated before compiling.

Default The first two settings above are on by default, and the third is off.
All settings can be changed in the MODEL database.

Recommendation Maintaining up-to-date statistics is a key factor in allowing SQL


Server to generate plans with optimal performance.
OpenText recommends using the default values for these settings,
to allow the query optimizer to automatically create and update
statistics as needed.
Use trace flag 2371 to improve the auto-update-statistics
threshold for databases at compatibility level 120 or lower. See
the article, Default Auto Statistics.
In SQL Server 2016, the behavior of trace flag 2371 is enabled
automatically for databases at compatibility level 130, so the flag
does not need to be set.
You can update statistics with the UPDATE STATISTICS statement
or the sp_updatestats stored procedure. This is useful when
your site has a large amount of update activity, or if you want
to update statistics on a more predictable schedule.
Note that the sp_updatestats stored procedure updates
statistics on all tables that have one or more rows modified. It is
normally preferable to use an UPDATE STATISTICS statement to
update statistics on specific tables, as needed.
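Both approaches described above can be sketched as follows. DVersData is used only as an example of a Content Server table; target whichever tables show heavy modification in sys.dm_db_stats_properties.

```sql
-- Update statistics on a specific, heavily modified table
UPDATE STATISTICS dbo.DVersData WITH FULLSCAN;

-- Or update statistics across all tables that have changed rows
EXEC sp_updatestats;
```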

Monitoring Use the sys.dm_db_stats_properties DMV to track the


number of rows updated in each table, and which statistics may
need to be updated during maintenance windows. See the article,
Object Statistics.

Notes Updating statistics causes queries to recompile, which can be time-


consuming. Any strategy to update statistics must balance the
benefit of generating improved query plans with the need to avoid
overly frequent query recompilation.
A rebuild of an index will also update statistics.

Permissions Update Statistics requires the ALTER permission on the table or
view.

Table and Index Fragmentation, Fill factor


Description As data is modified, index and table pages can become fragmented,
leading to reduced performance. You can mitigate this by regularly
reorganizing or rebuilding indexes that have fragmentation levels
above a certain threshold.
Fragmentation can be avoided, or reduced, by setting a fill factor for
indexes. This leaves space for the index to grow without needing
page splits that cause fragmentation. This is a tradeoff, because
setting a fill factor leaves empty space in each page, consuming
extra storage space and memory.
See articles Specify Fill Factor and Reorganize and Rebuild
Indexes for more information.

Default Server index fill factor default is 0 (meaning fill leaf-level pages to
capacity).

Recommendation Index Fragmentation


When index fragmentation is between 5% and 30%, reorganize;
when it is greater than 30%, rebuild. See Appendix B –
Dynamic Management Views (DMVs) for sample queries that list
fragmentation levels for tables and indexes, and that automatically
generate reorganize or rebuild commands for indexes.
Fill Factor
Index fill factor can be set based on an analysis of how often each
table is updated. Set static tables at the default value of 0. For
infrequently and frequently updated tables, you can refer to the SQL
Server Pro blog post, What is the Best Value for the Fill Factor.
The Index Usage section of Appendix B – Dynamic Management
Views (DMVs) has a sample query that shows the counts of index
operations per index. These are based on the last SQL Server
restart, so you should run it after a period of representative usage.
Frequently updated tables will encounter fragmentation over time.
Index rebuilds reset the fill factor, but index reorganize does not.
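The reorganize/rebuild thresholds above can be sketched as follows. The table name is illustrative, and the fill factor of 90 is an example value to be tuned per table as discussed above.

```sql
-- Light fragmentation (5-30%): reorganize in place
ALTER INDEX ALL ON dbo.LLAttrData REORGANIZE;

-- Heavy fragmentation (>30%): rebuild, applying a fill factor;
-- ONLINE = ON requires Enterprise Edition
ALTER INDEX ALL ON dbo.LLAttrData
REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);
```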

Notes A table lock is held for the duration of an index rebuild by default,
preventing user access to the table. Specifying ONLINE=ON in the
command avoids the table lock, allowing user access to the table
during the rebuild. However, this feature is available only in
Enterprise editions of SQL Server. In SQL Server 2012 and 2014,
corruption issues can occur in certain cases. See article, FIX: Data
corruption.

Monitoring To observe the rate of page splits and to help evaluate the
effectiveness of fill factor settings, use the PerfMon counter
SQLServer:Access Methods – Page Splits/sec. This counter
includes both mid-page splits (which cause fragmentation) and
end-page splits (which occur as an index grows).

Permissions Rebuilding or reorganizing an index requires the ALTER permission
on the table.

Target Recovery Time


Versions SQL Server 2016, 2014, 2012

Description Target recovery time sets an upper bound on how long database
recovery should take by controlling the frequency of indirect
checkpoints. Tuning it can reduce I/O spikes. For more
information, see article Change Target Recovery Time.

Default 0 (SQL Server 2016 default is 60)

Recommendation The default setting of 0 causes SQL Server to use the legacy
recovery interval, which can produce I/O spikes. Set the value to
60 seconds:
ALTER DATABASE <databasename>
SET TARGET_RECOVERY_TIME = 60 SECONDS;
Check the Windows Performance Monitor to see if the I/O spikes
have decreased. If they have not, adjust the value again.
For more information, see article Adjust Target Recovery Time.

Notes When upgrading a SQL Server instance from a version prior to
2016, the target_recovery_time will remain at the old default
of 0. Make sure the value is reviewed and updated after that
type of upgrade.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;


Monitoring and Benchmarking


To conduct a comprehensive health and performance check of OpenText Content
Server on SQL Server, you should collect a number of metrics for a pre-defined
monitored period. This monitored period should represent a reasonably typical
usage period for the site, and include the site’s busiest hours. Performing a
benchmark establishes a baseline of expected response times and resource usage
for typical and peak loads in your Content Server environment. You can use the
baseline to identify areas for potential improvement and for comparisons to future
periods as the site grows, and you apply hardware, software, or configuration
changes.

Benchmark
Collect the following as the basis for a benchmark and further analysis of worst-
performing aspects:
• For Azure, the server is not directly available to you. Review Microsoft
Documentation for Azure SQL Database Benchmark Overview.

• For SQL Server 2016/2014/2012:

o Collect operating-system and resource-level operating statistics,


including CPU, RAM, I/O, and network utilization on the database server.
How you collect these statistics depends on the hardware and operating
system that you use and the monitoring tools that are available to you.
Performance Monitor (PerfMon) is a tool that is natively available on
Windows servers. If you use PerfMon, include the counters in the
following table as a minimum. Consider also using the PAL tool with the
SQL Server 2012 threshold file to generate a PerfMon template. This
template will contain relevant SQL Server performance counters and will
analyze captured PerfMon log files.


Memory Pages/sec, Pages Input/sec, Available MBytes.


In general, available memory should not drop below 5% of physical
memory. Depending on disk speed, pages/sec should remain
below 200.

Physical Disk Track the following counters per disk or per partition: % Idle Time,
Avg. Disk Read Queue Length, Avg. Disk Write Queue Length, Avg.
Disk sec/Read, Avg. Disk sec/Write, Disk Reads/sec, Disk
Writes/sec, Disk Write Bytes/sec, and Disk Read Bytes/sec.
In general, % Idle Time should not drop below 20%. Disk queue
lengths should not exceed twice the number of disks in the array.
Disk latencies vary based on the type of storage. General
guidelines:
Reads: Excellent < 8 msec, Good < 12 msec, Fair < 20 msec, Poor
> 20 msec;
Non-cached writes: Excellent < 8 msec, Good < 12 msec, Fair <
20 msec, Poor > 20 msec;
Cached writes: Excellent < 1 msec, Good < 2 msec, Fair < 4
msec, Poor > 4 msec
To show I/O requests and virtual file latency data per data/log file,
review sys.dm_io_virtual_file_stats in Appendix B –
Dynamic Management Views (DMVs).

Processor % Processor Time (total and per processor), % Privileged Time,


Processor Queue Length.
In general, % Processor Time for all processors should not exceed
80% and should be evenly distributed on all processors. %
Privileged Time should remain below 30% of total processor time.
You should try to keep Processor Queue Length below 4 per CPU.
Processor Queue Length for standard servers with long Quantums:
Excellent: <= 4 per CPU, Good: < 8 per CPU, Fair: < 12 per CPU

Network Bytes Received/sec, Bytes Sent/sec.


In general, these metrics should remain below 60% bandwidth
utilization.

SQL Server The SQL Server Buffer cache hit ratio should be > 90%. In OLTP
Counters applications, this ratio should exceed 95%. Use the PAL tool and
the SQL Server 2012 template for additional counters and related
thresholds.

o Note any Windows event log errors present after or during the monitored
period.

• Generate Content Server summary timing and connect logs.


Generate summary timing logs while you collect the operating system statistics
noted above. In addition, generate at least one day of Content Server connect
logs during the larger period covered by the summary timings, for as typical a
period of activity as possible.


Note that connect logging requires substantial space. Depending on the activity
level of the site, the connect log files may be 5 to 10 GB, so adequate disk space
should be planned. Content Server logs can be redirected to a different file
system if necessary. There is also an expected performance degradation of 10%
to 25% while connect logging is on. If the system is clustered, you should enable
connect logging on all front-end nodes.

• Use Trace Capture/Replay (Azure and SQL Server 2016) or collect SQL Server
profiling events (SQL Server 2012/2014) to trace files for periods of three to four
hours during core usage hours that fall within the monitored period.

• Use the Tuning template to restrict events captured to Stored Procedures –


RPC:Completed and TSQL - SQL:BatchCompleted. Ensure data columns
include Duration (data needs to be grouped by duration), Event Class, Textdata,
CPU, Writes, Reads, SPID. Don’t collect system events, and filter to only the
Content Server database ID. If the site is very active, you may also want to filter
duration > 2000 ms to limit the size of the trace logs and reduce overhead.

You can use SQL Server Extended Events to monitor activity. For more
information, see articles Extended Events and Convert an Existing SQL Trace
Script.
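An Extended Events session matching the filters described above (completed statements over 2000 ms, restricted to the Content Server database) might be sketched as follows. The session name is illustrative, the duration predicate is in microseconds, and <content_server_db_id> must be replaced with the database ID from sys.databases.

```sql
-- Capture completed statements longer than 2000 ms from the
-- Content Server database (session name is illustrative)
CREATE EVENT SESSION cs_slow_sql ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.sql_text)
    WHERE duration > 2000000
      AND sqlserver.database_id = <content_server_db_id>),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text)
    WHERE duration > 2000000
      AND sqlserver.database_id = <content_server_db_id>)
ADD TARGET package0.event_file (SET filename = N'cs_slow_sql');

ALTER EVENT SESSION cs_slow_sql ON SERVER STATE = START;
```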

• Obtain the results of a Content Server Level 5 database verification report (run
from the Content Server Administration page, Maintain Database section). To
speed up the queries involved in this verification, ensure there is an index
present on DVersData.ProviderID. Note that for a large site this may take
days to run. If there is a period of lower activity during the night or weekends,
that would be an ideal time to run this verification.

• Gather feedback from Content Server business users that summarizes any
current performance issues or operational failures that might be database-
related.

SQL Server Performance Monitoring Tools

Activity Monitor
The Activity Monitor offers real-time views of system activity and wait states. It lets
you drill down on specific slow or blocking queries. It also provides historical
information on Resource Waits, Data File I/O, and Expensive Queries.

The Activity Monitor is available in Microsoft SQL Server Management Studio


(SSMS) when you right click on the connected server.

This tool is not available for Azure databases within SSMS.


Azure Monitoring
To monitor Azure databases, you will need to log onto the Azure portal. From there,
you have many options available under Monitoring and Support + Troubleshooting.
For information on Azure Monitoring tools, see the articles: Microsoft Azure Monitor a
Cloud Service or Microsoft Azure Web Application Performance.

Management Data Warehouse


In SQL Server 2012/2014/2016, you can use the Management Data Warehouse to
collect performance data on system resources and query performance, and to report
historical data. Disk usage, query activity, and server activity are tracked by default;
user-defined collections are also supported. A set of graphical reports show data
from the collections and allows you to drill down to specific time periods. See the
article, Management Data Warehouse.


Identifying Worst-Performing SQL


There are several ways to identify poorly-performing SQL.

Content Server Connect Logs


Content Server connect logging (SQL Connect logs) generates logs that include
every Content Server transaction and the SQL statements it issues. The logs
provide timings in microseconds for each statement. Note that connect logs can
use substantial space if left enabled for significant periods of time, and
connect logging can add up to 25% performance overhead.
The connect logs can be found in Content Server Administration – Server
Configuration – Configure Log Settings. A log level of 3 – Debug will give you the
most detail about the SQL statements generated.
Please note: you may wish to raise the log level only while capturing a
performance issue. Set the level back to a lower level, or to 0 – Off, once
the issue has been captured.

Performance Analyzer
OpenText Performance Analyzer is a tool that works with Content Server Logs. This
tool allows you to open connect logs, see the percentage mixture, and see the
relative importance of your site’s usage profile.
The Raw Data tab allows you to sort the transactions by overall execution time and
SQL time. This will show you which Content Server transactions are taking the most
SQL time (not just the individual statements that they issue).
To see the individual SQL statements (from the Raw Data tab), right-click on the line
and select Show SQL. Performance Analyzer will display each SQL statement that
the transaction issues, how long each one took to execute, and how many rows were
affected.

Indexes Over Maximum Length


SQL Server has a maximum key length for its indexes. In SQL Server 2014 and
earlier it is 900 bytes. In Azure and SQL Server 2016 and later, clustered
indexes are limited to 900 bytes and nonclustered indexes to 1,700 bytes. For
more information see Index Key Size.
Run the query below to find if any of the Indexes in the database are over the
maximum.


Description Check for indexes whose key size exceeds the 900-byte SQL Server limit.

Sample SELECT ss.name schema_name,
st.name table_name, si.name index_name,
IIF(si.type_desc = 'CLUSTERED',
si.type_desc + IIF(si.is_unique = 0, '',
' - Unique'), IIF(si.is_unique = 0, '',
'Unique')) index_type,
IIF(si.is_primary_key = 1, 'Primary Key', '')
primary_key,
SUM(max_length) row_length,
COUNT(ic.index_id) column_count
FROM sys.tables st INNER JOIN (SELECT *
FROM sys.indexes WHERE type_desc != 'HEAP') si
ON st.object_id = si.object_id
INNER JOIN sys.schemas ss
ON st.schema_id = ss.schema_id
INNER JOIN sys.index_columns ic
ON si.object_id = ic.object_id
AND si.index_id = ic.index_id
INNER JOIN sys.columns sc
ON ic.object_id = sc.object_id
AND ic.column_id = sc.column_id
WHERE ss.name = '<schemaname>' -- schema owning the Content Server tables, typically dbo
AND ic.is_included_column = 0
GROUP BY ss.name, st.name, si.name, si.type_desc,
si.is_unique, si.is_primary_key
HAVING SUM(max_length) > 900
ORDER BY ss.name, st.name, si.name

Integrity Check
Corruption in the database can cause performance issues. OpenText suggests
running an integrity check on the database before and after performing a backup.
The DBCC CHECKDB command checks the logical and physical integrity of all
objects in the specified database. Follow the suggestions made by Microsoft.

SQL Server DMVs


The SQL Server sys.dm_exec_query_stats DMV collects performance data
about cached query plans. Appendix B – Dynamic Management Views (DMVs)
includes a sample query that returns the top 500 queries ordered by total elapsed
time, along with various metrics about physical and logical reads and writes. Modify
the query as needed, to alter the number of queries returned, and to order the results
by the desired column. By default, this data is aggregated over the period since SQL
Server was last restarted.
As described in SQL Server Performance Monitoring Tools, the Management Data
Warehouse can store this and other performance-related data to allow drilling
down on specific time periods.


Locking

Lock Escalation
Description Some bulk operations, such as copying or moving a large subtree,
or changing permissions on a tree, can cause SQL Server resource
thresholds to be exceeded. Lock escalation is triggered when one
of the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000
locks on a single non-partitioned table or index.
• A single Transact-SQL statement acquires at least 5,000
locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to
AUTO.
• The number of locks in an instance of the Database
Engine exceeds memory or configuration thresholds. (The
thresholds vary depending on memory usage and the
Locks server setting.)
Although escalation to a coarser-grained lock can free
resources, it also reduces concurrency: other sessions
accessing the same tables and indexes can be put in a
wait state, degrading performance.

Default Locks defaults to 0. This means that lock escalation occurs when
memory used by lock objects is 24% of the memory used by the
database engine.
All objects have a default lock escalation value of TABLE. That
means when lock escalation is triggered, it is done at the table level.

Recommendation Use the lock escalation DMV example in Appendix B – Dynamic


Management Views (DMVs) to monitor for lock escalation attempts
and successes.
For objects experiencing frequent lock escalations, consider using
the SET LOCK_ESCALATION clause in the ALTER TABLE
statement to change the escalation algorithm from the default
TABLE to either AUTO or DISABLE.
• AUTO means that escalation can happen at the partition
level of a partitioned table, and thus not affect concurrency
on other partitions. (This introduces the potential of
deadlocks - when transactions locking different partitions
each want to expand an exclusive lock to the other
partitions.)
• DISABLE does not guarantee that no escalation will occur.
It puts the thresholds much higher, so that only a stress on
memory resources will trigger escalation.
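Changing the escalation behavior for a table that shows frequent escalations can be sketched as below; the table name is illustrative, and the trade-offs of AUTO versus DISABLE are as described above.

```sql
-- Escalate at the partition level for a partitioned table
ALTER TABLE dbo.LLAttrData SET (LOCK_ESCALATION = AUTO);

-- Or raise the escalation thresholds substantially
ALTER TABLE dbo.LLAttrData SET (LOCK_ESCALATION = DISABLE);
```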

Notes For a description of the Lock Escalation process in SQL Server, see
the article, Lock Escalation (Database Engine).

Permissions Changing this setting requires the ALTER permission on the table.


Transaction Isolation
Description When snapshot isolation is enabled, all statements see a snapshot
of data as it existed at the start of the transaction. This reduces
blocking contention and improves concurrency since readers do not
block writers and vice-versa. It also reduces the potential for
deadlocks. See the article, Snapshot Isolation in SQL Server.

Default In Content Server 10.5 and later, ALLOW_SNAPSHOT_ISOLATION
and READ_COMMITTED_SNAPSHOT are both automatically set to
ON for the Content Server database.

Recommendation To manually set ALLOW_SNAPSHOT_ISOLATION and
READ_COMMITTED_SNAPSHOT, use the following commands in SQL
Server Management Studio:
ALTER DATABASE <Content_Server_DB>
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE <Content_Server_DB>
SET READ_COMMITTED_SNAPSHOT ON
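Note that SET READ_COMMITTED_SNAPSHOT ON requires that no other connections are active in the database. The current settings can be verified with a query such as the following (a sketch; substitute your database name):

```sql
-- Verify the isolation settings for the Content Server database
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'<Content_Server_DB>';
```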

Permissions Changing this setting requires the ALTER permission on the
database.


Appendices

Appendix A – References
Buffer Manager Object
https://msdn.microsoft.com/en-us/library/ms189628.aspx

Compute Capacity Limits by Edition of SQL Server:
https://msdn.microsoft.com/en-us/library/ms143760(v=sql.130).aspx

Disk Partition Alignment Best Practices for SQL Server:
https://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
https://msdn.microsoft.com/en-us/library/ms190787.aspx

SQL Server Indexes:
https://msdn.microsoft.com/en-us/library/ms175049.aspx

SQL Server Lock Escalation:
https://technet.microsoft.com/en-us/library/ms184286(v=sql.105).aspx

SQL Server Resource Governor:
https://msdn.microsoft.com/en-us/library/bb933866(v=sql.130).aspx

SQL Server Statistics:
https://msdn.microsoft.com/en-us/library/ms190397(v=sql.130).aspx

Tools
Diskspd Utility (SQLIO)
https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223

PAL Tool
https://pal.codeplex.com/

For additional guidance and help, please join the community of experts:

Content Server Discussion Forum

Performance Engineering Forum


Appendix B – Dynamic Management Views (DMVs)


SQL Server Dynamic Management Views (DMVs) provide information used to
monitor the health of the server, diagnose problems, and tune performance. Server-
scoped DMVs retrieve server-wide information and require VIEW SERVER STATE
permission to access. Database-scoped DMVs retrieve database information and
require VIEW DATABASE STATE permission.
This appendix provides a description of some DMVs that may be helpful for
monitoring SQL Server performance, along with samples for querying those DMVs.
All procedures and sample code in this appendix are delivered as is, and are for
educational purposes only. They are presented as a guide to supplement official
OpenText product documentation.


Blocking sessions
Description Methods for viewing blocking sessions and their cause.
SSMS also provides built-in reports and views:
Reports – Standard Reports – Activity – All Blocking Transactions
Management – Extended Events – Sessions – Watch Live Data
You can also create your own Blocked Process Report – see below.
For more information, see Identify the cause of SQL Server blocking.

Sample --view any pending changes
SELECT * FROM sys.configurations WHERE value <>
value_in_use;
GO

exec sp_configure 'show advanced options', 1;
GO
RECONFIGURE
GO

--set blocking threshold (should be 5 seconds or higher)
exec sp_configure 'blocked process threshold (s)', 20;
GO
RECONFIGURE
GO

CREATE EVENT SESSION [BlockedProcessReport] ON SERVER


ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.ring_buffer(SET
max_events_limit=(0), max_memory=(2048))
WITH (MAX_MEMORY=4096 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,
MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO
ALTER EVENT SESSION [BlockedProcessReport] ON SERVER
STATE=START
GO

--Clean up (turn off the blocked process report when
--you're not using it):
SELECT * FROM sys.traces; --Make sure your trace is gone
GO

--view any pending changes
SELECT * FROM sys.configurations WHERE value <>
value_in_use;
GO

exec sp_configure 'blocked process threshold (s)', 0;
GO
RECONFIGURE
GO

exec sp_configure 'blocked process threshold (s)';
GO
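The cleanup above resets the threshold but leaves the Extended Events session in place. Assuming the session name created earlier, it can be stopped and removed as follows:

```sql
--stop and remove the Extended Events session created above
ALTER EVENT SESSION [BlockedProcessReport] ON SERVER STATE = STOP;
GO
DROP EVENT SESSION [BlockedProcessReport] ON SERVER;
GO
```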


Cached Query Plans (sys.dm_exec_cached_plans)


Description Contains one row per query plan in the cache, showing the amount
of memory used and the re-use count.

Sample Show total plan count and memory usage, highlighting single-use
plans:
SELECT objtype AS [CacheType],
count_big(*) AS [Total Plans],
sum(cast(size_in_bytes as decimal(18,2)))
/1024/1024 AS [Total MBs],
avg(usecounts) AS [Avg Use Count],
sum(cast((CASE WHEN usecounts = 1
THEN size_in_bytes ELSE 0 END)
AS decimal(18,2)))/1024/1024
AS [Total MBs - USE Count 1],
sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC

Database Memory (sys.dm_os_process_memory)


Description Returns a list of all the memory sizes in MB for the database server.
Sample SELECT physical_memory_in_use_kb/1024 AS
physical_memory_in_use_MB,
large_page_allocations_kb/1024 AS
large_page_allocations_MB,
locked_page_allocations_kb/1024 AS
locked_page_allocations_MB,
virtual_address_space_reserved_kb/1024 AS
VAS_reserved_MB,
virtual_address_space_committed_kb/1024 AS
VAS_committed_MB,
virtual_address_space_available_kb/1024 AS
VAS_available_MB,
page_fault_count AS page_fault_count,
memory_utilization_percentage AS
memory_utilization_percentage,
process_physical_memory_low AS
process_physical_memory_low,
process_virtual_memory_low AS
process_virtual_memory_low
FROM sys.dm_os_process_memory;


Index Usage (sys.dm_db_index_usage_stats)


Description Returns counts of different types of operations on indexes.
Sample SELECT DB_NAME(ddius.database_id) AS [db_name],
OBJECT_NAME(ddius.object_id) AS tb_name,
ind.name AS ind_name,
ddius.*
FROM sys.dm_db_index_usage_stats AS ddius
INNER JOIN sys.indexes AS ind
ON ddius.index_id = ind.index_id
AND ddius.object_id = ind.object_id
WHERE ddius.database_id = DB_ID()
ORDER BY tb_name

Lock Escalations (sys.dm_db_index_operational_stats)


Description Returns a variety of low-level information about table and index
access. This sample shows lock escalation attempts and
successes for each object in a database.
Sample SELECT DB_NAME(dios.database_id) AS db_name,
OBJECT_NAME(dios.object_id) AS object_name,
ind.name AS ind_name,
dios.partition_number,
dios.index_lock_promotion_attempt_count,
dios.index_lock_promotion_count,
(cast(dios.index_lock_promotion_count AS real) /
dios.index_lock_promotion_attempt_count)
AS percent_success
FROM sys.dm_db_index_operational_stats(
DB_ID(), NULL, NULL, NULL) dios
INNER JOIN sys.indexes ind
ON dios.object_id = ind.object_id
AND dios.index_id = ind.index_id
WHERE dios.database_id = DB_ID()
AND dios.index_lock_promotion_count > 0
ORDER BY index_lock_promotion_count DESC


PAGELATCH_XX Waits
Description The following script can help you identify active tasks that are
blocked on tempDB PAGELATCH_XX waits.
Sample SELECT session_id, wait_type, wait_duration_ms,
blocking_session_id, resource_description,
ResourceType = CASE
WHEN Cast(Right(resource_description,
Len(resource_description) -
Charindex(':', resource_description, 3))
AS Int) - 1 % 8088 = 0 THEN 'Is PFS Page'
WHEN Cast(Right(resource_description,
Len(resource_description) -
Charindex(':', resource_description, 3))
AS Int) - 2 % 511232 = 0 THEN 'Is GAM Page'
WHEN Cast(Right(resource_description,
Len(resource_description) -
Charindex(':', resource_description, 3))
AS Int) - 3 % 511232 = 0 THEN 'Is SGAM Page'
ELSE 'Is Not PFS, GAM, or SGAM page'
END
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
AND resource_description LIKE '2:%'


Performance of Cached Query Plans (sys.dm_exec_query_stats)


Description Shows aggregate performance statistics for cached query plans.

Sample SELECT TOP 500 -- change as needed for top X
-- dbid and objectid are NULL for ad hoc and prepared batches
DB_Name(qp.dbid) dbname, qp.dbid, qp.objectid,
qp.number, qt.text,
SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1)
AS statement_text,
qs.creation_time, qs.last_execution_time,
qs.execution_count,
qs.total_worker_time/qs.execution_count
AS avg_worker_time,
qs.total_physical_reads/qs.execution_count
AS avg_physical_reads,
qs.total_logical_reads/qs.execution_count
AS avg_logical_reads,
qs.total_logical_writes/qs.execution_count
AS avg_logical_writes,
qs.total_elapsed_time/qs.execution_count
AS avg_elapsed_time,
qs.total_clr_time/qs.execution_count
AS avg_clr_time,
qs.total_worker_time, qs.last_worker_time,
qs.min_worker_time, qs.max_worker_time,
qs.total_physical_reads, qs.last_physical_reads,
qs.min_physical_reads, qs.max_physical_reads,
qs.total_logical_reads, qs.last_logical_reads,
qs.min_logical_reads, qs.max_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.min_logical_writes, qs.max_logical_writes,
qs.total_elapsed_time, qs.last_elapsed_time,
qs.min_elapsed_time, qs.max_elapsed_time,
qs.total_clr_time, qs.last_clr_time,
qs.min_clr_time, qs.max_clr_time,
qs.plan_generation_num, qp.encrypted, qs.total_rows,
qs.total_rows/qs.execution_count AS average_rows,
qs.last_rows, qs.min_rows, qs.max_rows,
qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle)
AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_elapsed_time DESC --Duration

WHERE       --sample WHERE clauses
Alternates  WHERE last_execution_time > '20070507 15:00'
WHERE execution_count = 1
WHERE SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1)
LIKE '%MyText%'


ORDER BY    --alternate ORDER BY clauses
Alternates  ORDER BY qs.execution_count DESC --Frequency
ORDER BY qs.total_worker_time DESC --CPU
ORDER BY qs.total_logical_reads DESC --Reads
ORDER BY qs.total_logical_writes DESC --Writes
ORDER BY qs.total_physical_reads DESC --PhysicalReads
ORDER BY avg_worker_time DESC --AvgCPU
ORDER BY avg_elapsed_time DESC --AvgDurn
ORDER BY avg_logical_reads DESC --AvgReads
ORDER BY avg_logical_writes DESC --AvgWrites
ORDER BY avg_physical_reads DESC --AvgPhysicalReads

Queries using Parallelism
Description Search the plan cache for existing parallel plans and see the cost
associations to these plans.

Sample SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET QUOTED_IDENTIFIER ON;
WITH XMLNAMESPACES
(DEFAULT
'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT query_plan AS CompleteQueryPlan,
n.value('(@StatementText)[1]', 'VARCHAR(4000)')
AS StatementText,
n.value('(@StatementOptmLevel)[1]', 'VARCHAR(25)')
AS StatementOptimizationLevel,
n.value('(@StatementSubTreeCost)[1]','VARCHAR(128)')
AS StatementSubTreeCost,
n.query('.') AS ParallelSubTreeXML, ecp.usecounts,
ecp.size_in_bytes
FROM sys.dm_exec_cached_plans AS ecp
CROSS APPLY sys.dm_exec_query_plan(plan_handle) eqp
CROSS APPLY query_plan.nodes('/ShowPlanXML/
BatchSequence/Batch/Statements/StmtSimple') qn(n)
WHERE n.query('.').exist(
'//RelOp[@PhysicalOp="Parallelism"]') = 1

Notes This DMV query shows data about parallel cached query plans,
including their cost and number of executions. It can help
identify a new cost threshold for parallelism setting that strikes
a balance between letting longer queries use parallelism and
avoiding the overhead for shorter queries.
Note that the cost threshold for parallelism is compared to the serial
plan cost for a query when determining whether to use a parallel
plan.
The above DMV query shows the cost of the generated parallel plan
and is typically different (smaller) than the serial plan cost.
Consider the parallel plan costs as a general guideline towards
setting cost threshold for parallelism.
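As a sketch, once a candidate value has been identified (50 below is illustrative only, not a recommendation), the setting can be changed as follows:

```sql
-- 'cost threshold for parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```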


SQL Server and Database Information Queries


Description The following queries return information about SQL Server and database
configuration, which can be helpful when investigating issues, or as part
of a benchmark exercise to document the state of the system.
Column Collation different from DB:
SELECT object_name(object_id), name, collation_name
FROM sys.columns
WHERE collation_name IS NOT NULL
AND object_name(object_id) NOT LIKE 'sys%'
AND object_name(object_id) NOT LIKE 'sql%'
AND collation_name NOT IN (SELECT collation_name
FROM sys.databases WHERE name = 'tempdb')
ORDER BY object_name(object_id)
DB Collation, Recovery Model, Snapshot Isolation:
SELECT name, snapshot_isolation_state_desc,
CASE is_read_committed_snapshot_on
WHEN 0 THEN 'OFF' WHEN 1 THEN 'ON'
END AS is_read_committed_snapshot_on,
recovery_model, recovery_model_desc, collation_name
FROM sys.databases
ORDER BY name
DB Row Count, Data Size, Index Size:
SELECT object_schema_name(object_id) + '.' +
object_name(object_id) AS [name], row_count,
8*SUM(CASE WHEN index_id < 2
THEN in_row_data_page_count + lob_used_page_count +
row_overflow_used_page_count
ELSE lob_used_page_count +
row_overflow_used_page_count
END) AS data_size,
8*(SUM(used_page_count) -
SUM(CASE WHEN index_id < 2
THEN in_row_data_page_count + lob_used_page_count +
row_overflow_used_page_count
ELSE lob_used_page_count +
row_overflow_used_page_count
END)) AS index_size
FROM sys.dm_db_partition_stats
WHERE object_schema_name(object_id) != 'sys'
GROUP BY object_id, row_count
ORDER BY data_size DESC, index_size DESC
DB File Size:
SELECT db_name(database_id) DBName, [name],
physical_name 'FileName',
max_size * 8/1024 'Max Size(MB)',
size * 8/1024 'Size(MB)',
FILEPROPERTY([name], 'SpaceUsed')
* 8/1024 'UsedSpace(MB)',
(size - FILEPROPERTY([name], 'SpaceUsed')) * 8/1024
'AvailableFreeSpace(MB)'
FROM sys.master_files
WHERE type = 0
AND database_id = DB_ID()
AND drop_lsn IS NULL

Full Version SELECT @@VERSION


TempDB Growth:
SELECT name AS FileName, size*1.0/128 AS FileSizeinMB,
CASE max_size
WHEN 0 THEN 'Autogrowth is off.'
WHEN -1 THEN 'Autogrowth is on.'
ELSE 'Log file will grow to a maximum size of 2 TB.'
END AutogrowthStatus, growth AS 'GrowthValue',
CASE WHEN growth = 0
THEN 'Size is fixed and will not grow.'
WHEN growth > 0 AND is_percent_growth = 0
THEN 'Growth value is in 8-KB pages.'
ELSE 'Growth value is a percentage.'
END GrowthIncrement
FROM tempdb.sys.database_files

Table and index size and fragmentation (sys.dm_db_index_physical_stats,
sys.dm_db_stats_properties)
Description The first sample below returns size and fragmentation information
for each table and index.
The second sample returns the most frequently modified database
objects.
The last sample generates ALTER INDEX commands for indexes
with more than 1000 pages and fragmentation greater than 5%.

Fragmentation and size information for each table and index:
SELECT dbs.[name] AS 'Schema', dbt.[name] AS 'Table',
dbi.[name] as 'Index', dbi.index_id as IdxID,
CAST(indstat.avg_fragmentation_in_percent
AS DECIMAL(6,2)) AS 'Percent',
indstat.page_count,
REPLACE(index_type_desc,' index','') AS IndxType,
fragment_count AS Fragments, index_depth AS IdxDeep,
CAST(avg_fragment_size_in_pages AS DECIMAL(10,2))
AS AvgFragSize
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL,
NULL, NULL, NULL) AS indstat
INNER JOIN sys.tables dbt
ON dbt.[object_id] = indstat.[object_id]
INNER JOIN sys.schemas dbs
ON dbt.[schema_id] = dbs.[schema_id]
INNER JOIN sys.indexes dbi
ON dbi.[object_id] = indstat.[object_id]
AND indstat.index_id = dbi.index_id
WHERE indstat.database_id = DB_ID()
AND CAST(indstat.avg_fragmentation_in_percent
AS DECIMAL(6,2)) > 0
ORDER BY indstat.avg_fragmentation_in_percent DESC

Returning statistics properties for frequently modified objects:
SELECT obj.name, obj.object_id, stat.name,
stat.stats_id, last_updated, modification_counter
FROM sys.objects AS obj
INNER JOIN sys.stats AS stat
ON stat.object_id = obj.object_id
CROSS APPLY sys.dm_db_stats_properties
(stat.object_id, stat.stats_id) AS sp
WHERE modification_counter > 1000
ORDER BY last_updated ASC


Generate ALTER INDEX for indexes larger than 1000 pages with more
than 5% fragmentation:
SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
-- Conditionally select tables and indexes from the
-- sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names
SELECT object_id, index_id, partition_number,
avg_fragmentation_in_percent
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL,
NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 5.0
AND index_id > 0
AND page_count > 1000;
-- Declare the cursor for the list of partitions
-- to be processed
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;
-- Open the cursor
OPEN partitions;
-- Loop through the partitions
WHILE (1=1) BEGIN;
FETCH NEXT FROM partitions
INTO @objectid, @indexid, @partitionnum, @frag;

IF @@FETCH_STATUS < 0 BREAK;

SELECT @objectname = QUOTENAME(o.name),


@schemaname = QUOTENAME(s.name)
FROM sys.objects AS o
JOIN sys.schemas as s
ON s.schema_id = o.schema_id
WHERE o.object_id = @objectid;

SELECT @indexname = QUOTENAME(name)


FROM sys.indexes
WHERE object_id = @objectid AND index_id = @indexid;

SELECT @partitioncount = count (*)


FROM sys.partitions
WHERE object_id = @objectid AND index_id = @indexid;

-- 30 is an arbitrary decision point at which to switch


-- between reorganizing and rebuilding
IF @frag < 5.0
SET @command = '';
IF @frag < 30.0
SET @command = N'ALTER INDEX ' + @indexname +
N' ON ' + @schemaname + N'.' + @objectname +
N' REORGANIZE';
IF @frag >= 30.0
SET @command = N'ALTER INDEX ' + @indexname +
N' ON ' + @schemaname + N'.' + @objectname +
N' REBUILD';


IF @partitioncount > 1
SET @command = @command + N' PARTITION=' +
CAST(@partitionnum AS nvarchar(10));
-- EXEC (@command);
IF LEN( @command ) > 0
PRINT @command;
END;
-- Close and deallocate the cursor
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table
DROP TABLE #work_to_do;
--GO

Virtual File Latency (sys.dm_io_virtual_file_stats)


Description For each data and log file, shows aggregate data about number,
average size, and latency of reads and writes.
Sample SELECT -- @CaptureID,
GETDATE(),
CASE WHEN num_of_reads = 0 THEN 0
ELSE (io_stall_read_ms/num_of_reads)
END ReadLatency,
CASE WHEN num_of_writes = 0 THEN 0
ELSE (io_stall_write_ms/num_of_writes)
END WriteLatency,
CASE WHEN (num_of_reads = 0 AND num_of_writes = 0)
THEN 0
ELSE (io_stall/(num_of_reads + num_of_writes))
END Latency,
CASE WHEN num_of_reads = 0 THEN 0
ELSE (num_of_bytes_read/num_of_reads)
END AvgBPerRead, --avg bytes per IOP
CASE WHEN num_of_writes = 0 THEN 0
ELSE (num_of_bytes_written/num_of_writes)
END AvgBPerWrite,
CASE WHEN (num_of_reads = 0 AND num_of_writes = 0)
THEN 0
ELSE ((num_of_bytes_read + num_of_bytes_written)/
(num_of_reads + num_of_writes))
END AvgBPerTransfer,
LEFT(mf.physical_name,2) Drive,
DB_NAME(vfs.database_id) DB, vfs.database_id,
vfs.file_id, vfs.sample_ms, vfs.num_of_reads,
vfs.num_of_bytes_read, vfs.io_stall_read_ms,
vfs.num_of_writes, vfs.num_of_bytes_written,
vfs.io_stall_write_ms, vfs.io_stall,
vfs.size_on_disk_bytes/1024/1024 size_on_disk_MB,
vfs.file_handle, mf.physical_name
FROM sys.dm_io_virtual_file_stats(NULL,NULL) AS vfs
JOIN sys.master_files mf
ON vfs.database_id = mf.database_id
AND vfs.file_id = mf.file_id
ORDER BY Latency DESC


Waits (sys.dm_os_wait_stats)
Description Shows aggregate time spent on different wait categories.
Sample SELECT wait_type, wait_time_ms, waiting_tasks_count,
max_wait_time_ms, signal_wait_time_ms,
wait_time_ms/waiting_tasks_count AS AvgWaitTimems
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms desc

Notes Consider excluding wait types that do not impact user query
performance. See details about wait statistics in article
sys.dm_os_wait_stats and in the SQL Skills blog post, SQL Server
Wait Statistics.
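For example, a sketch that filters out a few commonly benign wait types (the exclusion list is illustrative, not exhaustive):

```sql
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
AND wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE', N'LOGMGR_QUEUE',
N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP',
N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'BROKER_TO_FLUSH')
ORDER BY wait_time_ms DESC
```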


Appendix C – Collation Fix


To avoid potential issues, OpenText recommends that the Content Server database
collation match the tempDB collation. For more information, see Collation.
To check for a collation mismatch, run the Level 1 Diagnostic – Verify Collation from
Verify Content Server Database in the Database Administration – Maintain
Current Database section of the Content Server Administration page. For more
queries on database collations, see SQL Server and Database Information
Queries.
There are several options (all of these require outages):
1. Correct the collation when upgrading: move to a database server whose
collation matches both the database and tempDB
2. Rebuild the database server and change the collation to match the tempDB
3. Restart the service using a new collation
a. Detach all non-System Databases
b. Stop SQL Server service
c. In cmd type: net start "SQL Server (MSSQLSERVER)" /m /T4022
/T3659 /q"<collation_name>"
d. Stop and Start SQL Server Service
e. Verify collation on tempDB
f. Attach all databases
g. Verify collation on databases
4. Move to another database server whose tempDB collation matches the
Content Server database
The steps below show how to change the collation on a column in a table:
1. Create a new table (<tablename_new>) with the columns set to the correct
collation and import the data from the old mixed-collation table
2. Drop the old table <tablename>
3. Rename the new table to the old table name and rebuild the indexes
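A sketch of those steps follows; the table, column, and collation names are placeholders, not part of the Content Server schema:

```sql
-- 1. Create the new table with the correct collation and copy the data
CREATE TABLE dbo.MyTable_new (
    ID int NOT NULL,
    Name nvarchar(255) COLLATE Latin1_General_CI_AS NULL
);
INSERT INTO dbo.MyTable_new (ID, Name)
SELECT ID, Name FROM dbo.MyTable;

-- 2. Drop the old table
DROP TABLE dbo.MyTable;

-- 3. Rename the new table, then recreate or rebuild indexes as needed
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';
```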


Appendix D – When to Update Statistics and Rebuild Indexes
The query optimizer uses statistics to create query plans, so accurate statistics
help database query performance.
Statistics are created or updated when an index is created or rebuilt, when
AUTO_UPDATE_STATISTICS is ON (triggered when enough row modifications
accumulate), or when statistics are manually updated using UPDATE STATISTICS
or sp_updatestats. See Statistics and Table and Index Fragmentation, Fill
factor.
Microsoft recommends not updating statistics too frequently, because of the
performance tradeoff between improving query plans and the time it takes to
recompile queries.
Microsoft notes that indexes with very low fragmentation (less than 5 percent)
do not benefit from a rebuild, and recommends rebuilding indexes with
fragmentation greater than 30 percent. Note that small indexes can show high
fragmentation because their pages may be stored on mixed extents, and
fragmentation in a small index might not be reduced by rebuilding it.
Due to the high-volume ingestion in Content Server, fragmentation of an index can
occur frequently – suggesting the need to rebuild indexes (and update statistics).
To determine when to rebuild an index, these factors should be taken into
consideration:
• Table and index size and fragmentation (sys.dm_db_index_physical_stats,
sys.dm_db_stats_properties) queries should be run to obtain trends on the
database objects
• Time it takes to Reorganize or Rebuild the fragmented indexes
• Time it takes to recompile the queries associated with the indexes requiring
rebuild
• AUTO_UPDATE_STATISTICS is running (refer to the query run above for
the trend of the statistics)
• UPDATE STATISTICS or sp_updatestats with FULLSCAN or SAMPLE
100 PERCENT (see article Update Statistics)
• Check overall database performance
• Update statistics, using the query below, before rebuilding indexes
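As an illustration of the FULLSCAN option mentioned above (the table name is a placeholder):

```sql
-- A full scan produces the most accurate statistics,
-- at the cost of a longer-running update
UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN;
```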

NOTE: It is not recommended that you manually update statistics
or rebuild indexes on a daily basis.


Description Get a list of objects where the statistics may need to be updated.
This query is based on the large data tables where the suggested
percent to update statistics is 20% - this number can be changed.
This query will return a SQL statement for each object so that you
can update the statistics.
Sample SELECT rowmodcounter.modPercent, ss.name schema_name,
st.name table_name, sst.name stats_name,
sst.auto_created, sst.no_recompute,
sp.last_updated, sp.rows, sp.rows_sampled, sp.steps,
sp.unfiltered_rows, sp.modification_counter,
sampleRate = (1.0 * sp.rows_sampled / sp.rows)
* 100,
'UPDATE STATISTICS ' + ss.name + '.' + st.name +
'(' + sst.name + ')'
FROM sys.stats sst
CROSS APPLY sys.dm_db_stats_properties(sst.object_id,
sst.stats_id) sp
INNER JOIN sys.tables st
ON sst.object_id = st.object_id
INNER JOIN sys.schemas ss
ON st.schema_id = ss.schema_id
CROSS APPLY (SELECT (1.0 * sp.modification_counter /
NULLIF(sp.rows, 0)) * 100) AS
rowmodcounter(modPercent)
WHERE ss.name = '<schema>'
AND rowmodcounter.modPercent >= 20
ORDER BY rowmodcounter.modPercent DESC;


Appendix E – When to Flush the Cache


Flushing the cache clears out the plan cache and will cause stored procedures, etc.
to be recompiled.
If you have a “bad plan” for a SQL query causing poor performance, flushing the
cache will release this plan allowing a new and possibly better plan to be used.
However, flushing the cache will cause a temporary decrease in query performance
and a spike in the CPU as all query plans will be new.
If there is a performance issue, investigate its cause before considering flushing
the cache. This should be done with a Content Server support person.
Flushing the cache after an upgrade may also be considered, as there may be
performance improvements in the newer version of Content Server. Again, this
should be done with a Content Server support person (or representative).
If it has been decided that flushing the cache may help create a better plan for a poor
performing SQL query, then a decision needs to be made on how the cache will be
flushed.
The commands to flush the cache are DBCC FREEPROCCACHE and DBCC
FREESYSTEMCACHE. Both require the ALTER SERVER STATE permission (in Azure
SQL Database, membership in the db_owner database role).
There are several ways to flush the cache. Choose the correct one for your situation.
• Remove all elements for the entire instance
DBCC FREEPROCCACHE;

• Remove all elements and suppress the regular completion message


DBCC FREEPROCCACHE WITH NO_INFOMSGS;

• Remove the ad hoc and prepared plan cache for the entire instance
DBCC FREESYSTEMCACHE ('SQL Plans');

• Flush the plan cache for a resource pool


Find all the resource pools on a SQL Server
SELECT name, statistics_start_time,
cache_memory_kb/1024, used_memory_kb/1024
FROM sys.dm_resource_governor_resource_pools;
Free the resource pool cache
DBCC FREEPROCCACHE ('<name>');

• Flush the ad hoc and prepared plan cache for a resource pool
Find all the resource pools on a SQL Server (query above)
Free the resource pool cache
DBCC FREESYSTEMCACHE ('SQL Plans', '<name>');

• Remove all query plan caches from the Compute nodes. This can be done
with or without the regular completion message (WITH NO_INFOMSGS).
USE <databasename>
DBCC FREEPROCCACHE (COMPUTE);


• Remove all elements for one database. Note: this will not work in Azure
Get the database id
SELECT name, dbid
FROM master.dbo.sysdatabases
WHERE name = N'<databasename>';
Flush the plan cache for that database
DBCC FLUSHPROCINDB (<dbid>);

• Remove all the plan cache for the current database. This is only for SQL
Server 2016 and Azure.
USE <databasename>
ALTER DATABASE SCOPED CONFIGURATION
CLEAR PROCEDURE_CACHE;

• Remove one query plan from the cache
Find the plan_handle for a query. This is best done just after a query has
been executed.
SELECT plan_handle, st.text
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE text LIKE N'<beginning_of_query>%';
Remove the specific plan from the cache
DBCC FREEPROCCACHE (<plan_handle>);


About OpenText
OpenText enables the digital world, creating a better way for organizations to work with information, on premises or in the
cloud. For more information about OpenText (NASDAQ: OTEX, TSX: OTC) visit opentext.com.
Connect with us:

OpenText CEO Mark Barrenechea’s blog


Twitter | LinkedIn

www.opentext.com/contact
Copyright © 2020 Open Text SA or Open Text ULC (in Canada).
All rights reserved. Trademarks owned by Open Text SA or Open Text ULC (in Canada).
