High Performance BI


DEPLOYING MICROSTRATEGY HIGH PERFORMANCE BI

Course Guide
Version: HPBI-921-Oct11-Color
© 2000-2011 MicroStrategy, Incorporated. All rights reserved.
This Course (course and course materials) and any Software are provided "as is" and without express or limited
warranty of any kind by either MicroStrategy, Inc. or anyone who has been involved in the creation, production, or
distribution of the Course or Software, including, but not limited to, the implied warranties of merchantability and
fitness for a particular purpose. The entire risk as to the quality and performance of the Course and Software is with
you. Should the Course or Software prove defective, you (and not MicroStrategy, Inc. or anyone else who has been
involved with the creation, production, or distribution of the Course or Software) assume the entire cost of all
necessary servicing, repair, or correction.
In no event will MicroStrategy, Inc. or any other person involved with the creation, production, or distribution of the
Course or Software be liable to you on account of any claim for damage, including any lost profits, lost savings, or other
special, incidental, consequential, or exemplary damages, including but not limited to any damages assessed against or
paid by you to any third party, arising from the use, inability to use, quality, or performance of such Course and
Software, even if MicroStrategy, Inc. or any such other person or entity has been advised of the possibility of such
damages, or for the claim by any other party. In addition, MicroStrategy, Inc. or any other person involved in the
creation, production, or distribution of the Course and Software shall not be liable for any claim by you or any other
party for damages arising from the use, inability to use, quality, or performance of such Course and Software, based
upon principles of contract warranty, negligence, strict liability for the negligence of indemnity or contribution, the
failure of any remedy to achieve its essential purpose, or otherwise.
The information contained in this Course and the Software are copyrighted and all rights are reserved by
MicroStrategy, Inc. MicroStrategy, Inc. reserves the right to make periodic modifications to the Course or the Software
without obligation to notify any person or entity of such revision. Copying, duplicating, selling, or otherwise
distributing any part of the Course or Software without prior written consent of an authorized representative of
MicroStrategy, Inc. are prohibited. U.S. Government Restricted Rights. It is acknowledged that the Course and
Software were developed at private expense, that no part is public domain, and that the Course and Software are
Commercial Computer Software provided with RESTRICTED RIGHTS under Federal Acquisition Regulations and
agency supplements to them. Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set
forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFAR 252.227-7013
et seq. or subparagraphs (c)(1) and (2) of the Commercial Computer Software-Restricted Rights at FAR 52.227-19, as
applicable. Contractor is MicroStrategy, Inc., 1850 Towers Crescent Plaza, Tysons Corner, Virginia 22182. Rights are
reserved under copyright laws of the United States with respect to unpublished portions of the Software.
Copyright Information
All Contents Copyright © 2011 MicroStrategy Incorporated. All Rights Reserved.
Trademark Information
MicroStrategy, MicroStrategy 6, MicroStrategy 7, MicroStrategy 7i, MicroStrategy 7i Evaluation Edition,
MicroStrategy 7i Olap Services, MicroStrategy 8, MicroStrategy 9, MicroStrategy Distribution Services, MicroStrategy
MultiSource Option, MicroStrategy Command Manager, MicroStrategy Enterprise Manager, MicroStrategy Object
Manager, MicroStrategy Reporting Suite, MicroStrategy Power User, MicroStrategy Analyst, MicroStrategy Consumer,
MicroStrategy Email Delivery, MicroStrategy BI Author, MicroStrategy BI Modeler, MicroStrategy Evaluation Edition,
MicroStrategy Administrator, MicroStrategy Agent, MicroStrategy Architect, MicroStrategy BI Developer Kit,
MicroStrategy Broadcast Server, MicroStrategy Broadcaster, MicroStrategy Broadcaster Server, MicroStrategy
Business Intelligence Platform, MicroStrategy Consulting, MicroStrategy CRM Applications, MicroStrategy Customer
Analyzer, MicroStrategy Desktop, MicroStrategy Desktop Analyst, MicroStrategy Desktop Designer, MicroStrategy
eCRM 7, MicroStrategy Education, MicroStrategy eTrainer, MicroStrategy Executive, MicroStrategy Infocenter,
MicroStrategy Intelligence Server, MicroStrategy Intelligence Server Universal Edition, MicroStrategy MDX Adapter,
MicroStrategy Narrowcast Server, MicroStrategy Objects, MicroStrategy OLAP Provider, MicroStrategy SDK,
MicroStrategy Support, MicroStrategy Telecaster, MicroStrategy Transactor, MicroStrategy Web, MicroStrategy Web
Business Analyzer, MicroStrategy World, Application Development and Sophisticated Analysis, Best In Business
Intelligence, Centralized Application Management, Information Like Water, Intelligence Through Every Phone,
Intelligence To Every Decision Maker, Intelligent E-Business, Personalized Intelligence Portal, Query Tone, Rapid
Application Development, MicroStrategy Intelligent Cubes, The Foundation For Intelligent E-Business, The Integrated
Business Intelligence Platform Built For The Enterprise, The Platform For Intelligent E-Business, The Scalable
Business Intelligence Platform Built For The Internet, Industrial-Strength Business Intelligence, Office Intelligence,
MicroStrategy Office, MicroStrategy Report Services, MicroStrategy Web MMT, MicroStrategy Web Services, Pixel
Perfect, Pixel-Perfect, MicroStrategy Mobile, MicroStrategy Integrity Manager and MicroStrategy Data Mining
Services are all registered trademarks or trademarks of MicroStrategy Incorporated.
All other company and product names may be trademarks of the respective companies with which they are associated.
Specifications subject to change without notice. MicroStrategy is not responsible for errors or omissions.
MicroStrategy makes no warranties or commitments concerning the availability of future products or versions that
may be planned or under development.
Patent Information
This product is patented. One or more of the following patents may apply to the product sold herein: U.S. Patent Nos.
6,154,766, 6,173,310, 6,260,050, 6,263,051, 6,269,393, 6,279,033, 6,567,796, 6,587,547, 6,606,596, 6,658,093,
6,658,432, 6,662,195, 6,671,715, 6,691,100, 6,694,316, 6,697,808, 6,704,723, 6,741,980, 6,765,997, 6,768,788,
6,772,137, 6,788,768, 6,798,867, 6,801,910, 6,820,073, 6,829,334, 6,836,537, 6,850,603, 6,859,798, 6,873,693,
6,885,734, 6,940,953, 6,964,012, 6,977,992, 6,996,568, 6,996,569, 7,003,512, 7,010,518, 7,016,480, 7,020,251,
7,039,165, 7,082,422, 7,113,993, 7,127,403, 7,174,349, 7,181,417, 7,194,457, 7,197,461, 7,228,303, 7,260,577, 7,266,181,
7,272,212, 7,302,639, 7,324,942, 7,330,847, 7,340,040, 7,356,758, 7,356,840, 7,415,438, 7,428,302, 7,430,562,
7,440,898, 7,486,780, 7,509,671, 7,516,181, 7,559,048, 7,574,376, 7,617,201, 7,725,811, 7,801,967, 7,836,178, 7,861,161,
7,861,253, 7,881,443, 7,925,616, 7,945,584, 7,970,782 and 8,005,870. Other patent applications are pending.
How to Contact Us
MicroStrategy Education Services
1850 Towers Crescent Plaza
Tysons Corner, VA 22182
Phone: 877.232.7168
Fax: 703.848.8602
E-mail: education@microstrategy.com
http://www.microstrategy.com/education
MicroStrategy Incorporated
1850 Towers Crescent Plaza
Tysons Corner, VA 22182
Phone: 703.848.8600
Fax: 703.848.8610
E-mail: info@microstrategy.com
http://www.microstrategy.com
2011 MicroStrategy, Inc. 5
TABLE OF CONTENTS
Preface Course Description.................................................................... 11
Who Should Take this Course ............................................... 13
Course Prerequisites ............................................................. 13
Follow-Up Courses ................................................................ 13
Course Objectives ................................................................. 14
About the Course Materials ......................................................... 15
Content Descriptions ............................................................. 15
Learning Objectives ............................................................... 15
Lessons ................................................................................. 15
Opportunities for Practice ...................................................... 16
Typographical Standards....................................................... 16
Other MicroStrategy Courses ...................................................... 19
Core Courses......................................................................... 19
1. Introduction to High Performance
Lesson Description ................................................................... 21
Lesson Objectives ................................................................. 22
The High Performance Initiative .................................................. 23
Steps Taken in the High Performance Direction.................... 24
Best Practices Lead to Optimal Performance........................ 26
MicroStrategy High Performance Highlights................................ 27
Components of Performance....................................................... 29
Course Structure.................................................................... 30
Lesson Summary......................................................................... 33
2. Caching and Intelligent Cubes
Lesson Description ................................................................... 35
Lesson Objectives ................................................................. 36
Computational Distance............................................................... 38
The Importance of 64-Bit Systems .............................................. 40
Introduction to Caching................................................................ 41
Report Caches............................................................................. 42
Report Caching Overview...................................................... 42
Report Caching Best Practices.............................................. 44
Document Caching ...................................................................... 50
Document Caching Overview ................................................ 50
Document Caching Best Practices ........................................ 52
Object Caches ............................................................................. 53
Object Caching Overview ...................................................... 53
Object Caching Best Practices .............................................. 53
Element Caches .......................................................................... 54
Element Caching Overview.................................................... 54
Element Caching Best Practices............................................ 55
Cache Sizing Recommendations................................................. 57
Introduction to Intelligent Cubes .................................................. 59
The Intelligent Cube Publication Process.................................... 61
High Level Steps.................................................................... 61
Peak Memory Usage ............................................................. 62
Intelligent Cube Data Normalization Techniques................... 63
Intermediate Table Type VLDB Property............................... 67
When to Use Intelligent Cubes? .................................................. 68
Using Cubes for Highly Used Prompted Reports................... 69
Using Cubes for Highly Used Overlapping Reports............... 70
Using Cubes for Frequently Used Ad-hoc Analysis Reports . 72
Cube Sizing and Memory Usage Considerations........................ 73
Cube Loading and Swapping................................................. 73
Cube Size Constraints ........................................................... 73
Cube Size and System Scalability ......................................... 74
Other Cube and Report Design Best Practices ..................... 77
Incremental Cube Refresh........................................................... 78
Incremental Refresh Methods................................................ 78
Report Execution against Cubes ................................................. 85
View Report Execution .......................................................... 85
Dynamic Sourcing Execution................................................. 85
View Reports vs. Dynamic Sourcing Reports ........................ 88
Lesson Summary......................................................................... 89
3. Data Transfer
Lesson Description ................................................................... 91
Lesson Objectives ....................................................................... 92
Introduction to Network Performance .......................................... 93
Network in a Typical Business Implementation ..................... 93
Case Example: Network Impact on Cube Publication Times ............ 94
Key Network Concepts ................................................................ 96
Network Terminology............................................................. 96
Network Recommendations for High Performance ..................... 97
Network Recommendations................................................... 97
Place All Server Components in the Same Segment ............ 97
Consider Bandwidth and Latency Requirements................... 98
Use HTTP Compression........................................................ 99
Setting Up Web Proxy Server .............................................. 102
Distribution Services Performance ............................................ 103
Number of Recipients .......................................................... 103
Report or Document Size..................................................... 103
Delivery Method................................................................... 104
Delivery Format.................................................................... 105
Data Source......................................................................... 106
Concurrency ........................................................................ 108
Alerting................................................................................. 109
Data Personalization Method............................................... 109
Clustering............................................................................. 109
Lesson Summary....................................................................... 110
4. System Architecture and Configuration
Lesson Description ................................................................. 113
Lesson Objectives ............................................................... 114
Server Specifications................................................................. 115
Processor............................................................................. 115
Memory................................................................................ 119
Disk Storage ........................................................................ 121
Operating Systems .............................................................. 123
Virtualization ........................................................................ 124
Intelligence Server Configuration............................................... 126
User Management ............................................................... 126
Resource Management ....................................................... 129
Workload Management........................................................ 132
Clustering............................................................................. 137
Web Server Configuration ......................................................... 143
JVM Settings........................................................................ 143
MicroStrategy Web Pool Sizes ............................................ 144
Using a Separate Web Server for Static Content ................ 146
Logging and Statistics.......................................................... 147
JavaScript ............................................................................ 147
Lesson Summary....................................................................... 148
5. Data Presentation
Lesson Description ................................................................. 151
Lesson Objectives ............................................................... 152
High Performance Reports ........................................................ 153
Report Execution Flow......................................................... 153
Report Configuration Techniques to Optimize Performance 154
High Performance Dashboards ................................................. 159
The Dashboard Execution Flow........................................... 159
Dataset Techniques to Optimize Performance.......................... 162
Dashboard Data Preparation Steps..................................... 162
Reducing the Number of Datasets in a Dashboard ............. 163
Reducing the Amount of Data in a Dataset.......................... 167
Using Intelligent Cubes........................................................ 169
Design Techniques to Optimize Performance ........................... 172
General Performance Topics............................................... 172
DHTML Performance Topics ............................................... 184
Flash Performance Topics................................................... 187
Optimizing Performance for Mobile ........................................... 194
Execution Workflow for Mobile Devices............................... 194
Improving Execution Time for a Mobile Document or Dashboard ...... 195
General Design Best Practices for Documents and Dashboards ....... 200
MicroStrategy Mobile 9.2.1 Performance Optimizations...... 201
Lesson Summary....................................................................... 203
6. Data Warehouse Access
Lesson Description ................................................................. 207
Lesson Objectives ............................................................... 208
Introduction to Data Warehouse Access ................................... 209
Database Query Performance ............................................. 209
SQL Generation Algorithm................................................... 210
Data Architecture Optimizations ................................................ 213
Report and Schema Design Optimizations................................ 219
Eliminating Unnecessary Table Keys .................................. 219
Including Filter Conditions In Fact Definitions...................... 220
Substituting Custom Groups with Consolidations................ 221
Designing Summary Metrics from Base Metrics.................. 222
SQL Generation Optimizations.................................................. 224
Logical Query Layer............................................................. 224
Database Optimization Layer............................................... 227
Query Optimization Layer .................................................... 238
Other Query Performance Optimizations................................... 243
Multi Source......................................................................... 243
ODBC .................................................................................. 246
Lesson Summary....................................................................... 250
7. Performance Testing Methodology
Lesson Description ................................................................. 253
Lesson Objectives ............................................................... 254
Introduction to Performance Testing.......................................... 255
Why is Performance Testing Important?.............................. 255
What is Performance in a BI Environment?......................... 256
Why Is Having a Methodology Important?........................... 258
Performance Testing Methodology............................................ 260
Define System Goals ........................................................... 261
Quantify Performance.......................................................... 263
Profile the Action.................................................................. 264
Optimize the Action.............................................................. 275
Monitor the Environment...................................................... 277
Lesson Summary....................................................................... 278
Workshop
Dashboard Overview ........................................................... 282
The Design Strategy ............................................................ 282
High-level Steps................................................................... 282
Detailed Steps ..................................................................... 283
Index ......................................................................................... 315
PREFACE
Course Description
After companies reach their business goals, the next area to focus on is
increasing productivity. MicroStrategy software is designed and implemented
with performance in mind and delivers top performance at scale.
MicroStrategy out-of-the-box software is optimized for a typical use case
reflecting the most frequently requested hardware and functionality. Some
tuning may be required to achieve optimal performance.
This course includes the best practices of implementing MicroStrategy BI and
provides the information necessary to build your applications with
performance in mind, tuning MicroStrategy for optimal performance.
In this course, you will first be introduced to the MicroStrategy High
Performance initiative, the main factors behind it, and the most important
outcomes of this initiative to date. Then, you will learn about the components
of performance and about recommendations on how to deploy a High
Performance MicroStrategy platform by using features and settings in the five
main areas of a BI system: in-memory caching and cubes; data transfer; system
configuration; reports, dashboards, and mobile applications; and data source
access.
Finally, you will learn the basics of performance testing methodology,
gaining a solid understanding of performance testing so that you can choose
the right type of tests for your specific performance requirements.
The goal of this course is to provide you with recommendations on features,
settings, and parameters throughout the BI system, which will help you deploy
your MicroStrategy projects, achieving High Performance every step of the
way.
Who Should Take this Course
This course is designed for:
Administrators
Project Managers
Developers
Architects
Course Prerequisites
Before starting this course, you should know all topics covered in the following
courses:
MicroStrategy Administration: Configuration and Security
MicroStrategy Administration: Application Management
MicroStrategy Report Services: Dynamic Dashboards
Follow-Up Courses
After taking this course, you might consider taking the following course:
MicroStrategy Mobile for Apple iPad and iPhone
Course Objectives
After completing this course, you will be able to:
Describe the MicroStrategy High Performance initiative. Understand the
main impact of the initiative on your environment. Learn the high-level
topics covered by the chapters of this course. (Page 22)
Describe different levels of caching in a MicroStrategy environment.
Understand best practices for leveraging caching for high performance.
(Page 36)
Understand the different instances of data transfer and their impact on the
BI system performance. Describe key network concepts and network
performance recommendations. Apply best practices techniques when
working with Distribution Services. (Page 92)
List the components of performance, understand the main performance
recommendations for server specification, system configuration, and the
Web environment. (Page 114)
Describe report and dashboard execution flow. Understand the
recommendations for designing high performance reports and dashboards.
(Page 152)
Apply the learned skills to optimize report queries to reduce the database
execution time. (Page 208)
Understand performance testing, so you can choose the right type of tests for
specific performance requirements. Design, implement, execute, and
analyze performance tests in a correct way. (Page 254)
About the Course Materials
This course is organized into lessons and reference appendices. Each lesson
focuses on major concepts and skills that help you to better understand
MicroStrategy products and use them to implement MicroStrategy projects.
The appendices provide you with supplemental information to enhance your
knowledge of MicroStrategy products.
Content Descriptions
Each major section of this course begins with a Description heading. The
Description introduces you to the content contained in that section.
Learning Objectives
Learning objectives enable you to focus on the key knowledge and skills you
should obtain by successfully completing this course. Objectives are provided
for you at the following three levels:
Course: You will achieve these overall objectives by successfully
completing all the lessons in this course. The Course Objectives heading in
this Preface contains the list of course objectives.
Lesson: You will achieve these main objectives by successfully completing
all the topics in the lesson. You can find the primary lesson objectives
directly under the Lesson Objectives heading at the beginning of each
lesson.
Main Topic: You will achieve this secondary objective by successfully
completing the main topic. The topic objective is stated at the beginning of
the topic text. You can find a list of all the topic objectives in each lesson
under the Lesson Objectives heading at the beginning of each lesson.
Lessons
Each lesson sequentially presents concepts and guides you with step-by-step
procedures. Illustrations, screen examples, bulleted text, notes, and definition
tables help you to achieve the learning objectives.
Opportunities for Practice
A Workshop is a reinforcement and assessment activity that follows two or
more lessons. Because a Workshop covers content and applied skills presented
in several lessons, it is a separate section on the level of a lesson.
The following sections within lessons provide you with opportunities to
reinforce important concepts, practice new product and project skills, and
monitor your own progress in achieving the lesson and course objectives:
Review
Case Study
Business Scenario
Exercises
Typographical Standards
The following sections explain the font style changes, icons, and different types
of notes that you see in this course.
Actions
References to screen elements and keys that are the focus of actions are in bold
Arial font style. The following example shows this style:
Click Select Warehouse.
Code
References to code, formulas, or calculations within paragraphs are formatted
in regular Courier New font style. The following example shows this style:
Sum(sales)/number of months
Data Entry
References to literal data you must type in an exercise or procedure are in
bold Arial typeface. References to data you type that could vary from user to
user or from system to system are in bold italic Arial font style. The
following example shows this style:
Type copy c:\filename d:\foldername\filename.
Keyboard Keys
References to a keyboard key or shortcut keys are in uppercase letters in bold
Arial font style. The following example shows this style:
Press CTRL+B.
New Terms
New terms to note are in regular italic font style. These terms are defined when
they are first encountered in the course material. The following example shows
this style:
The aggregation level is the level of calculation for the metric.
Notes and Warnings
A note icon indicates helpful information.
A warning icon calls your attention to very important information that
you should read before continuing the course.
Heading Icons
The following heading icons are used to indicate specific practice and review
sections:
Precedes a Review section
Precedes a Case Study
Precedes a Business Scenario
Precedes Exercises
Other MicroStrategy Courses
Core Courses
Implementing MicroStrategy: Development and Deployment
MicroStrategy Architect: Project Design Essentials
MicroStrategy Desktop: Advanced Reporting
MicroStrategy Desktop: Reporting Essentials
MicroStrategy Mobile for Apple iPad and iPhone
MicroStrategy Report Services: Document Essentials
MicroStrategy Report Services: Dynamic Dashboards
MicroStrategy Web for Professionals
MicroStrategy Web for Reporters and Analysts
All courses are subject to change. Please visit the MicroStrategy website for the latest education
offerings.
1
INTRODUCTION TO HIGH PERFORMANCE
Lesson Description
This lesson provides an introduction to the MicroStrategy High Performance
initiative. In this lesson, you will learn about the factors that drove
MicroStrategy to pursue the High Performance initiative and about the
benefits that the results of this initiative can bring to your environment. In
addition, you will learn about the MicroStrategy components of performance
and how this course is structured.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the MicroStrategy High Performance initiative. Understand the
main impact of the initiative on your environment. Learn the high-level
topics covered by the chapters of this course.
After completing the topics in this lesson, you will be able to:
Understand the factors behind MicroStrategy's push for developing the
High Performance initiative. (Page 23)
List the most important achievements of the MicroStrategy High
Performance initiative. (Page 27)
Understand how this course is structured based on the MicroStrategy
components of performance. (Page 29)
The High Performance Initiative
After completing this topic, you will be able to:
Understand the factors behind MicroStrategy's push for developing the High
Performance initiative.
Performance is a combination of speed and scale. Speed is important because it
affects the user experience. Speed is typically measured as user wait time or
response time. Scale is important because it determines how many people can
submit requests or reports to a business intelligence (BI) system, how many
reports can be supplied by the system, and how much data can be accessed.
Most BI technologies can do one or the other, but not both effectively.
MicroStrategy has always been focused on high performance, with the
combination of high speed at high scale.
Recent third-party reports have documented increased dissatisfaction with
BI system performance. While MicroStrategy is recognized as the leader in
high performance, the company continues to re-evaluate its performance to
ensure customers are achieving the highest performance possible with the
MicroStrategy platform.
MicroStrategy Positioning on Performance Matters
Starting in 2007, MicroStrategy surveyed customer applications. It quickly
became clear that there was a continuing trend toward, and an inherent
demand for, improved BI performance. This insight led to the creation of a
dedicated high performance initiative. Based on the results of this survey,
the goals of the initiative were to:
Deliver up to 10x faster BI applications: Today's BI applications must
efficiently access terabytes of data. Since most competing BI tools do not
include performance acceleration engines, average BI application query
response times often range from 10 seconds to one minute or more.
MicroStrategy's high performance initiative will set a new performance
standard, aiming to deliver up to 10x faster query response time at any data
scale.
Provide faster than 3-second response time for most predictable
queries and analyses: MicroStrategy research has found that most
business queries are predictable. Business people often run similar reports
on a daily, weekly, or monthly basis to understand operational
performance. Using its memory technology to cache computations and
place the results into server memory, MicroStrategy can dramatically
accelerate repetitive operational reports as well as most subsequent
analyses.
Provide faster than 5-second response time for the majority of ad hoc
queries: By optimizing and accelerating all aspects of its BI platform, from
SQL generation to SQL execution to data rendering, MicroStrategy seeks to
enable 50% of all ad hoc queries to return in less than 5 seconds.
Steps Taken in the High Performance Direction
MicroStrategy's high performance initiative includes the formation of its High
Performance and Scalability Lab, the creation of a dedicated Performance
Engineering team, and specific R&D efforts solely focused on providing
MicroStrategy customers with the highest levels of performance for BI
applications of all sizes.
MicroStrategy High Performance and Scalability Lab
MicroStrategy built a state-of-the-art research laboratory equipped with the
latest database hardware, software, and performance testing tools.
MicroStrategy's multi-million dollar research laboratory has more than 100
servers running tests 24x7 on all supported platforms, including Linux,
Microsoft Win64, Oracle Solaris, IBM AIX, and HP-UX.
MicroStrategy's standard benchmark test lines have two Intel
Nehalem-based servers with eight cores and 144 GB RAM each for
Intelligence Server, four Web servers with eight cores each, eight cores for
the MicroStrategy metadata, and eight cores to simulate user load. The entire
test station contains 112 cores in total.
In addition, MicroStrategy currently has at least two enterprise-class database
servers to support its testing efforts, including a four-node Oracle RAC
cluster with 96 cores in total, 256 GB RAM on each node, and 18 TB of usable
space; as well as a Teradata 13 5555H server, with eight cores, 48 GB RAM, and
18 TB of usable space.
MicroStrategy Performance Engineering Team
MicroStrategy has a dedicated team of performance engineers who work
closely with selected customers to understand and document the performance
of current BI system configurations. This team runs hundreds of performance
tests each week to identify and eliminate system bottlenecks and to build
accurate system performance profiles. In addition, the team authors technical
notes, articles, and best practice documents to help customers maximize the
performance of their BI applications.
Professional Services
MicroStrategy consulting services can help customers build high performance
BI applications as well as audit and fine-tune existing applications through:
Capacity Planning: MicroStrategy's capacity planning services
involve a holistic approach to matching existing infrastructure to current
and growing BI requirements, including a quantitative assessment of
current system capacity and potential bottlenecks.
High Performance Tuning: MicroStrategy's high performance tuning
services involve a methodical approach to tuning MicroStrategy software
for maximum performance, including all components of the BI ecosystem
such as relational database management systems.
Best Practices Lead to Optimal Performance
For most business intelligence customers, the initial requirement is to satisfy
their functional requirements and solve the problem at hand. MicroStrategy's
BI platform is recognized as one of the most complete and technically
advanced in the industry.
The second requirement is stellar performance. After companies reach their
business goals, they want to reach them faster and do more in the same
amount of time for increased productivity. MicroStrategy's software is
designed and implemented with performance in mind and delivers top
performance at scale. MicroStrategy customers implement a wide range of BI
functionality in their applications, from the mainstream to the specialized,
from the conservative to the innovative use of BI. It is important to recognize
that depending on the application, different optimizations are needed for
optimal performance.
MicroStrategy software often allows multiple ways to implement a certain
reporting requirement, providing customers with the utmost flexibility.
Designing BI applications with performance in mind improves application
performance. MicroStrategy out-of-the-box software is optimized for a typical
use case, reflecting the most frequently requested hardware and functionality.
Some additional tuning may be required to achieve optimal performance.
MicroStrategy High Performance Highlights
After completing this topic, you will be able to:
List the most important achievements of the MicroStrategy High Performance
initiative.
By optimizing and accelerating all aspects of its BI platform, from SQL
generation to data rendering, MicroStrategy seeks to enable 50% of all ad hoc
queries to return in less than 5 seconds. Specific areas of R&D include:
Faster Database Queries: MicroStrategy's ROLAP technology leverages
database engines for complex calculations and data joins, employing the
latest techniques to reduce processing time and optimize overall query
performance, including database-specific optimizations, workload
balancing across multiple databases using improved aggregate awareness,
and a reduction in the number of SQL passes for sophisticated analyses to
improve database query time by 75%.
Larger Data Caches: MicroStrategy is further enhancing its data loading
algorithms. The latest enhancements in MicroStrategy show performance
improvements of over 30% in cube data load time.
Dynamic Sourcing Optimizations: These optimizations improve system
performance by increasing the number of reports that can be directed to
Intelligent Cubes, making dynamic sourcing of reports from Intelligent
Cubes up to 80% faster.
Faster Metadata Responses: Optimizations have been made to enable
faster project start-up and administration. Projects are loaded in parallel
rather than serially, reducing the Intelligence Server startup time.
Additional enhancements that have resulted in faster metadata operations
are as follows:
Configuration Wizard has been enhanced to support the process of
upgrading MicroStrategy projects and environments to the most recent
version of MicroStrategy.
Architect operations, such as dragging tables from the warehouse into
the Graphical Architect interface and schema manipulations, are up to
20% faster.
Object migration processes have been optimized to support migration of
objects across unrelated projects.
A new algorithm optimizes the processing of security filters.
The project upgrade process is significantly faster.
A reduced threshold XML size results in less data to transfer.
New index optimizations on metadata tables reduce metadata access
time.
Health Center Integration with the High Performance Initiative: Health
Center proactively collects data from customer systems. Based on
this data, Health Center identifies performance bottlenecks and
recommends steps to resolve them. This integration brings you
closer to MicroStrategy Technical Support, while providing you a better
understanding of your environment.
Note: For more information on Health Center, refer to the MicroStrategy
Administration: Configuration and Security course.
Faster Web Interactivity: The objective is to continue to provide the
richest user interactivity with rapid performance on Web browsers and
mobile devices by minimizing data transfers, streamlining processing,
optimizing page construction, rendering pages faster, and loading
JavaScript on demand.
Faster Dashboards: Dashboards can contain an entire day's worth of
information within a single dashboard for even the most novice BI users.
Further enhancements include on-demand data transfer technology,
enhanced data compression algorithms, and optimized rendering
algorithms.
Providing high speed and greater capacity for customers' deployments has
been, and continues to be, at the core of MicroStrategy technology.
Components of Performance
After completing this topic, you will be able to:
Understand how this course is structured based on the MicroStrategy
components of performance.
The typical BI query must go through the following five key layers or
components:
Caching options
Data transfer
System architecture and configuration
Client rendering or data presentation
Data warehouse access
Note: The components above are not listed in any specific order of access
during the execution of a query. The following image illustrates the five
components:
The Components of High Performance
Course Structure
The remainder of this course is divided into the following chapters:
Caching and Intelligent Cubes
Data transfer
System architecture and configuration
Data presentation
Data warehouse access
Performance testing methodology
Workshop
Caching and Intelligent Cubes
MicroStrategy's memory technology is engineered to meet the increased
demand for higher BI performance, which is driven by the rapid expansion of
both data volumes and the number of BI users in organizations across
industries. MicroStrategy accelerates performance by pre-calculating
computations and placing the results into its memory acceleration engine to
dramatically improve real-time query performance. In this chapter, you will
learn about the main performance recommendations when using caches and
Intelligent Cubes.
Data Transfer
Data transfers over one or more networks are a very important component of a
BI implementation. Slow or poorly tuned network performance in any of
those transfers translates into poor performance from a report or
dashboard execution perspective. In this chapter, you will learn about the main
recommendations to improve network performance and Distribution Services
executions.
System Architecture and Configuration
Successful BI applications accelerate user adoption and enhance productivity,
resulting in demand for more users, data, and reports. MicroStrategy provides
the ability to adapt quickly to constant changes and evolve along with business
requirements. MicroStrategy Intelligence Server has been proven in real-world
scenarios to deliver the highest performance at scale with the fewest servers
and minimum IT overhead. This lesson introduces you to important concepts
regarding tuning your MicroStrategy platform, including recommendations for
the Intelligence Server, Web server, client machines, and clustering
configuration.
Data Presentation
Dashboards provide graphical, executive views into KPIs, enabling quick
business insights. MicroStrategy enables higher performing dashboards,
averaging 30-45% faster execution and interactivity. Using new compression
methods, MicroStrategy dashboards have a smaller footprint than ever
before (up to 55% smaller), resulting in faster delivery using less network
bandwidth. Dashboards deliver ever more analysis and data for end users. In
this chapter, you will learn about the report and dashboard execution flows,
and about the key recommendations for data set and design optimization for
presenting data in Web and mobile devices. This series of design techniques
will help you develop reports and dashboards while keeping performance in
mind.
Data Warehouse Access
High performance BI starts with optimizing SQL queries to retrieve results
from the database as quickly as possible. BI performance depends largely
on the time that queries take to execute in the database. An average
reporting request usually takes 40 seconds to complete, of which 34
seconds, or 85% of the query time, is spent executing in the database.
Therefore, it is critical to optimize report queries to reduce database execution
time. In this chapter, you will learn about the main recommendations to
retrieve source data for reports and documents in an efficient manner.
Performance Testing Methodology
Questions about the performance of the system need to be answered with
various types of performance tests. Unlike straightforward feature testing,
which verifies the functional correctness of the product, performance testing
requires special design and is more complex to execute and analyze. The goal
of this lesson is to provide you with a good understanding of performance
testing, so you can choose the right type of tests for specific performance
requirements.
Workshop
The three-hour workshop will enable you to experience the performance gains
of redesigning a MicroStrategy dashboard by following the methodology
presented in this course.
This course manual includes best practices for implementing MicroStrategy
BI and provides the information necessary to build your applications with
performance in mind and to tune MicroStrategy for optimal performance.
Lesson Summary
In this lesson, you learned:
MicroStrategy continues to remain focused on delivering high
performance, with the combination of high speed at high scale.
Starting in 2007, MicroStrategy surveyed customers and found an inherent
demand for improved BI performance. This insight led to the creation of a
dedicated High Performance initiative.
The goals of the High Performance initiative are to deliver up to 10x faster
BI applications, provide faster than three-second response time for most
predictable queries and analyses, and provide faster than five-second
response time for the majority of ad hoc queries.
MicroStrategy's High Performance initiative includes the formation of its
High Performance and Scalability Lab, the creation of a dedicated
Performance Engineering team, and specific R&D efforts solely focused on
providing MicroStrategy customers with the highest levels of performance
for BI applications of all sizes.
2
CACHING AND INTELLIGENT CUBES
Lesson Description
In this lesson, you will learn about the concept of computational distance and
the advantages of 64-bit applications. Next, you will learn how to reduce
the computational distance and increase performance by applying best practice
techniques to the different levels of caching in a MicroStrategy
environment: report, document, element, and object caching.
Additionally, you will learn about Intelligent Cubes, understand the process of
publishing cubes in memory, and identify the usage scenarios that are the
best candidates for taking advantage of Intelligent Cubes. You will also learn
about the memory required for Intelligent Cubes in Intelligence Server, and the
tuning techniques that must be applied when designing cubes and reports.
Finally, you will be presented with the performance considerations when
designing reports that utilize Intelligent Cubes.
Lesson Objectives
After completing this lesson, you will be able to:
Describe different levels of caching in a MicroStrategy environment.
Understand best practices for leveraging caching for high performance.
After completing the topics in this lesson, you will be able to:
Understand the concept of computational distance and its impact on the
system response time. (Page 38)
Understand the importance of 64-bit systems in reducing the computational
distance. (Page 40)
Define caching, list and define cache types, and set cache properties. (Page
41)
Understand the best practices to leverage report caching. (Page 42)
Understand the best practices to leverage document caching. (Page 50)
Understand object caching and the best practices to leverage object caching.
(Page 53)
Understand element caching and the best practices to leverage element
caching. (Page 54)
Understand the rules of thumb to size caches in your system. (Page 57)
Understand the Intelligent Cubes functionality and how it complements the
MicroStrategy caching strategy. (Page 59)
Understand the process of publishing an Intelligent Cube. (Page 61)
Identify the usage scenarios that represent best candidates to take
advantage of Intelligent Cubes. (Page 68)
Understand the memory requirements for Intelligent Cubes and the tuning
techniques that must be applied when designing cubes and reports. (Page
73)
Refresh cube data using the different options available in the incremental
refresh feature. (Page 78)
Understand the performance considerations when designing reports to
utilize Intelligent Cubes. (Page 85)
Computational Distance
After completing this topic, you will be able to:
Understand the concept of computational distance and its impact on the
system response time.
Any BI system consists of a series of processes and tools that take raw data at
the very bottom (at the transactional level in a database) and use varying
technologies to transform that data into the finished answer that the user needs.
At every step along the way, some kind of processing is done by one of the
following components: the database, network, BI application, or browser.
The concept of computational distance refers to the length, in terms of
systems, transformations, and other processes, that the data must travel
from its lowest level all the way to being rendered on a browser, as shown in
the image below:
Computational Distance
The longer the computational distance for a given report, the longer it will
take to execute and render. The preceding image shows a hypothetical example
of a report that runs in 40 seconds. Each processing step on that report, such as
aggregation, formatting, and rendering, adds to the report's computational
distance, increasing the report's overall execution time.
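The idea can be expressed as simple arithmetic: total response time is the sum of every step along the path. The step names and timings below are illustrative, not from the course material:

```python
# Illustrative model of computational distance: the total response time of a
# report is the sum of every processing step between raw data and the
# rendered result. Step names and timings are hypothetical.
steps = {
    "database_query": 25.0,    # seconds spent executing in the data warehouse
    "network_transfer": 3.0,
    "analytical_engine": 8.0,  # aggregation and formatting on the server
    "client_rendering": 4.0,
}

def response_time(step_times):
    """Total user wait time is the sum of all steps on the path."""
    return sum(step_times.values())

print(response_time(steps))  # 40.0 -- the full computational distance

# Serving the report from a cache removes the database and engine steps,
# shortening the computational distance and the wait time.
cached = {k: v for k, v in steps.items()
          if k in ("network_transfer", "client_rendering")}
print(response_time(cached))  # 7.0
```

Each technique in the following sections (caching, Intelligent Cubes, aggregation) works by removing or shrinking one or more of these terms.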
Computational distance offers a useful framework from a performance
optimization perspective because it tells us that to improve the performance of
a report or dashboard, you must focus on reducing its overall computational
distance. The following image shows different techniques such as caching,
cubes, and aggregation that can be used to optimize performance for the 40
second hypothetical report:
Reducing the Computational Distance of a Report
This lesson focuses on the two key computational distance reduction
techniques offered in the MicroStrategy platform: caching and Intelligent
Cubes.
The Importance of 64-Bit Systems
After completing this topic, you will be able to:
Understand the importance of 64-bit systems in reducing the computational
distance.
Server memory (RAM) is a critical resource for effective performance
optimization strategies involving caching and Intelligent Cubes. A system with
large memory resources enables architects to design more caches and
Intelligent Cubes that can source an increasingly larger set of reports and
dashboards. In turn, as more reports and dashboards are sourced from either
caches or Intelligent Cubes, the average user wait time is reduced. The
following image illustrates how the percentage of reports sourced from
in-memory BI rises, and average response time drops, as the amount of
server RAM increases:
Average Wait Time Versus In Memory BI Usage
By dramatically increasing the amount of addressable RAM in a server, the
64-bit technology is the ultimate high performance enabler. MicroStrategy
project implementations will have significantly more opportunities for
performance optimization using both caching and Intelligent Cubes when they
are deployed using 64-bit server technologies.
Introduction to Caching
After completing this topic, you will be able to:
Define caching, list and define cache types, and set cache properties.
Caching is the retention of a recently used object in memory for the purpose of
improving query response time for future requests. Caching enables users to
retrieve results from in-memory stored files rather than executing queries
against a database. The MicroStrategy architecture offers several caching levels
within its different functional layers. This section covers the key caching
levels (report, document, element, and object caching) from the perspective of
performance optimization.
The following image displays the different levels of caching in the
MicroStrategy platform:
Cache Levels in the MicroStrategy Environment
Report Caches
After completing this topic, you will be able to:
Understand the best practices to leverage report caching.
Report Caching Overview
MicroStrategy report caches store report results in Intelligence Server for
fast retrieval. Because they reduce computational distance to a bare minimum,
reports that can execute against caches will have better response times than
reports that execute against cubes, aggregate tables, or fact tables in the data
warehouse. Reports that are run very frequently and have fixed, predictable
characteristics are perfect candidates for MicroStrategy's caching technology.
User experience can be further enhanced by scheduling the reports to run
during batch window times. By pre-caching reports in batch, even the first
report execution wait time can be minimized.
Report Cache Flow
The following image illustrates how report caching works:
How Report Caching Works
The basic workflow is as follows:
1 A user runs a report for the first time.
2 The report runs against the data warehouse database.
3 Intelligence Server caches the result set and returns it to the user.
4 A second user subsequently runs the same report.
5 Intelligence Server searches its report caches.
6 Intelligence Server uses the report cache matching algorithm to determine
if it can reuse an existing cache.
7 If the report cache matching algorithm shows that the cache is not valid,
Intelligence Server queries the data warehouse. However, if the cache is
determined to be valid, Intelligence Server retrieves the result set from the
cache.
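The lookup-then-fall-back logic of steps 4 through 7 can be sketched as follows. This is a deliberate simplification: the real cache matching algorithm considers many more attributes (security, locale, validity), and the names here are hypothetical:

```python
# Minimal sketch of the report cache lookup flow described above.
# The cache key here uses only report ID and prompt answers; the actual
# matching algorithm in Intelligence Server considers more attributes.
report_caches = {}

def run_report(report_id, prompt_answers, query_warehouse):
    key = (report_id, tuple(sorted(prompt_answers.items())))
    if key in report_caches:                    # cache hit: skip the warehouse
        return report_caches[key]
    result = query_warehouse(report_id, prompt_answers)  # cache miss
    report_caches[key] = result                 # store for subsequent users
    return result

# Simulate two users running the same report with the same prompt answers:
# only the first execution reaches the warehouse.
warehouse_calls = []
def fake_warehouse(report_id, prompts):
    warehouse_calls.append(report_id)
    return [("Item A", 100), ("Item B", 80)]

run_report("top_sellers", {"region": "East"}, fake_warehouse)
run_report("top_sellers", {"region": "East"}, fake_warehouse)
print(len(warehouse_calls))  # 1 -- the warehouse was queried only once
```

Note that a different prompt answer produces a different key, so it misses the cache and triggers a new warehouse query, which is exactly the behavior discussed later for highly prompted reports.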
The greater the number of long-running reports in the project, the greater the
impact on performance. However, with report caching, you can reduce the
number of report requests that have to be processed against the data
warehouse and thereby improve processing time. The following image
illustrates how you can achieve better performance through report caching:
Report Caching Example
In this example, 20 users need to view the same data from the same report
each day. This report ranks product sales to display the best-selling and
worst-selling items, and it runs against a data warehouse table that contains
two million rows. Given the qualifications it is performing and the volume of
data in the table, this report takes 10 minutes to run. If each user queries the
data warehouse for the result set, collectively, their requests consume over
three hours of processing time. However, if you configure caching for this
report, only the first user who runs the report queries the data warehouse. The
other 19 users can access the cache created by the first user. The processing
time is reduced to 10 minutes plus the time for cache retrieval, which in most
cases takes only a few seconds.
Being able to cache report results ensures that redundant queries are not sent
to the data warehouse and improves performance by reducing processing
time.
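The 20-user example above reduces to simple arithmetic. The cache retrieval time below is an assumed value ("a few seconds"), not a figure from the course:

```python
# Arithmetic behind the 20-user report caching example.
users = 20
run_minutes = 10                  # one warehouse execution of the report
cache_retrieval_minutes = 0.05    # assumed: a few seconds per cached retrieval

# Without caching, every user triggers a full warehouse execution.
without_cache = users * run_minutes

# With caching, only the first user hits the warehouse; the other 19 users
# read the cache.
with_cache = run_minutes + (users - 1) * cache_retrieval_minutes

print(without_cache)          # 200 minutes, i.e. over three hours
print(round(with_cache, 2))   # 10.95 minutes
```

The savings scale with both the report run time and the number of users sharing the cache, which is why frequency of shared use is the prioritization criterion recommended below.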
Report Caching Best Practices
In an ideal scenario, all reports that a user in a system could possibly need
within any given day should be pre-cached. This type of implementation would
practically guarantee very fast report response times for all users in the system.
However, more practically, there are three key factors that make pre-caching
every single report an impossibility:
The degree of personalization and security for the reports
The amount of RAM available on the Intelligence Server
The available batch window time
In general, MicroStrategy has found that because of the above three factors, in
a typical BI implementation about 10% of reports will benefit from caching.
The rest of this subsection explains in detail some important considerations
and the best practices to leverage report caching for performance optimization.
Enabling Caching for Frequently Used Reports Only
From a performance perspective, given that not every report can be cached, it
is important to prioritize and choose which reports are good candidates to be
cached. The frequency of use, especially for shared reports, is a good
prioritization criterion. Report cache creation and usage consume server
resources such as RAM and disk space. As a result, MicroStrategy recommends
using true cache reuse as a criterion for choosing which reports to cache.
Typical MicroStrategy implementations involve shared reports that are
managed through a development, test, and production cycle; and personal
reports that are created by users directly in the production environment. Given
the typical configuration described above, the following report caching
recommendations should be considered:
1 Caching all reports at the project level is only practical for small projects
involving limited personal object creation.
2 Not all shared reports and documents are good caching candidates if the
diversity of content to be viewed by users is high. For example, a prompted
report that is heavily personalized by each user will not be a good candidate
for caching, but a prompted report in which 90% of job executions use the
same prompt answers could be a candidate for caching.
3 Caching should be disabled at the report object template level for report
wizard, report builder, and blank report templates. These templates are
used to create and save personal reports.
4 The Maximum number of caches setting in the Project Configuration Editor
enables administrators to control cache creation per project. It is
recommended to use this setting to limit the number of caches allowed.
You can use MicroStrategy Enterprise Manager to analyze system usage data
and identify the reports that fall within the frequently used shared reports
category and will therefore be good caching candidates.
Disabling Caches for Highly Prompted Reports
Caching for prompted reports is tied to specific prompt answers. This means
that for a prompted report cache to be reused, the exact same prompt answer
as the one with which the cache was created must be provided by the user. Any
other answer will cause Intelligence Server to reject the cache and issue a SQL
statement against the data warehouse. For a highly prompted report to be
effectively cached, every possible combination of prompt answers would need
to be pre-cached, making this a highly impractical, if not impossible,
proposition given RAM and batch window limitations.
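The combinatorial explosion is easy to quantify: the number of caches needed to cover every prompt answer is the product of the cardinalities of the prompts. The prompt counts below are made-up illustrations:

```python
from math import prod

# A hypothetical report with four prompts. Pre-caching every combination of
# answers requires one cache per combination; cardinalities are illustrative.
prompt_choices = {
    "region": 8,
    "month": 24,
    "category": 50,
    "metric_set": 5,
}

combinations = prod(prompt_choices.values())
print(combinations)  # 48000 caches for a single report
```

Even with modest prompts, one report would demand tens of thousands of pre-built caches, which is why highly prompted reports are poor caching candidates.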
Avoiding Per-User Cache Generation
When caches are created for each user, by definition they cannot be shared
across the project user community. If those users tend to hit similar reports or
even identical reports but with slightly different prompt answers, caching on a
per-user basis means that most of the caching resources would be spent
creating and maintaining quasi-redundant sets of memory structures with very
little potential for reuse. Therefore, you should avoid per-user cache
generation. Where needed, this type of use case can be typically satisfied
through the use of security filters.
Enabling XML Caching for Reports
Report caches are stored as binary structures both on disk and in memory.
Because the Web server requires an XML representation of the report to
generate the HTML code that ultimately renders it on the client, a binary-to-XML
conversion step is required as part of processing a cached report.
When XML caching is enabled, this extra conversion step is skipped entirely,
making the rendering of a cached report even faster. However, the XML
representation of the cached report takes up space both on disk and in
memory. As a result, turning on XML caching increases RAM usage on
Intelligence Server.
To enable XML caching for reports:
1 In Desktop, launch the Project Configuration Editor.
2 In the Project Configuration Editor, in the Categories list, expand Caching,
expand Result Caches, and select Creation.
3 Under Project Default Behavior, select the Enable XML caching for
reports check box.
4 Click OK to save your settings.
XML Caching
Note: You can view the sizes of both the binary and XML caches for a given
report in the Cache Monitor.
Allocating Enough Memory for Report Caches
Report caches are first created in memory within Intelligence Server. They are
later backed up to disk and reloaded into memory when accessed.
If Intelligence Server memory is insufficient to hold all new requested report
caches, the least recently used caches are unloaded automatically from
memory to make space for the new caches.
The process of loading a cache from disk can add significant delays to the user
wait time. Large caches take longer to load and may displace other caches,
which must be moved from memory back to disk to make space. This process
degrades performance both for the user who requested the cached report and
for users who would otherwise have hit the now-displaced caches.
Cache Load/Unload From Disk
Based on the above considerations, from a performance perspective it is
important to run MicroStrategy in a 64-bit environment whenever possible
given the significantly larger amounts of addressable memory it offers. If this is
not possible, you must at least ensure that all available Intelligence Server
memory is used optimally.
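The unload-and-reload behavior described above can be sketched as a small least-recently-used pool. This is an assumed model of the behavior, not actual server code; the class and method names are hypothetical.

```python
from collections import OrderedDict

class ResultCachePool:
    """Sketch (assumed behavior): report caches live in a fixed RAM budget;
    least recently used entries are unloaded to a 'disk' store to make room,
    and reloaded into memory when accessed again."""

    def __init__(self, max_ram_entries):
        self.ram = OrderedDict()   # cache_id -> result, ordered by recency
        self.disk = {}             # unloaded caches
        self.max_ram_entries = max_ram_entries

    def put(self, cache_id, result):
        self.ram[cache_id] = result
        self.ram.move_to_end(cache_id)
        while len(self.ram) > self.max_ram_entries:
            old_id, old_result = self.ram.popitem(last=False)  # LRU entry out
            self.disk[old_id] = old_result                     # back to disk

    def get(self, cache_id):
        if cache_id in self.ram:
            self.ram.move_to_end(cache_id)      # refresh recency on a hit
            return self.ram[cache_id]
        if cache_id in self.disk:               # slow path: reload from disk,
            self.put(cache_id, self.disk.pop(cache_id))  # possibly displacing
            return self.ram[cache_id]                    # another cache
        return None                             # cache miss
```

Note how reloading one cache from disk can itself displace another, which is the cascading cost the text warns about.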
Memory capacity planning for report caches can be extremely challenging, but
at a high level, a good estimate of the amount of memory needed to effectively
cache a given set of reports can be obtained as follows:
1 Determine which reports should be cached on a given project. For example,
you can use Enterprise Manager to determine the most frequently used
shared reports as discussed earlier.
2 Generate caches for those reports and record cache sizes (both binary and
XML) using Cache Monitor. Note that in the case of prompted reports, you
will need to generate caches for each prompt answer you wish to cache.
3 If you plan to generate some user-specific report caches, you must add this
variable into your total cache memory utilization estimates.
After you have estimated the total amount of RAM that will be used for report
caching, you can define it in Intelligence Server.
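The three estimation steps above can be sketched as a simple calculation. All numbers below are illustrative assumptions; in practice you would substitute the binary and XML sizes recorded from the Cache Monitor.

```python
# Sketch of the estimation procedure described above (steps 1-3).
# The figures are placeholders, not MicroStrategy guidance.

def estimate_cache_ram_kb(reports):
    """reports: list of dicts with binary/XML cache sizes in KB (as read
    from the Cache Monitor) and the number of prompt-answer variants to
    pre-cache for each report."""
    total = 0
    for r in reports:
        per_variant = r["binary_kb"] + r.get("xml_kb", 0)
        total += per_variant * r.get("prompt_variants", 1)
    return total

reports = [
    {"binary_kb": 50, "xml_kb": 150, "prompt_variants": 1},   # shared report
    {"binary_kb": 80, "xml_kb": 240, "prompt_variants": 5},   # 5 answers cached
]
# estimate_cache_ram_kb(reports) -> (50+150) + (80+240)*5 = 1800 KB
```

Any planned user-specific caches (step 3) would be added as further entries in the same list.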
To define the memory available for report caching:
1 In Desktop, launch the Project Configuration Editor.
2 In the Project Configuration Editor, in the Categories list, expand Caching,
expand Result Caches, and select Storage.
3 Under Datasets, in the Maximum RAM usage (MBytes) text box, type the
amount of Intelligence Server RAM you want to allocate for report cache
storage.
Note: The maximum value you can define is 64 GB.
4 Click OK to save your settings.
Maximum RAM Usage Setting
Practicing Cache Maintenance
Implementing the right cache strategy is very important for a successful BI
deployment. The list below contains recommendations for cache maintenance:
Deleting Obsolete Caches: Establish a cache deletion/regeneration
strategy to ensure users can access the most recent data available. Use
Schedule Administrative Tasks in Desktop to schedule cache deletion.
Allocating Enough Disk Space for Caches: Ensure enough disk space is
allocated to store all cache files; otherwise, new caches will not be created.
In addition, in a clustered environment, ensure that all nodes have the
correct read and write permissions for the location where caches reside.
Document Caching
After completing this topic, you will be able to:
Understand the best practices to leverage document caching.
Document Caching Overview
Report Services documents can be cached to improve system performance and
maintain a low memory footprint of the Intelligence Server. Document caches
are created at run time or on a schedule and behave in a manner similar to
report caches. At run time, document caches are created only in MicroStrategy
Web. When you execute a prompted document in MicroStrategy Web, an XML
document cache is generated. When you reprompt the document, a new XML
cache is generated with the data reflecting new prompt answers.
To enable document caching, you must first enable report caching. For a
specific document cache to be generated, all underlying dataset caches must
exist or be generated at the document run time. If caching for one of the
datasets is disabled or fails for any reason, the document cache for that
document will not be generated.
Unlike report caches, which are created only as binary files, document caches
can exist in the following formats: PDF, Excel, XML/Flash, and HTML. The
following image shows the document caching options in the Project
Configuration Editor:
Document Caching - Creation Subcategory
For dashboards, the creation of new caches is based on the different views you
choose in the interface: Editable, Interactive, Flash, and so forth. This logic
applies to any type of document, whether single-layout or multi-layout,
single-panel or multi-panel. These differences do not change the cache
creation behavior.
Note: In multi-layout dashboards, a cache for Flash view is created for each
layout. However, switching between layouts in Flash view does not increase
the hit count of the caches as it would in Editable or Interactive view. This is
due to an optimization for Flash content, which is kept in Intelligence Server
memory for faster processing and rendering.
In addition to the document cache storage location, you can specify the
maximum RAM usage and the maximum number of caches, as shown in the
following image:
Document Caching - Storage Subcategory
Document Caching Best Practices
In terms of performance, the most important considerations for document
caching are listed below:
Unless a project contains only a small number of relatively simple documents,
enabling project-level document caching is not recommended.
Allocate sufficient memory for document caches to avoid file swapping,
which negatively affects performance. To determine how much memory
might be needed, you must consider the cache format that will be created:
XML caches consume more memory than report binary caches.
Depending on the operating system, the memory consumption can vary
from two to four times the size of the file. For example, in an
Intelligence Server running on Oracle Solaris, a 10 KB file cache will use
40 KB of RAM.
Binary, PDF, and Excel caches usually use the same amount of memory
as their file sizes.
Avoid generating document caches on a per-user basis.
Schedule the documents to run during the batch window. By pre-caching
these documents, the wait time for the first user accessing these documents
is completely eliminated.
Avoid enabling the Automatic option to send documents to History List.
When you send the document to a History List, a cached document instance
is stored directly in the user's History List. This cache will not be reused
during regular execution of the document. Therefore, if you send the
documents automatically to History List, the execution of these documents
will not create caches that can be reused across users.
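The memory-estimation rule above can be sketched as follows. The per-format multipliers are assumptions drawn from the text (XML: two to four times the file size depending on the operating system; binary, PDF, and Excel: roughly the file size).

```python
# Sketch of a document cache memory estimate. Multipliers are assumptions
# taken from the surrounding text, not official figures.
MULTIPLIERS = {"pdf": 1.0, "excel": 1.0, "binary": 1.0}

def document_cache_ram_kb(caches, xml_factor=4.0):
    """caches: list of (format, file_size_kb) pairs. Uses the worst-case
    XML factor by default (the Oracle Solaris example in the text)."""
    total = 0.0
    for fmt, size_kb in caches:
        factor = xml_factor if fmt == "xml" else MULTIPLIERS.get(fmt, 1.0)
        total += size_kb * factor
    return total

# A 10 KB XML cache on Solaris -> 40 KB of RAM; a 10 KB PDF -> 10 KB.
```

Lowering `xml_factor` to 2.0 models operating systems at the favorable end of the two-to-four-times range.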
Object Caches
After completing this topic, you will be able to:
Understand object caching and the best practices to leverage object caching.
Object Caching Overview
An object cache is a recently used object definition stored in the memory of
Desktop and Intelligence Server. You can create object caches for both
application and schema objects. The example below illustrates how object
caching works:
1 A user opens the Report Editor.
2 The collection of objects displayed in the Report Editor makes up the
report's definition. If no object cache for the report exists in the Desktop or
the Intelligence Server memory, the object request is sent to the metadata.
3 The report object definition is retrieved from the metadata and displayed to
the user in the Report Editor.
4 An object cache is created in the memory of both the Intelligence Server and
the Desktop machine. If the same user requests the same object, the object
cache on the Desktop machine satisfies that request. If a different user
requests the same object, the object cache in Intelligence Server satisfies
that request.
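The lookup order in steps 1 through 4 can be sketched as a two-tier cache. The class and names are hypothetical; only the lookup order mirrors the text: Desktop cache first, then the shared Intelligence Server cache, then the metadata.

```python
# Hypothetical two-tier object cache sketch (not MicroStrategy code).
class ObjectCacheTier:
    def __init__(self, name, backing=None):
        self.name = name
        self.cache = {}
        self.backing = backing          # next tier to ask on a miss
        self.metadata_hits = 0

    def get_object(self, object_id, metadata):
        if object_id in self.cache:
            return self.cache[object_id], self.name
        if self.backing is not None:
            definition, source = self.backing.get_object(object_id, metadata)
        else:
            definition, source = metadata[object_id], "metadata"
            self.metadata_hits += 1
        self.cache[object_id] = definition  # cache at this tier on the way back
        return definition, source

metadata = {"report1": "<report definition>"}
server = ObjectCacheTier("intelligence_server")
desktop_a = ObjectCacheTier("desktop_a", backing=server)
desktop_b = ObjectCacheTier("desktop_b", backing=server)
```

A second request from the same Desktop is served locally, while a different user's Desktop is served by the shared Intelligence Server cache, so the metadata is queried only once.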
Object Caching Best Practices
Allocate enough memory in Intelligence Server for the storage of object caches.
Tuning this setting to allocate memory greater than the 100 MB default is
recommended when dealing with complex schemas with a large number of
objects.
Element Caches
After completing this topic, you will be able to:
Understand element caching and the best practices to leverage element
caching.
Element Caching Overview
An element cache is a recently used attribute element list stored in memory for
faster retrieval. When a request for attribute elements hits a cache, it runs
against Intelligence Server memory instead of issuing a SQL statement against
the database, thereby reducing response times significantly.
Between 25% and 30% of all queries issued against the database in a typical BI
implementation are element queries as the following figure shows:
Element Queries as a % of All Database Queries
This means that performance gains on element browsing will have a significant
impact on overall system performance.
Element caches tend to be especially useful when users execute reports or
documents that contain attribute element list prompts. The speed at which
users see the options for elements in lists and thus, the overall usability of the
report or dashboard is greatly enhanced by caching the elements present in the
prompt.
Note: For more details on the element browsing query flow, refer to the
MicroStrategy Administration: Configuration and Security course.
Element Caching Best Practices
Using Intelligent Cubes to Eliminate Element Database
Queries
The execution of a typical prompted report sourced directly against the data
warehouse generates two types of queries: element queries to populate the
prompt, and report queries to calculate the report results. On the other hand, a
report sourced from an Intelligent Cube retrieves both the prompt elements
and the report dataset directly from the cube, alleviating the load on the data
warehouse and improving overall system performance.
Setting the Same Element Display Size for MicroStrategy Web
and MicroStrategy Desktop
The amount of memory that element caches use on Intelligence Server (set at
the project level) and on the MicroStrategy Desktop machine can be controlled.
To avoid the risk of memory swapping and performance degradation, it is
important to optimize the use of the memory assigned to element caches. If
different element display sizes are set on Intelligence Server and MicroStrategy
Desktop, the element caches cannot be shared between those two components,
and the risk of memory swapping increases.
Instead, defining identical element display sizes on both Intelligence Server
and MicroStrategy Desktop allows element cache sharing and optimizes the
use of a limited resource, as illustrated in the following image:
Inefficient and Efficient Use of Element Caching
Assigning Enough Memory for Element Caching
You should allocate sufficient memory in the server for the storage of element
caches. Default values tend to be low. The table below shows the recommended
settings for element caching. You should consider allocating even more
memory for cases when some attributes contain large amounts of elements.
Element Caching Recommended Settings

Setting                   Default (Web)   Default (Desktop)   Recommended (Web)   Recommended (Desktop)
Element Display Size      30              1000                50                  50
Element Cache Size        120             4000                200                 200
Avg. Amount of Data       60 KB           2,000 KB            100 KB              100 KB
(at 512 bytes/element)
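The "Avg. Amount of Data" row follows directly from the cache size and the assumed 512 bytes per element, as this quick check shows:

```python
# Back-of-the-envelope check of the element caching table, using the
# ~512 bytes per element assumption stated in the text.
def element_cache_kb(cache_size_elements, bytes_per_element=512):
    return cache_size_elements * bytes_per_element / 1024

# Defaults: Web (120 elements) -> 60 KB; Desktop (4000) -> 2,000 KB.
# Recommended: 200 elements -> 100 KB for both Web and Desktop.
```

Attributes with very long element strings exceed the 512-byte assumption, which is why the text advises allocating extra memory in those cases.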
Cache Sizing Recommendations
After completing this topic, you will be able to:
Understand the rules of thumb to size caches in your system.
MicroStrategy recommends the following cache sizing to achieve high
performance on a MicroStrategy deployment:
Report Caches: Counting 10 cached reports per user at an average report
size of 50 KB, the total memory for result caches should be about 500 KB
per user per project.
Document Caches: About 100 MB for 100 documents cached in a project, at
an average document size of 1 MB.
Element Caches: The average element cache memory requirement is about
10 MB, assuming a display size of 50 elements, 100 attributes (as seen in a
typical implementation), and 512 bytes per element.
Object Caches: The object cache calculation is based on the metadata size.
Object caches should range from 20-50% of the metadata size. For example,
a metadata size of 1 GB with an object cache of 500 MB corresponds to 50%
of the metadata size.
Intelligent Cubes: Cubes would take about 25 GB for an average of 50
cubes per project at 500 MB per cube. Intelligent Cubes have query
characteristics like databases, but because the cubes reside in main
memory, they have the performance characteristics of caches. Thus, they
are not part of the caching layer but are extremely important for providing
superior performance.
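The rules of thumb above can be combined into a small worksheet calculator. The constants come from the text; treat them as starting points to adjust for your own implementation, not as limits.

```python
# Sketch of a caching size worksheet built from the rules of thumb above.
# All default constants are taken from the text; names are illustrative.
def caching_worksheet(users, cached_reports_per_user=10, avg_report_kb=50,
                      cached_documents=100, avg_document_mb=1,
                      metadata_mb=1024, object_cache_ratio=0.5,
                      cubes=50, avg_cube_mb=500):
    report_mb = users * cached_reports_per_user * avg_report_kb / 1024
    return {
        "report_cache_mb": report_mb,
        "document_cache_mb": cached_documents * avg_document_mb,
        "element_cache_mb": 10,                 # ~10 MB per the text
        "object_cache_mb": metadata_mb * object_cache_ratio,
        "cube_mb": cubes * avg_cube_mb,         # outside the caching layer
    }

# For 1,000 users with the defaults: ~488 MB of report caches, 100 MB of
# document caches, 512 MB of object caches, and 25,000 MB for cubes.
```

The cube figure is listed separately since, as noted above, Intelligent Cubes are not part of the caching layer even though they reside in memory.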
The following image summarizes the recommended cache sizes:
Recommended Cache Sizes
Use the following Caching Size Worksheet to estimate initial cache settings
based on your BI Implementation size:
Caching Size Worksheet
Introduction to Intelligent Cubes
After completing this topic, you will be able to:
Understand the Intelligent Cubes functionality and how it complements the
MicroStrategy caching strategy.
In a production environment with thousands of reports, it may not be
feasible to create caches for every single report. First, to create a cache,
each report must first run against the data warehouse. In addition, many of
the reports may return overlapping data, so many caches may contain
redundant data. To overcome these challenges, you can use Intelligent
Cubes.
Note: Intelligent Cubes are also called In-memory Cubes.
You can create Intelligent Cubes that contain large datasets calculated at a very
granular level. You can then create multiple Intelligent Cube reports that
access the data in the Intelligent Cube. These reports may aggregate data at the
cube level or any higher level, without generating new SQL and running
queries on the data warehouse. You can even embed prompts inside the
Intelligent Cube reports to enable users to further limit the result set.
For example, you can create an Employee Compensation Intelligent Cube that
holds compensation data for all employees across all regions. You can then
create multiple Intelligent Cube reports, each with a different set of attributes
and metrics on the report, or different months or regions in the result set. You
can even perform analysis beyond the data stored by creating derived metrics
and elements.
Each time one of those reports is requested by a user, it uses the Intelligent
Cube instead of running against the data warehouse, as shown in the following
image:
Intelligent Cube Example
In this example, 20 users need to see employee compensation-related data
from 20 different reports each day. Each report on average runs five minutes in
the data warehouse. If each user queries the data warehouse for the result set,
collectively, their requests consume almost two hours of processing time.
However, if you create an Intelligent Cube to store data for these reports, the
data warehouse is accessed only when the cube is first published. All 20 users
hit the Intelligent Cube. The processing time is reduced to five minutes plus the
time for retrieval of the slice of the cube's data, which in most cases takes
only seconds.
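The arithmetic of this example can be checked in a few lines (illustrative only; retrieval of the cube slice is approximated as negligible next to the five-minute warehouse queries):

```python
# Worked arithmetic for the Employee Compensation example above.
def warehouse_minutes(report_runs, avg_minutes_per_run):
    return report_runs * avg_minutes_per_run

# Without a cube: 20 reports each hit the warehouse for ~5 minutes,
# consuming almost two hours of processing time collectively.
direct = warehouse_minutes(20, 5)      # 100 minutes

# With a cube: one 5-minute publication run, then all 20 reports are
# answered from the cube in seconds.
with_cube = warehouse_minutes(1, 5)    # 5 minutes
```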
As you can see from this example, Intelligent Cubes may also help to decrease
the computational distance. Unlike report caches, Intelligent Cubes can
provide data not only to multiple users but also to multiple Intelligent Cube
reports.
The Intelligent Cube Publication Process
After completing this topic, you will be able to:
Understand the process of publishing an Intelligent Cube.
High Level Steps
Before cubes can be used to run reports, they must be published. The cube
publication process involves the following main steps:
1 SQL generation and query execution: The SQL Engine generates SQL
for the cube request, and the Query Engine sends this query to the warehouse.
2 Warehouse data fetch: A database class component in Intelligence Server
retrieves the data using a 100 MB memory buffer.
3 Data conversion and serialization: The Analytical Engine converts the
data into an Intelligent Cube structure. After the structure is completed, the
Intelligent Cube is saved into Intelligence Server memory.
The following image represents the publication process:
Intelligent Cube Publication Process
Within this process, and from a performance perspective, cube designers must
consider two critical factors: overall cube publication time (the horizontal axis
on the graph) and peak memory usage (the vertical axis on the graph). The
following sub-sections cover the trade-offs and the impact that different
settings and approaches have on these two critical factors.
Peak Memory Usage
As the previous cube publication process image shows, because of the data
processing that the Analytical Engine must perform to create the necessary
data structures to support the cube, the construction of a cube requires more
memory than what is ultimately used by a fully published cube.
This is an important factor to consider when calculating how much server
memory might be needed to create Intelligent Cubes to support a set of reports
and dashboards. Exceeding the available memory in a server during cube
publication does not cause the process to fail, but it does make the process
extremely slow, because the operating system swaps memory to disk, which
severely impacts cube publication times.
Based on internal benchmarks, as shown in the following image, the peak
memory used by Intelligence Server is expected to be two to five times the cube
size while publishing a cube:
Peak Memory to Cube Size Ratio
The variations in this ratio are due to the types of attributes, number of distinct
attribute elements, and the normalization techniques used to generate the
cube.
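A simple pre-publication check follows from the 2x-5x peak ratio reported above. The ratio you pick is an assumption; as the text notes, the spread depends on attribute types, distinct element counts, and the normalization technique used.

```python
# Sketch of a capacity check using the 2x-5x peak-memory-to-cube-size
# ratio from the internal benchmarks cited above.
def peak_publication_mb(cube_size_mb, ratio=5.0):
    """Worst-case peak Intelligence Server memory while publishing."""
    return cube_size_mb * ratio

def can_publish(cube_size_mb, free_server_mb, ratio=5.0):
    # Exceeding free memory does not fail the publication, but swapping
    # makes it extremely slow, so plan for the peak, not the final size.
    return peak_publication_mb(cube_size_mb, ratio) <= free_server_mb

# A 500 MB cube may briefly need up to 2,500 MB while publishing.
```

Planning against the worst-case ratio avoids the swapping scenario described in the previous paragraph.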
Note: The study represented by the above image was run in MicroStrategy
version 9.0.1.
Intelligent Cube Data Normalization Techniques
The process of publishing cubes, which involves querying lookup tables,
dimensional tables, and fact tables in the database to perform the necessary
joins and data aggregation, produces an intermediate table similar in
structure to the following:

Intermediate Table

A1_ID  A1_DESC   A1_...  A2_ID  A2_DESC   ...  MetricA  MetricB
001    A1111...  A1111   101    A2222...
002    A1111...  A1111   102    A2222...
003    A1111...  ...     103    A2222...
004    A1111...  ...     104    A2222...
005    ...       ...     405    A2222...

This intermediate table includes all the data for the final cube and is referred
to as the Base Cube Table. The Base Cube Table is usually much larger than the
source tables in the data warehouse because of the additional information
generated by data joins and metric aggregations. It also includes a significant
amount of redundant information, as the attribute forms are repeated many
times.
MicroStrategy uses an optimization process referred to as normalization to
reduce the size of the Base Cube Table. The final output of the normalization
process is a set of tables that includes one lookup table for each attribute and
one result table.
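What normalization accomplishes can be sketched as dictionary encoding: the denormalized Base Cube Table repeats every attribute form on every row, while normalization factors the forms out into one lookup table per attribute plus a compact result table of IDs and metric values. The attribute names below are invented for illustration.

```python
# Illustrative sketch of cube data normalization (hypothetical data).
base_cube_table = [
    {"region_id": 1, "region_desc": "Northeast", "year_id": 2010, "revenue": 120.0},
    {"region_id": 1, "region_desc": "Northeast", "year_id": 2011, "revenue": 140.0},
    {"region_id": 2, "region_desc": "Southwest", "year_id": 2010, "revenue": 90.0},
]

def normalize(rows, attribute_forms):
    """attribute_forms: {id_column: [form columns to factor out]}."""
    lookups = {col: {} for col in attribute_forms}
    result = []
    for row in rows:
        slim = dict(row)
        for id_col, form_cols in attribute_forms.items():
            # Record the forms once per distinct attribute ID...
            lookups[id_col].setdefault(row[id_col],
                                       {c: row[c] for c in form_cols})
            # ...and drop the repeated forms from the result table.
            for c in form_cols:
                slim.pop(c, None)
        result.append(slim)
    return lookups, result

lookups, result = normalize(base_cube_table, {"region_id": ["region_desc"]})
# lookups["region_id"] holds 2 entries; result rows carry only IDs and metrics.
```

The savings grow with the ratio of repeated attribute forms to distinct elements, which is why the text singles out cubes with many attributes or string forms.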
The Normalized Cube Tables are sent to the Analytical Engine, which converts
the data into the final Intelligent Cube structure. The final Intelligent Cube
structure looks similar to the normalized structure, although the underlying
data type of each table cell differs from that of the Base Cube Table and the
Normalized Cube Tables, as shown in the following table:

Normalized Cube Tables

A1_ID       A2_ID       MetricA  MetricB  ...
A1Index001  A1Index002
A1Index001  A1Index002
A1Index001  A1Index002
A1Index001  A1Index002
...         ...

Data normalization can be performed at various steps of the cube publication
process and can be controlled through VLDB settings. The list below describes
the VLDB options for Intelligent Cube normalization:
Do not normalize Intelligent Cube data: The data is not normalized
when it is retrieved from the database. Normalization occurs inside the
Analytical Engine, only after all resulting Intelligent Cube data has been
retrieved from the database. This type of normalization is not
recommended.
Normalize Intelligent Cube data in Intelligence Server: The data is
normalized by the database class component of Intelligence Server while
fetching data from the database. This type of normalization has
demonstrated superior performance with regard to memory consumption in
the following situations:
When the peak memory usage for cube publication is expected to be low
When publishing cubes with a large number of attributes
When publishing cubes with attributes that have string IDs

Note: This is the default setting.
Normalize Intelligent Cube data in the database: The data is normalized
on the database side. The resulting cube data is stored in a temporary table,
and the data is then normalized using multiple SELECT statements. This
normalization may cause higher memory consumption at publication
if attribute strings or VARCHAR data types are used. This option is
recommended when publishing cubes with attributes that have multiple
forms.
Note: For data warehouse databases that are not optimized for large data
inserts (such as Netezza), publishing a cube with SQL normalization may
take longer.
Normalize Intelligent Cube data in the database using relationship
tables: The data is normalized on the database side. This approach can
provide improved performance in scenarios where the cube data includes a
large ratio of repeating data, dimensions include a large number of
attributes, and attribute lookup tables are much smaller than the fact tables.
Direct loading of dimensional data and filtered fact data: There is no
need to normalize the data. This option generates optimized SQL that
retrieves data directly from the data warehouse and transforms it in the
following SQL passes:
Attribute passes: One pass for each attribute
Dimension passes: One pass for each dimension
Metric passes: One pass for all metrics in the Intelligent Cube
Note: While this option may provide faster cube publication than Intelligence
Server normalization (the default setting), peak memory consumption during
cube publication will be higher.
To define the normalization method to be used during cube publication:
1 In MicroStrategy Desktop, edit an Intelligent Cube.
2 On the Data menu, select VLDB Properties.
3 In the VLDB Properties window, expand Query Optimizations, and select
Data population for Intelligent Cubes.
4 Clear the Use default inherited value - [Default Settings] check box to
select any of the non-default options.
5 Click the normalization method you want the cube to use during
publication.
6 Click Save and Close to save your settings.
VLDB Property for Cube Normalization Methods
Because every environment is unique, there are key factors to consider when
determining which normalization method is most appropriate for a given
implementation. The list below describes these factors:
Database and MicroStrategy Hardware: Depending on the normalization
method used, the normalization process runs on either the database
hardware or the MicroStrategy hardware. Therefore, the relative power of
the database server and Intelligence Server can affect the outcome of the
comparison.
Network Speed: Database normalization retrieves only the normalized
data, whereas Intelligence Server normalization retrieves the entire Base
Cube Table, which is usually larger than the normalized data. On a slower
network, database normalization is expected to perform better.
Database Insert Performance: Database normalization requires insert
operations, which can hurt cube normalization performance on databases
with slow insert performance.
Outer Joins: Database normalization may require outer joins to be
performed on the database side, which can require a large amount of
temporary table space. This creates a scalability limitation because the
size of the outer join can be unpredictable.
Intermediate Table Type VLDB Property
The Intermediate Table Type VLDB property enables you to choose between
creating derived tables or true temporary tables when intermediate tables are
created during cube publication. True temporary tables implement each pass
of multi-pass SQL in a separate table. They are a feasible option in most
cases and are the default setting. Derived tables, instead of issuing multiple
tables for multi-pass SQL, generate a single, large SQL pass. Because there are
no CREATE and DROP TABLE statements, the queries may run faster.
However, because multiple cross joins can cause temporary space issues,
derived tables may not be a feasible option in all cases.
To define the intermediate table type:
1 In Desktop, edit an Intelligent Cube.
2 On the Data menu, select VLDB Properties.
3 In the VLDB Properties window, expand Tables, and select Intermediate
Table Type.
4 Clear the Use default inherited value - [DBMS level] check box and select
any of the non-default options.
5 Click Save and Close to save your settings.
The following image shows the Intermediate Table Type VLDB property:
Intermediate Table Type VLDB property
When to Use Intelligent Cubes?
After completing this topic, you will be able to:
Identify the usage scenarios that represent best candidates to take advantage of
Intelligent Cubes.
As with report caching, not all reports and dashboards in a project will be good
candidates for Intelligent Cubes. The decision of when to create Intelligent
Cubes to optimize dashboard and report execution performance requires some
analysis. The next sections in this document go through a simple framework
that will help project designers with this decision.
Before designing cubes, you should look at the system usage to analyze the
distribution of reports by frequency of usage and by average user wait time.
This analysis generally provides a good indication of which reports will benefit
from cube creation. The following graph illustrates the distribution of reports
in a system:
Distribution of Reports in a Project
On the lower part of the graph, represented by good performance, are the
reports that already execute within a satisfactory response time. These reports
are not good candidates for Intelligent Cubes because it is unlikely that their
performance can be significantly improved.
Similarly, reports that are only infrequently used (located at the far left of the
graph) are not good candidates for Intelligent Cubes, because it is unnecessary
to spend memory resources creating cubes to support reports that will not be
used frequently.
The far right of the graph displays the very frequently executed reports, which
are not good candidates for Intelligent Cubes either. For those reports, you
should use caching whenever possible, as it generally offers better
performance.
Lastly, represented in dark grey on the graph is a group of frequently run
reports with increasing levels of user wait time. These reports are the good
candidates for Intelligent Cubes. They fall into three categories, each of which
can be addressed by an individual design strategy for building Intelligent
Cubes:
Highly used reports
Multiple overlapping reports
Ad-hoc analysis reports
Using Cubes for Highly Used Prompted Reports
Highly used prompted reports with high average wait times are great
candidates for Intelligent Cube optimization. In a sense, those are the most
expensive reports in a BI application and tend to show a high level of variability
that is introduced predominantly by the use of prompts and security filters.
Caching will not be effective for this class of reports because of the above
combination of prompts and security filters. As a result, most reports in this
category will submit many highly similar database queries, each incurring the
average wait time, causing significant database workload.
For this category of reports, the recommendation is to create one Intelligent
Cube specifically designed to support each highly used prompted report. The
following image shows the use of cubes for frequently used prompted reports:
Using Cubes for Highly Used Prompted Reports
This is done by creating one cube containing data for all prompt answers and
security filter combinations for any of these reports. As a result, the database
only incurs a one-time database query. After that, all report requests are
answered by the Intelligent Cube with a greatly improved overall response
time. This strategy should be effective when applied to the top reports by
highest total wait time.
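The "top reports by highest total wait time" can be identified by weighting each report's execution frequency by its average wait. The sketch below is illustrative only (not a MicroStrategy API); the report names and numbers are hypothetical:

```python
# Illustrative sketch: ranking reports by total user wait time to shortlist
# Intelligent Cube candidates. Names and numbers below are hypothetical.
reports = [
    # (name, executions per week, average wait time in seconds)
    ("Regional P&L", 400, 45),
    ("Daily Sales", 2000, 2),     # already fast -> poor cube candidate
    ("Inventory Aging", 5, 120),  # rarely run -> poor cube candidate
    ("Customer Churn", 250, 60),
]

# Total wait time = frequency x average wait; reports at the top of this
# ranking are the ones worth moving onto an Intelligent Cube.
ranked = sorted(reports, key=lambda r: r[1] * r[2], reverse=True)
for name, runs, avg_wait in ranked:
    print(f"{name}: {runs * avg_wait} s of total weekly wait")
```

In practice, the execution counts and wait times would come from Enterprise Manager statistics rather than a hard-coded list.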
Using Cubes for Highly Used Overlapping Reports
This set of reports is usually characterized by the fact that most reports are
really variations of an original report design. This is a common scenario
whereby users create personalized versions of IT supplied reports.
In one case of a financial customer, over 1300 user-created reports could be
traced to a small originating core set of only 12 reports. Because it tends to use
up valuable memory with almost identical data structures, caching is very
ineffective for this list of independent reports. Without caching, these reports
cause a heavy workload on the database, which spends cycles executing many
highly similar queries each with the average wait time.
For this category of reports, the recommendation is to create an Intelligent
Cube containing all the common attributes and metrics for each subset of
overlapping reports. The following image shows the use of cubes for highly
used overlapping reports:
Using Cubes for Highly Used Overlapping Reports
With this solution, the system incurs the one-time cost on the database of
generating the Intelligent Cube but subsequently provides a highly improved
average execution time for each one of the reports supported by this new cube.
Cube Advisor can be a very useful tool in identifying sets of overlapping reports
among a potential list of thousands of reports. It uses Enterprise Manager
statistics to tally the total time spent in the database for every report, and it
determines the attributes, metrics, and filtering used in each to uncover
overlaps between reports.
Note: For more information on Cube Advisor, refer to the MicroStrategy
Administration: Configuration and Security course.
Using Cubes for Frequently Used Ad-hoc Analysis Reports
The ad-hoc analysis reports are generally created by users on-the-fly, to answer
specific customized queries. Some customers report that over half of all their
reports are of this nature. Commonly, most of these ad-hoc reports fall into a
small set of subject areas and share common attributes and metrics. Because
caching is not efficient for this class of reports, and because such queries
cannot be predicted, ad-hoc queries create a heavy database workload.
For this category of reports, the recommendation is to create one cube per
subject area with the goal of capturing as many ad-hoc queries as possible as
shown in the following image:
Using Cubes for Frequently Used Ad-Hoc Analysis Reports
Note: Due to the nature of ad-hoc queries, it is almost impossible to
completely redirect all of the database workload to Intelligent Cubes.
Cube Sizing and Memory Usage Considerations
After completing this topic, you will be able to:
Understand the memory requirements for Intelligent Cubes and the tuning
techniques that must be applied when designing cubes and reports.
Cube Loading and Swapping
As with caches, Intelligent Cubes are swapped back and forth between memory
and disk depending on memory availability. If Intelligence Server memory is
insufficient to hold all the requested cubes, the least recently used cubes are
unloaded from memory to make space to load a new cube. This means that,
depending on the size of the cube, if a user query that hits a cube requires
loading it from disk, the user wait time could be much longer than expected.
As a general rule, for Intelligent Cube performance optimization, it is
important not to design more cubes than can fit into Intelligence Server
memory at any one time.
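The least-recently-used swapping described above can be sketched as a small eviction loop. This is illustrative only (Intelligence Server's actual memory governing is more involved); the cube names and sizes are hypothetical:

```python
# Minimal sketch of least-recently-used (LRU) cube swapping, the policy
# described in the text for Intelligence Server memory. Illustrative only.
from collections import OrderedDict

class CubeMemory:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.loaded = OrderedDict()  # cube name -> size in GB

    def hit(self, name, size_gb):
        """Load a cube (or touch a loaded one), evicting LRU cubes as needed."""
        if name in self.loaded:
            self.loaded.move_to_end(name)  # mark as most recently used
            return []
        evicted = []
        # Unload least recently used cubes until the new cube fits.
        while self.loaded and sum(self.loaded.values()) + size_gb > self.capacity_gb:
            victim, _ = self.loaded.popitem(last=False)
            evicted.append(victim)  # in reality: swapped out to disk
        self.loaded[name] = size_gb
        return evicted

mem = CubeMemory(capacity_gb=10)
mem.hit("Sales", 4)
mem.hit("Finance", 4)
evicted = mem.hit("HR", 5)  # needs space: "Sales" (least recently used) goes
print(evicted)              # ['Sales']
```

The example makes the cost visible: a query that hits "Sales" next would first pay the disk-load penalty the text warns about.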
Cube Size Constraints
Regardless of whether publishing large amounts of data in a single
Intelligent Cube is the right strategy, the absolute amount of data that
can be published in a single cube is constrained by the following three factors:
System resources: Two main factors must be considered:
Available memory on Intelligence Server to publish the cube
Note: Peak memory usage must be considered rather than the final cube size.
Available memory on the database to handle the temporary tables
created when database normalization is used
Total cube publication time: The limiting factor in determining whether a
cube can be published during the system's batch window. Cube
publication involves SQL generation and execution against a database.
Making the cube definition as optimal as possible will pay off in terms of
shortening the total cube publication time.
Maximum number of result set rows allowed: MicroStrategy only allows
2 billion rows to be published in an Intelligent Cube. This limit applies
even on 64-bit platforms.
Cube Size and System Scalability
As the previous section explains, several system constraints exist that limit the
maximum cube size that could be generated in a given MicroStrategy
deployment. It is important to understand how cube size relates to overall
system scalability and throughput.
In MicroStrategy's internal benchmarks, an 80 GB cube was published on a
server with 144 GB of total memory. This cube sustained response times of less
than two seconds for concurrency levels of up to 200 users. After the 200 user
mark, response time tended to degrade.
A second key observation was that smaller cube sizes sustain larger levels of
concurrency without significant performance degradation. For example, as per
the benchmarks, you can have up to 600 concurrent users for a 40 GB cube
while having up to 1800 concurrent users for a 10 GB cube.
The key learning is that while single-user response times tend to remain the
same as cube sizes increase, user and traffic scalability tends to diminish, as
shown in the following image:
Response Time Versus Throughput
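The benchmark points quoted above (80 GB supporting about 200 concurrent users, 40 GB about 600, and 10 GB about 1800) can be used for a rough interpolation. This is purely illustrative, not a sizing formula; real capacity depends on hardware, report mix, and data shape:

```python
# Rough interpolation over the benchmark points quoted in the text.
# Illustrative only; not a capacity-planning formula.
BENCHMARKS = [(10, 1800), (40, 600), (80, 200)]  # (cube size GB, users)

def estimated_concurrency(size_gb):
    """Linearly interpolate supported concurrency between benchmark points."""
    if size_gb <= BENCHMARKS[0][0]:
        return BENCHMARKS[0][1]
    for (s0, u0), (s1, u1) in zip(BENCHMARKS, BENCHMARKS[1:]):
        if size_gb <= s1:
            frac = (size_gb - s0) / (s1 - s0)
            return round(u0 + frac * (u1 - u0))
    return BENCHMARKS[-1][1]  # beyond 80 GB, expect further degradation

print(estimated_concurrency(25))  # midway between 10 GB and 40 GB -> 1200
```

The downward trend is the point: doubling cube size roughly divides the sustainable concurrency, even though single-user response times stay flat.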
The general recommendation is to design Intelligent Cubes smaller than 5
GB. Staying within this threshold should provide optimal performance in
terms of report response times and overall user scalability. While larger
cubes are feasible, verify that they can still support your user
concurrency requirements. Alternatively, look for opportunities to reduce
the overall cube size. The following table displays throughput
considerations for Intelligent Cubes, depending on the cube size:

Cube Size Recommendations

Cube Size    Throughput Level
< 5 GB       Optimum (Recommended)
5 - 15 GB    Acceptable
> 15 GB      Sub-optimal (Not Recommended)

Factors Affecting Intelligent Cube Size

Attribute Cardinality: Attributes that have a large cardinality (number of
unique attribute elements) can have a much larger impact on the memory
size of a cube than attributes that have only a few unique elements. In
addition, the number of attribute forms returned for each attribute also
impacts memory size, because each additional form requires additional data
to be stored in the cube.

Metrics: Each metric added to a cube also represents an additional cost in
terms of memory usage. Smart metrics tend to be especially costly in terms
of cube size because when a metric is defined as a smart metric, all
intermediate values for the metric calculation must be saved in the cube to
fulfill run-time aggregations. The Analytical Engine needs to store the
component metrics for dynamic aggregation and subtotals at different
levels, and each of the intermediate values requires the same space as a
metric value.

From a cube size perspective, the impact of a smart metric is roughly
equivalent to the impact of several simple metrics. The exact contribution
to cube size depends on how many intermediate values are required for the
formula of the smart metric. Given this impact, avoid creating complex
smart metrics when they will be used in combination with Intelligent
Cubes.

Filters: Filters can be used to restrict the amount of data returned for an
Intelligent Cube and therefore reduce its overall size.
Cell Level Formatting: Cell level formatting is not available for Intelligent
Cubes. If a report with threshold formatting is converted to an Intelligent
Cube, the formatting is ignored during publication and the cube size
remains the same.
However, if MDX cube reports contain cell level formatting, this
information is stored when the report is converted into an Intelligent Cube
and leads to an increase in cube size.
Data Types: The data types used to represent attributes and metrics affect
the memory requirements for these objects, which depend on how the data
types are represented in the database from which they are retrieved. The
following table provides the memory requirements based on the data types
present in the cube:

Data Types and Cube Sizing

Data Type                  Attribute Cost               Metric Cost
                           Result Table   Lookup Table  Simple Metric
                           Bytes/Cell     Bytes/Cell    Bytes/Cell
NUMBER                     8              5             5
FLOAT (32)                 8              9             9
DATE                       8              45            45
TIMESTAMP                  8              45            45
TIMESTAMP WITH TIMEZONE    8              45            45
CHAR (N)                   8              4N+1          4N+1
CHAR (N CHAR)              8              4N+1          4N+1
VARCHAR2 (N)               8              4N+1          4N+1

Note: For each cell in the attribute lookup table and each metric cell in
the result table, one byte is used to keep track of whether the data is
null; the rest are used to store the data.

Note: The VARCHAR data type consumes more memory than other data types, so
be conservative when using VARCHAR.

Internationalization: For each language supported by the Intelligent Cube,
each attribute includes an additional description field stored for the
language. As an example, an English-only cube of 338 MB can grow to more
than 1.5 GB after adding the data needed to support an additional nine
languages.
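The per-cell costs in the table above allow a back-of-the-envelope size estimate, which can then be checked against the throughput thresholds. The sketch below is illustrative only: the row and column counts are hypothetical, and a real cube adds indexing and header overhead on top of raw cell storage:

```python
# Back-of-the-envelope cube size estimate using the per-cell byte costs
# from the table above. Row counts and metric mix are hypothetical.
ROWS = 50_000_000   # result table rows (hypothetical)
N_ATTRIBUTES = 6    # 8 bytes per attribute cell in the result table
NUMBER_METRICS = 4  # NUMBER metrics: 5 bytes per cell
FLOAT_METRICS = 2   # FLOAT (32) metrics: 9 bytes per cell

bytes_per_row = N_ATTRIBUTES * 8 + NUMBER_METRICS * 5 + FLOAT_METRICS * 9
size_gb = ROWS * bytes_per_row / 1024**3

# Classify against the thresholds from the Cube Size Recommendations table.
if size_gb < 5:
    level = "Optimum (Recommended)"
elif size_gb <= 15:
    level = "Acceptable"
else:
    level = "Sub-optimal (Not Recommended)"

print(f"~{size_gb:.1f} GB -> {level}")
```

An estimate like this is useful early in cube design: it shows, for example, that adding one more wide VARCHAR attribute can push a cube from the optimum band into the sub-optimal one.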
Other Cube and Report Design Best Practices
The following list discusses some other cube and report design best practices:
Match Cube and Report Aggregation Levels: Whenever possible, try to
match the level of aggregation of cubes and reports. This practice prevents
excessive aggregation work, which saves significant time and resources in
the data retrieval process.
Avoid Aggregating on Attributes with Large Cardinality: Aggregating a
report or a cube on attributes with large cardinalities may negatively
impact report execution performance, because the Analytical Engine must
spend more time during the data aggregation process.
Avoid Filtering on the Lowest Level Attribute: When the lowest level
attribute of a report or cube contains a large number of elements, you
should avoid including it as a filtering criterion. You should also avoid
including lowest level attributes in the filter criteria when they are not
part of the template while an attribute on the same hierarchy is on the
template.
For example, consider a cube filtered on the Order attribute. The template
does not contain Order, but contains Customer Region, which is a higher
level attribute than Order, inside the Customer hierarchy. In this scenario,
performance is sub-optimal because the filter is included in the
aggregation.
Avoid Using Conditional Metrics: Conditional metrics add the filter
criteria to the metric aggregation, which prevents the aggregation from
being optimized.
Incremental Cube Refresh
After completing this topic, you will be able to:
Refresh cube data using the different options available in the incremental
refresh feature.
You can set up incremental refresh settings to update the Intelligent Cube with
only new data, instead of having to republish the entire cube. This feature can
reduce the time and system resources necessary to update the Intelligent Cube
periodically, improving overall performance.
For example, you have an Intelligent Cube that contains weekly sales data. At
the end of every week, this Intelligent Cube must be updated with the sales
data for the current week. You can set up incremental refresh settings so that
only data for the current week is added to the Intelligent Cube.
The following requirements must be met for an Intelligent Cube to qualify for
an incremental refresh:
The Intelligent Cube must be updated based on attributes only. For
example:
an Intelligent Cube requiring update for Month or Region qualifies for
an incremental refresh.
an Intelligent Cube that contains data for the top 200 stores by Revenue
does not qualify for an incremental refresh.
Incremental Refresh Methods
Depending on your requirements, you can use the following methods to define
the incremental refresh options:
Intelligent Cube republish settings: Recommended if the Intelligent Cube
must be updated along one attribute, or if updates along multiple attributes
must be made simultaneously.
Note: To define Intelligent Cube republish settings, see Intelligent Cube
Refresh Settings starting on page 79.
Incremental refresh filter: Recommended if the Intelligent Cube must be
updated along different dimensions at different times.
Note: To define an incremental refresh filter, see Incremental Refresh
Filter starting on page 80.
Incremental refresh report: Recommended if you want to use the results of
a report to update the Intelligent Cube.
Note: To define an incremental refresh report, see Incremental Refresh
Report starting on page 83.
Note: To avoid overwriting any changes made by the incremental refresh
filter or report, do not republish the Intelligent Cube by double-clicking
it or by publishing it on a schedule.
Intelligent Cube Refresh Settings
The refresh settings change the way the Intelligent Cube is updated when you
re-execute it manually or when it is published on a schedule. This approach
is recommended if the Intelligent Cube must be updated along only one
attribute, or if the Intelligent Cube must be updated along multiple
attributes simultaneously.
To be able to use this approach, the Intelligent Cube definition must include a
filter that qualifies on the same attribute that will be used to update the cube.
For example, if the cube must be updated with new data for the Store attribute,
the cube filter must qualify on Store.
To define Intelligent Cube refresh settings:
1 In Desktop, right-click the Intelligent Cube for which you want to update
data.
2 Select Edit.
3 In the Intelligent Cube Editor, on the Data menu, select Configure
Intelligent Cube.
4 In the Intelligent Cube Options window, in the Categories list, select Data
Refresh.
5 Click one of the following options:
Full Refresh: The cube SQL is re-executed, and all the data is loaded from
the data warehouse into the Intelligence Server memory. This is the default
setting.
You use this option when the cube data is outdated or when the update is
based on a metric qualification. For example, a cube contains data for the
top 200 stores by Profit. The top 200 stores list must be updated every so
often.
Dynamic Refresh: The cube filter is evaluated. If new data returns from
the data warehouse, it is added to the cube. In addition, the data that no
longer meets the filter criteria is deleted from the cube.
You use this option for cubes that have a rolling set of data. For example, a
cube must contain updated data for the past six months.
Note: If the data to be added and deleted cannot be determined, the Full
Refresh option is used as a fallback.
Update: The cube filter is evaluated. If new data returns from the data
warehouse, it is added to the cube. If the data returned already exists in the
cube, it is updated where applicable.
You use this option if your data is updated often. For example, a cube
contains daily sales data, which is updated at the end of every day.
Insert: The cube filter is evaluated. If new data returns from the data
warehouse, it is added to the cube. Data that already exists in the cube is not
changed.
You use this option if your old data does not change after it is saved to the
data warehouse.
6 Click OK.
7 Click Save and Close.
Incremental Refresh Filter
For complex update requirements, such as updating an Intelligent Cube for
different dimensions at different intervals, you can define an incremental
refresh filter or report. The data returned by a filter is compared to the data
that is already in the cube.
For example, an Intelligent Cube contains monthly sales data for 2009 and
2010 for year-on-year comparison. After the year 2011 begins, you only need to
keep the data for 2010, and the data for 2009 can be removed from the cube.
To accomplish this without republishing the entire cube, you can define one
incremental refresh filter that runs at the end of every month and adds the new
month's data to the cube. In addition, you can define a second incremental
refresh filter that deletes the previous year's data at the end of every year.
Note: The incremental refresh filter is the default option for both ROLAP
and MDX Intelligent Cubes, and is unavailable for Intelligent Cubes created
using Freeform SQL queries or Query Builder.
To define an incremental refresh filter:
1 In Desktop, right-click the Intelligent Cube for which you want to define the
incremental refresh, and select Define Incremental Refresh Report.
2 In the Incremental Refresh Options window, under Refresh type, select one
of the following options:
Update: The incremental refresh filter is evaluated. If new data is
returned, it is added to the cube, and if the data returned is already in
the cube, it is updated where applicable.
Insert: The incremental refresh filter is evaluated. If new data is
returned, it is added to the Intelligent Cube. Data that is already in the
Intelligent Cube is not changed.
Delete: The incremental refresh filter or report is evaluated. The data
that is returned is deleted from the cube. For example, if the Intelligent
Cube contains data for 2008, 2009 and 2010, and the filter or report
returns data for 2009, all the data for 2009 is deleted from the cube.
Update only: The incremental refresh filter or report is evaluated. If
the data returned is already in the Intelligent Cube, it is updated where
applicable. No new data is added to the cube.
You can change these options at any time by opening the incremental
refresh in the Report Editor, and on the Data menu, selecting Configure
incremental refresh options.
3 Click OK.
Note: The Report Editor opens with a new incremental refresh. If the
Intelligent Cube's definition included a filter, it displays in the
Report Filter pane.
4 In the Report Editor, in the Report Filter pane, edit the filter if applicable,
or create a new filter.
Note: The filter must only qualify on attributes that are present in the
Intelligent Cube.
5 To preview the data that will be updated in the Intelligent Cube, on the
View menu, select Preview Data. The data displays in grid view.
Note: If you have a security filter that prevents you from viewing some
data, the preview only displays the data that you are allowed to view.
However, when the incremental refresh is executed, all the data is updated
in the Intelligent Cube, regardless of security filters.
6 To execute the incremental refresh immediately, click Run Report.
7 Click Save and Close.
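The four refresh types (Update, Insert, Delete, Update only) can be sketched as dictionary operations. This is illustrative only, not how Intelligence Server actually stores cubes; keys stand for attribute-element tuples and values for metric data:

```python
# Sketch of the four incremental refresh semantics as dict operations.
# Illustrative only; keys are attribute tuples, values are metric values.
def refresh(cube, incoming, mode):
    cube = dict(cube)  # work on a copy
    if mode == "delete":
        for key in incoming:
            cube.pop(key, None)          # returned rows are removed
    elif mode == "insert":
        for key, value in incoming.items():
            cube.setdefault(key, value)  # existing rows left unchanged
    elif mode == "update":
        cube.update(incoming)            # new rows added, existing updated
    elif mode == "update_only":
        for key, value in incoming.items():
            if key in cube:              # no new rows are added
                cube[key] = value
    return cube

cube = {("2009",): 100, ("2010",): 200}
print(refresh(cube, {("2009",): 0}, "delete"))   # drops the 2009 row
print(refresh(cube, {("2011",): 300}, "insert")) # adds a 2011 row
print(refresh(cube, {("2010",): 250, ("2011",): 300}, "update_only"))
```

The last call mirrors the distinction the text draws: "update only" revises the existing 2010 row but never introduces the 2011 row.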
Incremental Refresh Report
You can define a report to update an Intelligent Cube. The results of the report
are compared to the data in the cube, and the cube is updated accordingly.
Note: If you are updating an Intelligent Cube based on a Freeform SQL or
Query Builder report, this is the only available option.
The following prerequisite must be met to use this approach: the report must
use all the attributes, and at least one metric, from the cube that it will
update. By default, the report template is the same as the cube's template.
Note: For metrics that are not included on the report's template, data is
not updated in the cube.
To define an incremental refresh report:
1 In Desktop, right-click the Intelligent Cube for which you want to define the
incremental refresh, and select Define Incremental Refresh Report.
2 In the Incremental Refresh Options window, under Refresh type, select one
of the following options:
Update
Insert
Delete
Update only
Note: For an explanation of these options, see To define an incremental
refresh filter: starting on page 81.
3 In the Categories list, select Advanced.
4 In the Options - Advanced pane, click Report.
5 Click OK.
Note: The Report Editor opens with a new incremental refresh report. By
default, the report's template contains all the attributes and metrics
from the Intelligent Cube.
6 In the Report Editor, edit the report according to your requirements.
7 To preview the data that will be updated in the cube, on the View menu,
select Preview Data.
The data displays in grid view.
Note: If you have a security filter that prevents you from viewing some
data, the preview only displays the data that you are allowed to view.
However, when the incremental refresh is executed, all the data is updated
in the Intelligent Cube, regardless of security filters.
8 To execute the incremental refresh immediately, click Run Report.
9 Click Save and Close to save and close the incremental refresh report.
Report Execution against Cubes
After completing this topic, you will be able to:
Understand the performance considerations when designing reports to utilize
Intelligent Cubes.
The following sections provide architectural information and performance
considerations for the two types of reports that retrieve data from Intelligent
Cubes:
View Reports: View reports are reports that are directly linked to an
Intelligent Cube.
Dynamic Sourcing Reports: Dynamic sourcing reports are regular
reports that automatically hit an available Intelligent Cube.
View Report Execution
The bullets below describe the factors that impact the execution performance
of a view report. It is important to understand them to prevent bottlenecks in
the execution process:
View Report Design: Avoid using filters on text and date attribute forms.
As explained above, before aggregating the data, the Analytical Engine
must evaluate each row in the cube, and numeric comparisons are much
faster than date and text comparisons for this purpose.
Cube Size: The bigger the cube, the longer the view report response
time, mainly because filtering and aggregation must be performed against
larger numbers of rows of data.
Dynamic Sourcing Execution
Dynamic sourcing enables reports to automatically hit an available Intelligent
Cube if it contains all of the information required for the report.
When dynamic sourcing is enabled, the engine must ensure that the data of a
published cube can satisfy the report request and that by running against that
cube it can obtain exactly the same results it would obtain if it ran against the
data warehouse. Both the Intelligent Cube and the requested report are
analyzed according to a number of criteria.
Note: For more information about the criteria used by dynamic sourcing to
match reports and Intelligent Cubes, refer to the OLAP Services Guide.
Properly designing cubes and reports can significantly help improve the
report's cube hit ratio in the context of dynamic sourcing. The following
sections provide recommendations for cube and report design tailored toward
leveraging the dynamic sourcing functionality.
Cube Design
The following two recommendations are very important when designing a cube
for dynamic sourcing:
The attribute used in the cube report filter must also be included in the cube
template.
The following types of metrics prevent the use of dynamic sourcing and
should thus be avoided:
Level metric
Conditional metric
Transformation metric
Nested-aggregation metric
Fact extension or fact degradation metric
Metrics with pass-through functions
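A simplified version of the dynamic sourcing match can be sketched as subset checks: everything the report needs must be in the cube template, and none of its metrics may be of a blocking type. This is illustrative only; the engine applies many more criteria than this (see the OLAP Services Guide), and the field names below are hypothetical:

```python
# Simplified sketch of the dynamic sourcing eligibility check.
# Illustrative only; the real engine applies many more criteria.
BLOCKING_METRIC_TYPES = {"nested_aggregation", "fact_extension",
                         "fact_degradation", "pass_through"}

def can_hit_cube(report, cube):
    """Return True if the report's needs are covered by the cube template."""
    attrs_ok = set(report["attributes"]) <= set(cube["attributes"])
    filter_ok = set(report["filter_attributes"]) <= set(cube["attributes"])
    metrics_ok = set(report["metrics"]) <= set(cube["metrics"])
    types_ok = not (set(report["metric_types"]) & BLOCKING_METRIC_TYPES)
    return attrs_ok and filter_ok and metrics_ok and types_ok

cube = {"attributes": {"Region", "City", "Customer"},
        "metrics": {"Revenue", "Cost"}}
report = {"attributes": {"City"}, "filter_attributes": {"Region"},
          "metrics": {"Revenue"}, "metric_types": {"simple"}}
print(can_hit_cube(report, cube))  # True
```

Swapping the report's metric type to one of the blocking types, or filtering on an attribute outside the cube template, makes the check fail, which mirrors the design rules listed above.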
Report Design
The following recommendations are very important when designing a report
for dynamic sourcing:
The attributes used in the report filter should be part of the Intelligent Cube
template.
If you use OR as the toggle operator in the report filter, avoid combining
it with any of the following functions:
Begins with
Ends with
Does not begin with
Does not end with
Contains
Does not contain
Like
Not like
Is Null
Is not Null
Avoid using metric qualification filters.
Do not use attributes in the report that have many-to-many relationships.
Avoid using the following types of metrics which prevent the use of dynamic
sourcing:
Nested-aggregation metrics
Fact extension or fact degradation metrics
Metrics with pass-through functions
Reports that do not contain any metrics can only hit cubes that do not
contain metrics.
If you use level metrics or conditional metrics in the report, ensure that
attributes used to define the level of the metric calculation or the metric
conditionality are part of the cube.
Note: The filter applied to conditional metrics has the same limitations as
the report filter in terms of dynamic sourcing.
View Reports vs. Dynamic Sourcing Reports
View Reports and Dynamic Sourcing Reports have some differences in terms of
execution. When designing reports, it is important to take into account these
differences, which are described below:
Metric Qualification Level for View Filters
Dynamic Sourcing: The aggregation level of metrics in a view filter is
applied at the report level, which is the level defined by the report
template.
View Report: The aggregation level of metrics in a view filter is defined
by the attributes in the cube.
For example, suppose both the dynamic sourcing report and the view report
have the following attributes in the template: Customer City and Customer.
However, the cube that satisfies both of these reports' data was created
with the following attributes in the template: Customer Region, Customer
City, and Customer.
If you create a view filter with a metric qualification in the view report,
the filter is calculated for all of the attributes in the cube, which are
Customer Region, Customer City, and Customer.
If you create the same view filter in the dynamic sourcing report, the
filter is calculated for only the attributes of that report, which are
Customer City and Customer.
Report Filter
Dynamic Sourcing: The criteria in the report filter display in the
WHERE clause of the SQL that executes against the cube.
View Report: View reports do not contain a report filter.
View Filter
Dynamic Sourcing: The criteria in the view filter are calculated during
Analytical Engine processing.
View Report: The criteria in a view filter display in the WHERE clause
that executes against the cube.
Lesson Summary
In this lesson, you learned:
Computational distance offers a useful framework from a performance
optimization perspective because it tells you that to improve the
performance of a report or dashboard, you must focus on reducing its
overall computational distance.
By dramatically increasing the amounts of addressable RAM in a server,
64-bit technology is the ultimate high performance enabler.
Caching is the retention of a recently used element, object, or report results
for the purpose of improving query response time in future requests.
In MicroStrategy, there are four types of caches: elements, objects,
reports, and documents.
Report caches are results of previously executed reports that are stored in
memory or disk on the Intelligence Server machine, so that they can be
retrieved quickly.
From a performance perspective, it is important to prioritize which reports
are good candidates to be cached. Frequency of use, especially for shared
reports, is a good prioritization criterion.
The following recommendations to improve performance apply for report
caches: disabling caches for highly prompted reports or reports with
prompts on attributes, avoiding per-user cache generation, enabling XML
caching for reports, and allocating enough memory for caches.
You can enable document caching in PDF, Microsoft Excel, and XML
formats at the project and document levels.
The cube publication process involves the following main steps: SQL
generation and query execution, warehouse data fetch, and data conversion
and serialization.
Cube designers must consider two critical performance factors: overall
cube publication times and peak memory usage.
The peak memory used by Intelligence Server is expected to be two to five
times the cube size while publishing a cube.
Caching and Intelligent Cubes Deploying MicroStrategy High Performance BI 2
90 Lesson Summary 2011 MicroStrategy, Inc.
MicroStrategy uses an optimization process referred to as normalization to
optimize the size of the Base Cube Table.
The following groups of frequently run reports with increasing levels of
user wait time are good candidates for Intelligent Cubes: highly used
reports, multiple overlapping reports, and ad-hoc analysis reports.
It is important not to design more cubes than can fit into Intelligence
Server memory at any one time, to avoid swapping.
The general recommendation is to design Intelligent Cubes smaller than 5
GB. This threshold should provide optimal performance in terms of report
response times and overall user scalability.
The following tuning techniques apply to Intelligent Cubes: segmenting
cubes along prompt answers, designing cubes that cover smaller groups of
reports, matching the level of aggregation of cubes and reports, avoiding
aggregating on attributes with big cardinality, avoiding filtering on the
lowest level attribute, and avoiding using conditional metrics.
You can set up incremental refresh settings to update the Intelligent Cube
with only new data, instead of having to republish the entire cube. This
feature can reduce the time and system resources necessary to update the
Intelligent Cube periodically, improving overall performance.
View reports and dynamic sourcing are two different methods used by
reports to utilize Intelligent Cubes. It is important to understand how each
method impacts performance when designing reports.
3
DATA TRANSFER
Lesson Description
In this lesson, you will learn about the instances of data transfer and their
impact on BI system performance. Next, you will be introduced to a few key
computer networking concepts. Lastly, you will learn about performance-related
recommendations for working with Distribution Services.
Lesson Objectives
After completing this lesson, you will be able to:
Understand the different instances of data transfer and their impact on BI
system performance, describe key network concepts and network performance
recommendations, and apply best practice techniques when working with
Distribution Services.
After completing the topics in this lesson, you will be able to:
Understand the different instances of data transfer and their impact on BI
system performance. (Page 93)
Describe key computer network-related concepts. (Page 96)
Understand the main recommendations to improve network performance.
(Page 97)
Apply high performance best practice techniques when working with
Distribution Services. (Page 103)
Introduction to Network Performance
After completing this topic, you will be able to:
Understand the different instances of data transfer and their impact on BI
system performance.
Network in a Typical Business Implementation
Data transfers over the network are a very important component of a BI
system. Poorly tuned network performance in any one of these transfers will
translate into poor performance from a job execution perspective. The
following image shows the configuration of the Technology Performance
Benchmarking Labs and represents a typical BI infrastructure layout:
Typical BI Layout
In the image, the arrows between each one of the blocks represent data
transfers over the network.
The data transfer occurs between the following components:

Data Transfer Components
Fact table, lookup table, and index data between the Oracle data warehouse and
the NetApp Storage Area Network (SAN)
Result datasets, cube data, and attribute element data between the Oracle data
warehouse and Intelligence Server
Schema and report object definitions between the metadata server and
Intelligence Server
XML, binary report, and dashboard results data between Intelligence Server
and the Web server
HTML, JavaScript, Flash, images, and other static content between the Web
server and the client

Case Example: Network Impact on Cube Publication Times
Data fetching is essentially the transfer of results data from the data warehouse
to Intelligence Server for further processing by the Analytical Engine. One of
the cases handled by MicroStrategy's testing labs illustrates well the kind of
impact that poor network performance can have on the performance of a
MicroStrategy rollout. When profiling cube publication times for cubes of
different sizes, the team noticed that the data fetch operation was taking an
unexpectedly long time: more than 70% of total cube publication time.
The following image illustrates the cube publication process.
Intelligent Cube Publication Process
The analysis concluded that the latency was due to network issues between the
Oracle data warehouse and Intelligence Server. The routing within the
benchmark labs' network was accidentally configured in a way that caused
unnecessary network latency and was affecting cube publication performance.
The following table displays results before and after the network tuning:
Cube Characteristics

Cube size (MB)       20,335
Number of rows       205 million
Number of objects    13 attributes and 9 metrics

Performance Results

Time (hh:mm:ss)       Before the Network Fix   After the Network Fix
Total Time            02:37:43                 01:22:08
SQL Execution Time    00:24:00                 00:20:02
Data Fetch Time       01:52:06                 00:39:04
AE Processing Time    00:21:37                 00:23:02
Key Network Concepts
After completing this topic, you will be able to:
Describe key computer network-related concepts.
Network Terminology
Before discussing recommendations from a network performance perspective,
it is important to be familiar with the following key computer networking
concepts:
Bandwidth: The amount of data that passes through a network
connection over time. It is also known as throughput and is typically
expressed in bits per second (bps). The greater the bandwidth, the
more likely it is that better performance will follow.
Latency: A synonym for delay; the time it takes for a packet
of data to get from one designated point to another. It is usually measured
by sending a packet that is returned to the sender.
Excessive latency creates bottlenecks that prevent data from filling
the network pipe, thus decreasing effective bandwidth. The impact
of latency on network bandwidth can be temporary (lasting a few
seconds) or persistent (constant), depending on the source of the
delays.
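One way to see why latency caps effective bandwidth is the standard window-size/round-trip-time relationship from TCP. This is a general networking rule of thumb, not a formula from the course; the numbers below are assumptions for illustration:

```python
def effective_throughput_mbps(link_mbps, window_bytes, rtt_ms):
    """Effective TCP throughput is capped both by the link speed and by
    how much unacknowledged data (the window) fits into one round trip."""
    window_mbits = window_bytes * 8 / 1_000_000
    rtt_s = rtt_ms / 1000
    return min(link_mbps, window_mbits / rtt_s)

# With a 64 KB window, raising RTT from 1 ms to 50 ms starves a 100 Mbps link.
print(effective_throughput_mbps(100, 65536, 1))   # link-limited
print(effective_throughput_mbps(100, 65536, 50))  # latency-limited
```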
Network segments: The portion of a computer network that is
separated from the rest of the network by a device such as a repeater, hub,
bridge, switch, or router. Each segment can contain one or more computers
organized as a network, communicating using the same physical layer and
the same contiguous range of IP addresses. Network segmentation is
used mainly for two reasons: performance and security.
Network Recommendations for High Performance
After completing this topic, you will be able to:
Understand the main recommendations to improve network performance.
Network Recommendations
The main recommendations to improve network performance are as follows:
Place all server components in the same segment
Consider bandwidth and latency requirements
Use HTTP compression
Set up Web proxy server
The following sections discuss each of these recommendations in detail.
Place All Server Components in the Same Segment
Beyond network traffic between the Web server and the client browser,
the most important network data transfers in a MicroStrategy system are the
traffic between Intelligence Server and the following components: the data
warehouse, other Intelligence Server nodes, MicroStrategy Web, and the
metadata server.
Any degradation in network bandwidth or latency will have a negative impact
on the overall system performance. To minimize this possibility and to avoid
the latency introduced by router or firewall data packet processing, it is
recommended to keep all servers (Intelligence Server, Web server, data
warehouse server, and metadata server) within the same network segment.
Consider Bandwidth and Latency Requirements
Data transfers from the Web server to the client are critical to performance.
From that perspective, it is important to ensure that good network bandwidth
and relatively low latency exist between the two.
Given the significant amounts of data that need to be moved back and forth
between the server components, and how frequently they need to be moved to
process requests, a MicroStrategy deployment tends to be extremely sensitive
to network latency, often much more than to bandwidth. This is because even
a few additional milliseconds of network latency per packet can add up to
multiple seconds of performance degradation for a given report or dashboard
request.
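The aggregation effect is plain arithmetic. A minimal sketch with assumed (not measured) numbers:

```python
def added_delay_seconds(extra_latency_ms_per_round_trip, round_trips):
    """Total extra wait caused by a small per-round-trip latency increase."""
    return extra_latency_ms_per_round_trip * round_trips / 1000

# If rendering a dashboard takes 400 sequential round trips, an extra
# 5 ms of latency per round trip adds 2 seconds to the response time.
print(added_delay_seconds(5, 400))
```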
MicroStrategy Labs conducted tests to measure throughput and response time
sensitivity across different bandwidth settings at the network configuration
between Web servers and client browsers. The following observations were
made:
Performance will be unacceptable using a standard modem connection of
56 Kbps
Performance improves dramatically between 56 Kbps and 3 Mbps
Beyond the 3 Mbps point, increasing the bandwidth still improves performance,
although not as dramatically
For bandwidths greater than 25 Mbps, the performance gains observed were
not enough to justify an investment in such a configuration
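A simple transfer-time model reproduces the shape of these observations: bandwidth only shrinks the serialization portion of the response time, so gains flatten once fixed overhead dominates. The payload and overhead figures below are assumptions, not the lab's numbers:

```python
def transfer_time_s(payload_mbits, bandwidth_mbps, overhead_s):
    """Transfer time = time to push the payload through the link, plus
    latency/processing overhead that more bandwidth cannot reduce."""
    return payload_mbits / bandwidth_mbps + overhead_s

# Assume a 60-megabit page and 15 s of bandwidth-independent overhead.
for mbps in (0.056, 3, 25):
    print(f"{mbps:>6} Mbps -> {transfer_time_s(60, mbps, 15):8.1f} s")
```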
The following table and chart present the data collected from these tests:

Data Collected for Test

Bandwidth (Mbps)   0.056    0.128   0.512   1      3    6    9    25     No Limit
Time (sec)         1087.5   426.4   119.2   62.5   30   23   21   16.9   17.8

Use HTTP Compression
HTTP compression is a capability that can be built into Web servers and
Web clients to maximize the available bandwidth, providing faster
transmission speeds between them. In a MicroStrategy environment, HTTP
compression can help to optimize performance when there is limited network
bandwidth and high latency between the Web server and Web client computers.
The HTTP compression method compresses all Web files before they are
transmitted over the network from the Web server to the client. Compliant
client browsers announce the compression methods they support to the server
so that they can download the correct format. If the browser does not support
compression, it downloads uncompressed data. The most common compression
schemes include gzip and deflate.
The following procedures provide steps on how to enable HTTP compression
for Microsoft Internet Information Services (IIS) and for Apache Tomcat.
To enable HTTP compression for IIS:

HTTP compression is not supported by Windows XP.
1 In the Windows Web server, on the Start menu, point to Programs and
select Control Panel.
2 In the Control Panel window, double-click Administrative Tools.
3 In the Administrative Tools window, double-click Internet Information
Services.
4 In the Internet Information Services window, right-click the Computer
icon, and select Properties.
5 Click the Internet Information Services tab.
6 Under Master Properties, select WWW Service, and click Edit.
7 In the WWW Service Master Properties window, click the Service tab.
8 Select Compress static files to compress static files for transmission to
compression-enabled clients.
Selecting this option compresses and caches files with the extensions
.htm, .html, and .txt.
9 In the Temporary folder text box, type the path to a local directory where
the compressed files will be kept, or click Browse to locate the directory.
The directory must be on a drive that is local to the MicroStrategy
Web server and it must be on a New Technology File System (NTFS)
partition. For more information about NTFS, refer to your Microsoft
Windows documentation.
10 Select Compress application files to compress the dynamic output from
applications for transmission to compression-enabled clients.
Selecting this option compresses the dynamic output from
applications with the file extensions .dll, .asp, and .exe. Compressing
the output of application files is not recommended unless many
clients will access the server over a slow connection, such as a
modem. Also, before enabling this option, ensure that the MicroStrategy
Web server has the processor power to handle re-compressing dynamic
files each time they are requested by a client.
11 Click Limited to and type the size in the text box to limit the maximum
temporary folder size.
By default, the maximum temporary folder size is set to Unlimited. This
setting works for MicroStrategy Web servers with enough hard-disk
storage for both the uncompressed and the compressed versions of static
files stored in the temporary folder. However, if available disk
space is a concern, use the Limited to option. When the maximum
temporary folder size configured by this setting is reached, IIS deletes
256 files to make room for new compressed files to be cached to the
temporary folder. Configuring a temporary folder size that is too small
can impact performance because IIS needs to re-compress and re-cache
static files, resulting in more CPU usage and hard drive access time.
To enable HTTP compression for Apache Tomcat:
1 Open the server.xml file located at TOMCAT_HOME/conf/server.xml.
2 Locate the HTTP connector (for example, HTTP or HTTPS) that will have
compression enabled and add the compression="on" attribute to the
Connector element. The example below enables compression for the non-SSL
HTTP connector on port 8080:
<Connector port="8080" maxHttpHeaderSize="8192"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="7552" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true"
    compression="on" />
3 Save the changes and restart Tomcat.
For more information about the Tomcat HTTP Connector and available
options regarding compression, refer to your Apache Tomcat
Configuration documentation.
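Regardless of the Web server used, the payoff of compression on repetitive markup, such as rendered report HTML, can be previewed with Python's standard gzip module. The sample markup below is made up; real ratios depend on the content:

```python
import gzip

# Repetitive tabular markup, like rendered report output, compresses well.
html = ("<tr><td class='metric'>12345</td>"
        "<td class='metric'>67890</td></tr>\n" * 500).encode()
compressed = gzip.compress(html)

ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
```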
Setting Up Web Proxy Server
When limited network bandwidth and high latency exist between the
MicroStrategy Web server and the client browser or when the browser cache is
not enabled in customer environments, it is recommended to set up a proxy
server to improve the performance of accessing resources. Proxy servers
generally have their own caching capabilities, including the ability to cache all
the static content (e.g. images, scripts, files, and so forth), enabling it to avoid
unnecessary and expensive round trips to the Web server.
MicroStrategy Labs conducted tests to measure the performance gains of
adding a proxy server to the architecture. Adding a proxy server to the
architecture improved the response times by more than 30%. The following
table presents the test results. It displays the time that it took to navigate
through the project objects and to execute multiple dashboards, with and
without a proxy server:
Web Proxy Test Results

Actions                                        No Proxy    With Proxy   Difference
Navigation Actions (login, browsing, logout)   3.27 sec    2.15 sec     34%
Multi Dashboard Executions                     50.71 sec   32.2 sec     36%
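As a consistency check, the Difference column follows directly from the two time columns (truncating to a whole percent, which matches the published figures):

```python
def improvement_pct(no_proxy_s, with_proxy_s):
    """Relative response-time improvement from adding a proxy,
    truncated to a whole percent."""
    return int(100 * (no_proxy_s - with_proxy_s) / no_proxy_s)

print(improvement_pct(3.27, 2.15))   # navigation actions
print(improvement_pct(50.71, 32.2))  # multi-dashboard executions
```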
Distribution Services Performance
After completing this topic, you will be able to:
Apply high performance best practice techniques when working with
Distribution Services.
Just like networks, Distribution Services is technically another means of data
transfer between the MicroStrategy server infrastructure and a client. In this
section, you will learn about the different factors that most influence the
delivery throughput and response times of Distribution Services.
Data transfer in the case of Distribution Services relates to the transfer
of report and document results data.
Number of Recipients
The total number of recipients of Distribution Services messages is calculated
by multiplying the number of subscribers by the number of subscriptions. For
example:
1 subscription with 10 subscribers = 10 recipients
10 subscriptions with 1 subscriber each = 10 recipients
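A small sketch of the recipient arithmetic, treating each subscription as a list entry of its subscriber count:

```python
def total_recipients(subscriber_counts):
    """Total Distribution Services recipients across subscriptions:
    each subscription contributes its own number of subscribers."""
    return sum(subscriber_counts)

# Both layouts from the example above produce 10 recipients.
print(total_recipients([10]))       # 1 subscription with 10 subscribers
print(total_recipients([1] * 10))   # 10 subscriptions with 1 subscriber each
```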
The number of recipients for a subscription tends to have more impact on
Distribution Services performance than all other factors. This is because it
directly correlates to memory footprint on Intelligence Server. Depending on
the amount of memory available on Intelligence Server and for a given fixed
report subscription, as the number of recipients increases, the server reaches a
saturation point where memory swapping starts to occur and system
throughput and response times degrade to unacceptable levels.
Report or Document Size
The size of the report or document delivered in the message is also an
important factor to consider when looking to optimizing performance of
Distribution Services. For example, a 5,000 cell document in Microsoft Excel
format sent by email will translate into a 971 KB file.
As is the case with the execution of very large reports and documents in either
MicroStrategy Web or MicroStrategy Desktop, delivering very large reports
and documents using Distribution Services has a significant impact on
Intelligence Server performance and delivery throughput. Internal lab tests at
MicroStrategy have shown that 64-bit servers were able to handle greater
delivery sizes much better than 32-bit ones, mainly due to the greater amounts
of RAM available on 64-bit architectures.
The following table displays sizing considerations for report/document
deliveries using Distribution Services, depending on size.
Delivery Method
Distribution Services can transmit content using the following delivery
methods:
Email
File locations in the network
Printer locations in the network
MicroStrategy technology labs performed tests to confirm performance levels
for different methods. The test results confirmed that different transmission
methods have significantly different performance levels, with file being the
fastest and print being the slowest method.
Size Recommendations

Size (MB)   Performance Level
1-8         Optimum (Recommended)
8-11        Acceptable
> 11        Slow (Not recommended)
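The thresholds in the table can be read as a simple classifier. The boundary handling (which band exactly 8 MB or 11 MB falls into) is not specified in the table, so the cutoffs below are an assumption:

```python
def delivery_performance_level(size_mb):
    """Classify a Distribution Services delivery by size, following the
    sizing table: 1-8 MB optimum, 8-11 MB acceptable, > 11 MB slow."""
    if size_mb <= 8:
        return "Optimum (Recommended)"
    if size_mb <= 11:
        return "Acceptable"
    return "Slow (Not recommended)"

print(delivery_performance_level(5))    # Optimum (Recommended)
print(delivery_performance_level(10))   # Acceptable
print(delivery_performance_level(15))   # Slow (Not recommended)
```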
The following graph displays the time that elapsed during tests when delivering
a 5 MB report to 100 recipients in Microsoft Excel format, using different
delivery methods, on a server with SPECint of 40:
Performance Level of Different Delivery Methods
For testing purposes, an additional delivery method was defined as
mix. It captures the performance impact of a combination of all three
delivery types.
Delivery Format
Distribution Services can deliver content to clients in the following formats:
Excel, HTML, PDF, Flash, CSV, Plaintext, and History List.
Internal testing was performed on batches of deliveries, which contained a
range of different formats to mimic production environments sending multiple
formats. A separate format category called mix refers to deliveries spanning
multiple formats. It includes both reports and documents, formatted in a
combination of Flash, Excel, HTML, and PDF format.
Different formats have significantly different performance levels, as shown in
the following graph:
Performance Level of Different Delivery Formats
The preceding graph displays the time that elapsed when delivering a 5 MB
report to 100 recipients via email using different formats, on a server with
a SPECint rating of 40.
The delivered content can also be compressed and delivered in zip
format. Because the size of the report impacts delivery throughput,
reducing the size by compressing the output can improve the time it
takes for the data to transfer across networks to reach its destination.
Data Source
When a subscription triggers, depending on the system configuration, the
delivered data could come from the data warehouse, from cubes, or from
standard report caches. The source from which the subscription gets the data
is an important factor in the total response time for Distribution
Services deliveries. Data that comes from standard report caches is retrieved
faster than data that comes from cubes, and data that comes from cubes is
retrieved faster than data that comes from the data warehouse.
To maximize the benefits of caching for scheduled jobs execution, the following
recommendations are provided:
Enable caching for reports and documents that are delivered by
subscriptions.
When creating document subscriptions to update caches, select the Use
valid dataset caches check box to ensure available report caches are used
when the subscription triggers. If this check box is not selected, document
caches using similar datasets get invalidated and the jobs execute against
the database. Performance is negatively impacted in this case, because the
jobs are re-executed and previous caches are invalidated.
The following image shows the setting to enable:
Cache Subscription
Avoid creating caches per user. When you create caches per user, different
users will not be able to share the same cache. In this case, several
redundant caches will be created, which could overload the available
memory and cause caches to not be reused. Instead, you can use security
filters to prevent users from sharing sensitive data.
If you use connection mapping or database pass-through in your project
environment, select the create caches per database connection and
database login check boxes. These check boxes can ensure that users will
not be able to access sensitive data from the database.
In the Project Configuration Editor, in the Caching category, do not select
the Re-run History List and mobile subscriptions against the
warehouse check box. If selected, this check box will prevent History List
and Mobile subscriptions from leveraging existing caches. If necessary, you
can select this option on a per-subscription basis.
In the Project Configuration Editor, in the Caching category, do not select
the Do not create or update matching caches check box. If selected, this
check box will prevent subscriptions from updating or creating caches.
In the Project Configuration Editor, in the Caching category, the Keep
document available for manipulation for History List Subscriptions
only check box is selected by default. Keep the default selection because it
has a positive impact in terms of performance.
If this check box is selected, when the document is sent to the History
List through subscriptions, the document instance is kept in memory
with the following export options: Microsoft Excel, PDF, and HTML.
Therefore, if users perform manipulations to the document, no new
format needs to be generated because it is already in memory.
If you do not select this check box, a user can run the document sent to
the History List only in the format in which the document was first
created. No other manipulations are allowed, because the document
instance is not kept in memory in this case.
Concurrency
Correctly setting the governor for the number of concurrent scheduled jobs
is an important step in optimizing performance, because this setting defines
the concurrency rate in Intelligence Server.
By default, each project is set to allow only 400 scheduled jobs to run
concurrently. To define this setting correctly, it is important to analyze
the messages generated by the subscriptions and also how fast Intelligence
Server can process all the jobs.
The speed at which Intelligence Server processes jobs is directly related to
the number of data warehouse threads and to the concurrency with other,
non-scheduled processing jobs. For example, if a subscription generates 200
jobs and Intelligence Server has only 50 available threads, 150 jobs will be
waiting in the queue.
It is also important to take into consideration the memory usage on
Intelligence Server when processing jobs. For example, if the available memory
on Intelligence Server only allows for processing of 100 jobs at a given time,
it is recommended to set the governor to 100 or slightly lower. Because it
minimizes the possibility of memory swapping to disk, this change will result
in faster overall processing times.
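The thread and queueing arithmetic from the examples above can be sketched as follows (the numbers are the hypothetical ones from the text):

```python
def jobs_waiting(jobs_triggered, available_threads):
    """Jobs that must queue when a subscription fires more jobs than
    there are available data warehouse threads."""
    return max(0, jobs_triggered - available_threads)

# 200 jobs against 50 available threads leaves 150 jobs waiting in the queue.
print(jobs_waiting(200, 50))
```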
Alerting
Besides depending on the same performance factors that impact regular
reports and documents, alerting throughput and overall performance also
depends on the size of the alert report and whether the alert report or a
different report or document must be delivered.
Internal tests at MicroStrategy found that, on average, subscriptions triggered
by alerting will have approximately 15% less throughput than equivalent
regular scheduled subscriptions. For example, a 42 KB Excel-formatted report
sent to 300 recipients via email takes 100 seconds when triggered by an event
schedule. The same report delivered to the same number of recipients, if
triggered by alerting criteria, will take approximately 115 seconds.
Data Personalization Method
There are two methods to personalize subscription data: using prompts and
using security filters. Prompts enable users to filter the data by storing prompt
answers with each subscription. A user could have multiple subscriptions to
the same report with completely different sets of prompt answers. Security
filters are automatically applied when a scheduled job executes for the user.
Internal tests found that on average, subscriptions personalized with security
filters will present approximately 15% faster response times than
equivalent subscriptions personalized with prompts.
Clustering
In a clustered environment, scheduled jobs are distributed across Intelligence
Servers according to the load balancing specified for the different nodes. Job
division is done on a subscription basis. For example, in an environment with
two clustered Intelligence Servers, a project contains two subscriptions that
trigger at the same time: subscription A, with 100 recipients, and subscription
B, with one recipient. In this case, 100 recipients will be resolved by node 1 and
only one recipient, from subscription B, will be resolved by node 2.
For performance optimization purposes, it is recommended that you create
subscriptions with similar numbers of recipients, where feasible, to avoid
overloading a single node in a clustered environment.
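Because jobs are divided per subscription, recipient counts determine the real balance across nodes. A sketch assuming a simple round-robin assignment (the actual load-balancing policy may differ):

```python
def recipients_per_node(subscriptions, node_count):
    """Assign whole subscriptions to nodes round-robin and total the
    recipients each node must resolve (job division is per subscription)."""
    load = [0] * node_count
    for i, recipients in enumerate(subscriptions):
        load[i % node_count] += recipients
    return load

# Subscription A (100 recipients) and B (1 recipient) on a 2-node cluster:
print(recipients_per_node([100, 1], 2))     # node 1 resolves 100, node 2 only 1

# Splitting A into two 50-recipient subscriptions balances the cluster:
print(recipients_per_node([50, 50, 1], 2))
```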
Lesson Summary
In this lesson, you learned:
Slow or poorly tuned network performance in any of the data transfers
available in a MicroStrategy environment translates into poor performance
from a report or dashboard execution perspective.
Bandwidth, latency, and network segments are key computer networking
concepts that are important to understand before discussing
recommendations from a network performance perspective.
To avoid the latency introduced by router or firewall data packet
processing, it is recommended to keep all servers within the same network
segment.
Data transfers from the Web server to the client are critical to performance,
so it is important to ensure that good network bandwidth and relatively low
latency exist between the two.
HTTP compression can help in optimizing performance when there is
limited network bandwidth and latency between the Web server and the
Web client computers.
It is recommended to set up a proxy server to improve the performance of
accessing resources.
The number of recipients for a subscription tends to have more impact on
Distribution Services performance than all other factors, because it directly
correlates to the memory footprint on Intelligence Server.
The size of the report or document delivered in the message is also an
important factor to consider when looking to optimize the performance of
Distribution Services.
Different transmission methods have significantly different performance
levels, with file being the fastest and print being the slowest method.
Different formats have significantly different performance levels, with CSV
being the fastest and Flash being the slowest.
You should enable caching for reports and documents that are delivered by
subscriptions.
When creating document subscriptions to update caches, select the Use
valid dataset caches check box to ensure available report caches are used
when the subscription triggers.
The number of concurrent scheduled jobs is a governing setting that
controls the number of jobs that trigger at the same time on Intelligence
Server based on a schedule. Setting this governor correctly is an important
step in optimizing performance because it defines the concurrency rate in
Intelligence Server.
Subscriptions triggered by alerting will have approximately 15% less
throughput than equivalent regularly scheduled subscriptions.
Subscriptions personalized with security filters will present approximately
15% faster response times than equivalent subscriptions personalized with
prompts.
For performance optimization purposes, it is recommended that you create
subscriptions with similar numbers of recipients, where feasible, to avoid
overloading a single node in a clustered environment.
4
SYSTEM ARCHITECTURE AND
CONFIGURATION
Lesson Description
This lesson introduces you to important concepts regarding tuning your
MicroStrategy platform. In this lesson, you will learn about server specification
options, and the MicroStrategy recommendations to achieve optimized
performance. Next, you will learn about the most important settings to tune
Intelligence Server and their recommended values. Finally, this lesson will
discuss the main parameters for tuning a clustering system and the Web
environment.
Lesson Objectives
After completing this lesson, you will be able to:
List the components of performance, and understand the main performance
recommendations for server specification, system configuration, and the Web
environment.
After completing the topics in this lesson, you will be able to:
Understand the MicroStrategy recommendations regarding server
specifications to achieve optimized performance. (Page 115)
Describe the main settings and governors that most impact the system and
understand their recommended values when tuning the environment.
(Page 126)
Understand the main parameters for tuning the Web environment. (Page
143)
Server Specifications
After completing this topic, you will be able to:
Understand the MicroStrategy recommendations regarding server
specifications to achieve optimized performance.
Every Business Intelligence deployment depends on server infrastructure,
that is, hardware and software, to function properly. Similar to other factors
such as database schema and dashboard design, the characteristics of the
server infrastructure on which MicroStrategy is installed and configured have
a significant influence on the overall performance of the system as experienced
by users running reports and dashboards. This section covers the different
characteristics of server infrastructure from a performance perspective.
Processor
In computing, a processor is the unit that reads and executes program
instructions, which are fixed-length chunks of data (typically 32- or 64-bit
depending on the processor architecture).
The core is the part of the processor that performs the reading and execution
operations for each instruction. A multi-core processor is a processing system
composed of two or more independent cores. A dual-core processor contains
two cores, a quad-core processor contains four cores, a hexa-core processor
contains six cores, and so on.
Single-core processors can only process one instruction at a time while
multi-core processors can process multiple instructions simultaneously.
In the case of MicroStrategy, the different engines and components on
Intelligence Server do take advantage of the additional processing power
provided by multi-core processors. The graph below displays internal test
results showing where Point A (i.e. the largest system throughput at which the
system response time is stable) is reached for different processor cores on
Intelligence Server.
Degradation Curve by Number of Cores
The graph shows that Intelligence Server performance in terms of throughput
achieved increases as you add more processor cores to the system. However,
the increase in throughput is not linear; that is, doubling the number of
cores does not double the system throughput.
The table below shows the ratio at which system throughput increases, based
on the same tests that produced the previous chart.
Throughput Versus Number of Cores

  Core Increase   % Increase in Throughput (with stable response time)
  1 to 2 cores    100%
  2 to 4 cores    84%
  4 to 8 cores    63%
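The sub-linear scaling in the table can be expressed as a cumulative multiplier. The following sketch (illustrative Python, with the percentages taken from the table above) derives the overall speedup of an 8-core server over a single core:

```python
# Cumulative throughput scaling implied by the table above. Each value
# is the % increase in throughput (with stable response time) observed
# when the core count is doubled.
step_increase = {"1 to 2 cores": 100, "2 to 4 cores": 84, "4 to 8 cores": 63}

multiplier = 1.0
for step, pct in step_increase.items():
    multiplier *= 1 + pct / 100
    print(f"{step}: cumulative speedup vs. 1 core = {multiplier:.2f}x")
# An 8-core server therefore delivers roughly 6x (not 8x) the
# throughput of a single core.
```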
Architecture and Processing Speed
A processor executes a certain amount of instructions within a time frame or
cycle. The clock speed (or simply speed) of a processor is measured as the
number of cycles it can perform in a given second. A speed of one cycle per
second is called a hertz. A processor that has a frequency of 1 million cycles per
second has a speed of 1 megahertz.
As a general rule, faster clock speeds in a processor will translate into better
throughput (and thus better overall performance) because more instructions
per second will be executed. However, just as with the number of cores, an
increase in clock speed will not translate linearly into an increase in system
throughput. Many other factors, including the processor architecture, the
software itself, and the operating system on which it is running, will have a
direct impact on scalability.
In the case of MicroStrategy, the different engines and components on
Intelligence Server do take advantage of additional processing speed and modern processor
architectures. The table below demonstrates the difference in throughput for
Intelligence Server based on the same tests performed for the previous section,
but using different generations of processors:
Nehalem is the code name for the latest processor architecture from Intel. The
Nehalem processors used in the test were Intel Xeon and had a clock speed of
2.8GHz. The Pre-Nehalem processors are also Intel multi-core processors with
a clock speed of 2.66GHz. The significant increase in throughput cannot be
solely attributed to the additional processor speed, but also to changes in the
processor architecture, such as larger CPU caches and the new
Hyper-Threading technology that is specific to the Nehalem generation.
Throughput (User Queries/Min)

  # of Cores   Pre-Nehalem Processor   Nehalem Processor   % Increase
  1            211                     394                 87%
  2            397                     790                 99%
  4            794                     1,451               83%
  8            1,286                   2,361               84%
Utilization
Processor Utilization is another important parameter that must be taken into
account from a performance perspective. Regardless of the individual
processor characteristics, all processors reach a point of saturation, beyond
which the system performance degrades as more instructions are sent to the
processor by the system. This saturation point tends to be different based on
the application, so it is important to know what it is for MicroStrategy.
The graph below shows the resulting Processor (or CPU) Utilization Rates on
Intelligence Server as the report submission rate is increased.
CPU Utilization Rates
As the graph demonstrates, when Point A (i.e. the largest system throughput at
which the system response time is stable) is reached, the Processor Utilization
Rate is around 80%. Below that point, increases in submission rate translate
into equivalent increases in system throughput with a stable average response time.
Based on the above study, it is recommended to keep Intelligence Server
utilization rates at around 80% or less.
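As a practical illustration, an administrator could watch for sustained utilization above that ceiling before adding capacity. This is a minimal sketch: the sample values and the three-sample window are assumptions for illustration, not MicroStrategy settings.

```python
# Hypothetical one-minute CPU utilization samples (percent) collected
# from an Intelligence Server host; the values are illustrative only.
samples = [62, 71, 78, 84, 88, 86, 83, 79]

CEILING = 80  # recommended utilization ceiling from the study above

def sustained_overload(utilization, ceiling=CEILING, window=3):
    """Return True if utilization stays above the ceiling for
    `window` consecutive samples, suggesting the server has passed
    Point A and should be scaled up or have its load reduced."""
    streak = 0
    for value in utilization:
        streak = streak + 1 if value > ceiling else 0
        if streak >= window:
            return True
    return False

print(sustained_overload(samples))  # the 84, 88, 86 run trips the check
```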
Memory
Another very important performance factor in a MicroStrategy deployment is
the RAM available in the system. When a computer manipulates data, it places
the data in memory to be retrieved or manipulated as necessary. This
configuration results in optimum performance because computer memory is
optimized for fast access.
Disk Swapping
If the computer runs out of usable memory, it is forced to store all the
temporary data on its hard drive in what is called the page file or swap space.
When the processor is ready to use that information, the computer then has to
read it back from its hard drive and place it into memory where it can be
readily used. On average, accessing data from memory is about 10x faster than
accessing the same amount of information from disk.
MicroStrategy is an especially data-intensive application that requires
processing of potentially very large amounts of information while performing
tasks such as cube publication or large report/dashboard processing. Given the
above, it is very important from a performance perspective to have as much
memory as possible for a MicroStrategy implementation to avoid disk
swapping issues. The following are potential indicators of disk swapping:
• The average amount of RAM required by Intelligence Server is around 80%
of total available RAM
• The performance counter % Disk Time is greater than 80%
• The performance counter Current Disk Queue Length shows a large
number of waiting requests
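The three indicators above can be combined into a simple health check. The counter names, the threshold standing in for "a large number" of queued requests, and the sample readings below are illustrative assumptions, not official counter APIs:

```python
# Hypothetical counter readings from an Intelligence Server host.
counters = {
    "ram_used_pct": 86,       # RAM used by Intelligence Server (% of total)
    "disk_time_pct": 91,      # the "% Disk Time" performance counter
    "disk_queue_length": 12,  # the "Current Disk Queue Length" counter
}

def swapping_indicators(c, queue_threshold=5):
    """Return the disk-swapping indicators that fire. The queue
    threshold is an assumed stand-in for 'a large number of
    waiting requests'."""
    fired = []
    if c["ram_used_pct"] >= 80:
        fired.append("RAM usage at or above 80% of total")
    if c["disk_time_pct"] > 80:
        fired.append("% Disk Time greater than 80%")
    if c["disk_queue_length"] > queue_threshold:
        fired.append("large Current Disk Queue Length")
    return fired

print(swapping_indicators(counters))  # all three indicators fire here
```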
The image below demonstrates the concept of Disk Swapping. In this case, the
memory demands of Intelligence Server and the other applications that reside
on the server (for example, the operating system or a database) have
exceeded the amount of available RAM in the server, resulting in disk
swapping and performance degradation.
Disk Swapping = Poor Performance
64-Bit
MicroStrategy recommends that all implementations be based on 64-bit
operating systems. The 64-bit operating systems do not have the same
restrictions on user address space that 32-bit operating systems have.
Although 64-bit operating systems do have user address space ceilings, this
limit is significantly higher than the one imposed by 32-bit operating systems.
Intelligence Server Universal Edition is compiled as a 64-bit application and
runs on 64-bit operating systems. As a result, Intelligence Server Universal
Edition can take advantage of the larger user address space available on such
systems, significantly reducing the possibility of system performance
degradation due to disk swapping.
Memory Utilization
From a performance perspective, to minimize disk swapping it is recommended
to increase the amount of available RAM if server memory utilization regularly
reaches or exceeds the 80% threshold. Besides the obvious method of
physically adding additional memory chips to the server (assuming it has
remaining capacity on this front), there are a couple of alternative techniques
to consider:
• Reducing the number of memory-intensive applications running on the
Intelligence Server machine (for example, databases or other applications)
• Shutting down processes, such as services and applications, that are not
being used but are still allocated memory by the operating system
The image below shows an ideal Intelligence Server memory configuration. All
applications on the server, including MicroStrategy, are operating normally
and under the 80% threshold, hence, the need for disk swapping never arises.
Ideal Memory Utilization (<80%)
Disk Storage
Hard Drives (also simply called Disk) and Storage in general are an integral
component of any MicroStrategy deployment. Faster disk configurations will
yield faster input/output operations on the server, resulting in better
performance overall. Storage can be internal to the server (that is, the hard
drives are physically installed on the server hardware) or, more commonly for
applications that require large amounts of data such as MicroStrategy, external
in the form of a SAN (Storage Area Network) or NAS (Network-Attached
Storage).
RAID Configurations
RAID, an acronym for Redundant Array of Inexpensive Disks, is a technology
that provides increased storage reliability through redundancy, combining
multiple low-cost, less-reliable disk drive components into a logical unit where
all drives in the array are interdependent. The different schemes or
architectures are named by the word RAID followed by a number (for example,
RAID 0, RAID 1). Each scheme offers a different balance of reliability and performance.
Schemes with less reliability such as RAID 0 will generally cost much less than
highly reliable schemes such as RAID 10. RAID's various designs involve two
key design goals: increase data reliability and increase input/output (I/O)
performance. In general, RAID 4 or RAID 5 presents the best reliability-to-cost
ratio for a MicroStrategy deployment.
Disk Fragmentation in Windows Environments
When a disk is highly fragmented, the file system will have free segments in
many places, and some files may be spread over many extents. Access time for
those files will tend to become longer as the disk becomes more fragmented,
negatively impacting performance for all I/O intensive operations. From a
performance perspective, it is highly recommended to periodically defragment
the disks on Intelligence Servers running in Windows environments
(fragmentation is mainly a concern on Windows operating systems).
The two images below illustrate the concept of disk fragmentation:
Fragmented Disk = Performance Degradation
A Defragmented Disk
Disk Utilization
Similar to Processors and Memory, Disks and Storage in general also have a
utilization rate beyond which the system performance degrades or becomes
unreliable. Based on internal tests the threshold for Disk Utilization is
approximately 70% for MicroStrategy deployments.
Ideal Disk Utilization (<70%)
Operating Systems
The MicroStrategy platform must operate on top of an Operating System (OS).
Depending on how efficiently it uses the server's resources, the OS will have an
impact on overall system performance. The following chart shows the results of
internal tests comparing the performance, based on throughput, of Windows
vs. Linux-based MicroStrategy deployments.
Throughput Comparison - Linux vs. Windows
The following chart shows the results of internal tests comparing the
performance, based on response time, of Windows vs. Linux-based
MicroStrategy deployments.
Response Time Degradation - Linux vs. Windows
While both operating systems yield roughly the same levels of throughput and
similar average response times, the largest system throughput at which the
system response time is stable (Point A) is higher for Linux than for Windows.
Based on these observations, on the same 64-bit hardware, Linux yields
approximately 12% higher throughput than Windows.
Virtualization
More MicroStrategy implementations are being deployed using Virtualization
technology such as VMWare, Microsoft Hyper-V, or LPAR on AIX. From an IT
management perspective, virtualization offers great benefits in terms of
improving server utilization rates and reducing maintenance overhead.
However, from a performance perspective, because it adds an extra layer of
indirection between the application and the actual hardware, virtualization
tends to have a negative impact. As implementation architects, it is important
to understand the tradeoffs between virtualization and performance.
The following chart shows the results of internal tests conducted deploying
MicroStrategy on top of a VMWare vSphere Enterprise Plus environment.
Physical vs. Virtual Deployments

According to market surveys, close to 70% of currently deployed
virtualized environments run on VMWare products.
The level of performance degradation when working in a virtual environment
increases with the number of cores in the system. The graph shows that the
performance degradation goes from being almost insignificant with a small
number of processor cores to almost 25% for an eight-core system.
Intelligence Server Configuration
After completing this topic, you will be able to:
Describe the main settings and governors that most impact the system and
understand their recommended values when tuning the environment.
Intelligence Server is the core component of the MicroStrategy architecture.
Any bottleneck at the Intelligence Server level will most likely be amplified
throughout the system resulting in poor performance. Therefore, Intelligence
Server must be optimally configured to enhance overall system performance.
The following section provides tuning and optimization recommendations for
Intelligence Server in four distinct areas: User Management, Resource
Management (mainly Memory), Workload Management, and Clustering.
User Management
For every user that logs into the system, Intelligence Server must assign
memory resources for the different tasks that the user could request. These
include report execution, History List generation, and report or dashboard
manipulations. From a performance perspective and to ensure optimal usage
of Intelligence Server, it is important to manage its key resources carefully.
Disconnecting Idle User Sessions
Idle user sessions tend to consume resources that could otherwise be available
to active users and improve (or at least maintain) the level of performance they
experience. They do so in the following ways:
• By loading the idle user's working set into memory
• By storing report results that an idle user may never use
• By using resources for a request that was not cancelled before logging out
• By preventing other concurrent users from logging into the system
From a performance perspective, it is recommended to optimize Intelligence
Server memory resources by automatically disconnecting idle users. Two
settings can be used for that purpose:
• Web user session idle time (sec), to set a time limit before disconnecting
idle Web users
• User session idle time (sec), to set a time limit before disconnecting idle
MicroStrategy Desktop users
Enterprise Manager is a good tool for system administrators to compute the
correct average idle session timeouts that can be used for the above settings.
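One way to turn such session statistics into a timeout value is sketched below. The idle-gap figures and the 90th-percentile rule are assumptions for illustration; real data would come from Enterprise Manager:

```python
# Observed gaps (in seconds) between consecutive user actions,
# hypothetical figures standing in for Enterprise Manager data.
idle_gaps_sec = [30, 45, 60, 90, 120, 150, 180, 240, 300, 1800]

def suggested_timeout(gaps, percentile=0.9):
    """Pick an idle timeout that would not have cut off roughly
    90% of observed activity gaps, ignoring extreme outliers."""
    ordered = sorted(gaps)
    index = max(int(len(ordered) * percentile) - 1, 0)
    return ordered[index]

print(suggested_timeout(idle_gaps_sec))  # 300: the 1800 s outlier is ignored
```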
Setting Concurrency Limits
To optimize memory usage, Intelligence Server administrators can set
concurrency limits, allowing only a specified number of concurrent users to
access the system at any given time. Two methods exist for this:
• Limiting the Maximum number of user sessions on Intelligence Server
• Limiting the User sessions per project on individual projects
In general, the recommended approach is to allow more concurrency on highly
used applications, while reducing concurrency on less used applications.
Enterprise Manager is again a good tool to analyze application usage as well as
acceptable concurrency levels.
Managing User Privileges
The MicroStrategy platform provides system administrators with a centralized
way to manage and control features and feature accessibility at a very granular
level. From a performance perspective, controlling the functionality that users
are granted can be a useful technique for optimizing system resource usage.
For example, OLAP Services, Web exporting, and the ability to schedule
reports are actions that may overwhelm the system if they are granted to
hundreds of users that could be accessing the system all at the same time.
System memory usage can be optimized by selectively granting such privileges
to certain users rather than making them available to the entire user
community. The image below illustrates this concept.
Limiting Privileges for Better Performance
Managing Report Requests
Each additional report request that goes through Intelligence Server leads to
additional resource consumption and eventually to performance degradation
for all users that are competing for the server's limited resources. Intelligence
Server enables system administrators to manage the number of requests that
can be submitted at any given time by doing the following:
• Controlling the number of jobs executing per user against the data
warehouse
• Limiting the number of jobs per user session
• Limiting the number of jobs per user account
Resource Management
Optimizing Working Set Memory
Every user that connects from either MicroStrategy Web or Web SDK-based
applications and executes a report is assigned by default a certain amount of
memory called Working Set. The report results are stored in the working set
memory to avoid report re-execution against the data warehouse and to make
report manipulations faster. The results of all subsequent report
manipulations are also stored in the user's working set.
The image below illustrates the concept of working set for multiple active users
running reports and manipulating the results simultaneously.
Working Set for Multiple Concurrent Users
The available Intelligence Server memory must be shared between the
different user working sets. If the memory limit is reached at some point
because the total memory taken up by all the user working sets is greater than
the available Intelligence Server RAM, the system will start swapping, causing
a negative impact on performance.
Optimizing the working set memory is critical for high performance, because it
allows users to conduct several manipulations efficiently, including using the
back button without executing SQL against the data warehouse.
Working Set memory can be estimated by analyzing user activity and
determining the following parameters:
• Maximum number of intermediate states per user
• Average report size
• Activity, including concurrent users and users who are simply logged in
If the working set memory estimations are greater than the available
Intelligence Server memory, it is highly recommended to increase the available
server RAM. The default working set value is 200 MB, but it generally has to be
increased based on the above estimations and the specific environment
characteristics.
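The estimation described above amounts to simple multiplication. The figures in this sketch are hypothetical; substitute values from your own usage analysis:

```python
# Rough working-set sizing from the three parameters listed above
# (all figures are illustrative).
concurrent_users = 150
states_per_user = 10       # maximum intermediate report states kept per user
avg_report_size_mb = 2.5   # average report size in memory

working_set_mb = concurrent_users * states_per_user * avg_report_size_mb
print(f"Estimated working set: {working_set_mb:,.0f} MB")

DEFAULT_MB = 200  # default working set value
if working_set_mb > DEFAULT_MB:
    print("Estimate exceeds the 200 MB default: raise the setting and "
          "verify the server has enough physical RAM to avoid swapping.")
```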
Optimizing History List Usage
Similar to the Working Set functionality, each user is assigned a personal
repository, called the History List, that requires incremental memory as reports are
executed by the user and added to the History List for later use. The amount of
Intelligence Server memory consumed by the History List depends mainly on
the following factors:
• Peak activity, including concurrent users and users who are only logged in
• Average number of reports in the History List per user
• Average report size
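Those three factors combine the same way: History List memory grows as the product of peak users, stored reports per user, and report size. The numbers below are illustrative only:

```python
# Rough History List memory estimate (hypothetical figures).
peak_users = 300            # concurrent users plus users merely logged in
reports_per_user = 8        # average History List reports per user
avg_report_size_mb = 1.5

history_list_mb = peak_users * reports_per_user * avg_report_size_mb
print(f"Estimated History List footprint: {history_list_mb:,.0f} MB")

# Halving the per-user report limit halves the footprint, which is why
# limiting History List size is an effective memory control.
```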
To avoid swapping and the performance degradation it entails, it is
recommended to optimize Intelligence Server memory by controlling History
List usage with one or more of the following methods:
• Limiting the number of reports added to the History List on a per-user basis
• Automatically expiring old reports in the History List by setting a lifetime
• Scheduling an administration task to delete expired History List messages
Limiting Report Data
Large report requests tend to require large amounts of memory on Intelligence
Server for temporary storage. From a performance perspective, it is important
to limit report data sizes to optimize Intelligence Server resource usage.
The table below shows the approximate memory footprint of different report
data sizes:
The memory allocated to a report instance depends on the following factors:
• Number of rows and columns
• Data types
• Number of attributes and metrics on the report
Limiting the number of result rows (in reports) and element rows (in filters
and prompts) helps reduce the amount of memory consumed by report data.
Another method to reduce report data is displaying report information in
incremental pages, using the incremental fetch functionality. The image below
illustrates this concept:
Impact of Incremental Fetch on Memory Usage
Limiting Report Data

  Rows      Columns   Report Cells   Size in Memory (KB)
  7,000     6         42,000         1,250
  10,000    11        110,000        2,500
  59,190    13        769,470        13,400
  160,704   14        2,249,856      33,000
  197,265   18        3,550,770      57,000
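The table implies a per-cell footprint that can be checked directly. This sketch simply divides the published sizes by the cell counts; it is a back-of-the-envelope reading of the table, not an official sizing formula:

```python
# (report cells, size in KB) pairs copied from the table above.
rows = [(42_000, 1_250), (110_000, 2_500), (769_470, 13_400),
        (2_249_856, 33_000), (3_550_770, 57_000)]

footprints = []
for cells, size_kb in rows:
    bytes_per_cell = size_kb * 1024 / cells
    footprints.append(bytes_per_cell)
    print(f"{cells:>9,} cells -> {bytes_per_cell:.1f} bytes per cell")

# The footprint works out to roughly 15-30 bytes per cell; the spread
# reflects the data types and attribute/metric mix of each report.
```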
Limiting Report Export Sizes
Report exporting may be a memory intensive operation, especially for very
large formatted datasets. To optimize Intelligence Server resource utilization,
it is recommended to limit the export size using the following methods:
• Limiting the number of cells to export to text or Excel in MicroStrategy Web
• Using bulk export for large exporting tasks
• Exporting large reports, without formatting, to text or CSV files
• Setting a memory limit in Intelligence Server to generate an export file
The image below shows the approximate sizes of each export format based on
an initial 10 MB dataset.
Export Sizes
Workload Management
Optimizing the Number of Database Connections
MicroStrategy connects to data sources via connections called Database
threads and maintains a pool of such connections for each data source.
This configuration enables Intelligence Server to distribute report requests
between the available threads and manage the load on the database.
• If too few threads are enabled, there is a possibility of under-utilizing
database resources and ending up with a long queue of jobs on Intelligence
Server.
• If too many threads are enabled, there is a risk of overloading the database,
causing performance degradation in report query response times.
Tuning the system to find the optimal number of database threads requires
some load testing, but as the image below illustrates, the optimal range of
threads corresponds to the zone of maximum system throughput.
Optimal Database Connection Zone
At some point during the test, the throughput drops even as you increase the
number of database threads. This happens when the database and Intelligence
Server are overloaded and can no longer keep up with the report request rate.
The inflection point just before throughput starts dropping marks the upper
end of the optimal range of database threads for your BI system to deliver the
highest throughput.
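Given load-test measurements, locating that peak is straightforward. The (threads, throughput) pairs here are hypothetical test results:

```python
# Hypothetical load-test results: (database threads, jobs per minute).
measurements = [(2, 40), (4, 75), (8, 130), (12, 155), (16, 150), (20, 135)]

# The optimal thread count is where measured throughput peaks; past
# it, adding threads only overloads the database.
best_threads, best_throughput = max(measurements, key=lambda m: m[1])
print(f"Peak throughput of {best_throughput} jobs/min at "
      f"{best_threads} database threads")
```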
Prioritizing Existing Database Connections
MicroStrategy enables you to assign some report requests higher priority than
other jobs. If critical reports are to execute before non-priority reports, divide
the threads into a set of high, medium, and low priority threads to ensure that
certain requests execute ahead of others.
Jobs are routed by priority to their corresponding database threads. This
behavior is enhanced by a mechanism called thread connection borrowing. For
example, assume six threads are available: two high, two medium, and two low
priority, and a user logs in and submits a high priority request.
1 Because this is a high priority report request, Intelligence Server checks
whether the high priority threads are free.
2 If the high priority threads are not free, Intelligence Server then checks
whether any medium priority threads are free to be borrowed.
3 If they are not free either, Intelligence Server checks whether any of the low
priority threads are free.
4 If there is a low priority thread that is free, Intelligence Server assigns the
high priority report request to that thread.
Thread borrowing is only allowed from high to low. That is, high priority
report requests can borrow medium and low priority threads, medium
priority report requests can borrow only low priority threads, and
low priority requests can use only low priority threads.
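The borrowing rules can be sketched as a small routine. The pool representation and function name below are illustrative, not MicroStrategy APIs:

```python
# Which pools each request priority may draw from, in order;
# borrowing is allowed downward only.
BORROW_ORDER = {
    "high": ["high", "medium", "low"],
    "medium": ["medium", "low"],
    "low": ["low"],
}

def acquire_thread(free_pools, request_priority):
    """Take a free database thread for the request, borrowing from
    lower-priority pools if needed. Returns the pool used, or None
    if every eligible pool is busy (the job queues)."""
    for pool in BORROW_ORDER[request_priority]:
        if free_pools.get(pool, 0) > 0:
            free_pools[pool] -= 1
            return pool
    return None

pools = {"high": 0, "medium": 0, "low": 1}  # high/medium threads busy
print(acquire_thread(pools, "high"))  # the high request borrows "low"
```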
Thread connection borrowing ensures that higher priority report requests get
satisfied as soon as possible without being queued up. The table below
describes the 5 different parameters you can use to prioritize report requests:
Prioritization Parameters

User groups: An executive management user group may be assigned HIGH
priority so that their reports execute before anyone else's.
Application type: Assign priorities for requests coming from Desktop,
MicroStrategy Office, and MicroStrategy Web. For example, by assigning
medium priority to all MicroStrategy Desktop-based requests, you ensure that
all developers run reports on medium priority threads.
Projects: Report requests from project A may run under high priority, while
report requests from project B run on lower priority.
Request type: You can assign element requests to be of high priority and
report requests to be of low priority.
Cost: An arbitrary value that a developer or an administrator can assign to a
report. By assigning a cost, the developer quantifies the resources required
to execute the report, such as report execution time or result set size.
Cost enables you to separate slower running reports that could monopolize
database connections from other faster running reports, as shown below:
Assigning Costs to Reports
Based on report costs, administrators can assign priorities ensuring that
critical jobs are executed first, leaving less critical jobs in the queue to be
executed at a later time. The table below shows a typical job prioritization
model, based on cost criteria:
MicroStrategy recommends that you allocate High/Medium/Low priority
threads in a 10/60/30 ratio: 10% of threads should be HIGH priority, 60% (the
bulk) should be MEDIUM priority, and 30% should be LOW priority. Element
requests should be assigned to high priority threads,
interactive reporting should be routed to medium priority threads, and batch
reports or high cost reports should be assigned to low priority threads.
Priority Matrix

  Priority  User Group            BI Project      Request Type  Application  Report Cost  Notes
  High      ALL                   ALL             Element       ALL          Light        Element browsing has the lightest cost and higher priority
  High      Executive Management  Sales Analysis  ALL           ALL          Light        Executive users have the lightest cost and highest priority
  Medium    Business Analyst      ALL             ALL           Web          Medium
  Medium    Developer             Sales Analysis  ALL           Desktop      Medium       Report development is a medium-cost effort
  Low       ALL                   ALL             Reports       Scheduler    Heavy
  Low       ALL                   ALL             Reports       Narrowcast   Heavy        Batch jobs are assigned the lowest priority threads
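The recommended 10/60/30 split can be computed for any pool size. This helper is a sketch; the rounding choices are my own, not a MicroStrategy rule:

```python
# Split a database thread pool into the recommended 10/60/30 ratio.
def allocate_threads(total):
    high = round(total * 0.10)
    medium = round(total * 0.60)
    low = total - high - medium  # remainder lands on low (~30%)
    return {"high": high, "medium": medium, "low": low}

print(allocate_threads(20))  # {'high': 2, 'medium': 12, 'low': 6}
```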
Server Load
The number of open jobs in the system is a variable that can have a high impact
on resource utilization. Execution of each job requires resources, such as
memory, CPU, network, and disk space.
A job can be as simple as requesting elements or objects from the Metadata, or
as complex as executing highly formatted dashboards with multiple datasets.
Scheduled subscriptions, Narrowcast Server, and MicroStrategy Office can also
contribute to overloading the system with a large amount of concurrent jobs.
As more users are logged into the system, the chances of experiencing high
concurrency of executing jobs increases.
To ensure proper operation, the number of jobs executing on Intelligence
Server can be limited by defining the following governing settings:
• Jobs per user account
• Jobs per user session
• Executing jobs per user
• Jobs per project
• Intelligence Server Elapsed Time (for interactive and scheduled reports)
• SQL timeout
In addition, the following methods are recommended to minimize the
generation of unnecessary jobs:
• Controlling the report creation privilege
• Analyzing usage patterns and creating mandatory prompt combinations
• Defining efficient drill maps
Job Monitor and Performance Monitor are efficient tools to obtain a real-time
snapshot of the system state. Enterprise Manager helps monitor jobs and
observe patterns of resource utilization with a comprehensive historical view.
Logging and Statistics
Intelligence Server performance can degrade depending on the level and
amount of statistics that it is configured to collect.
For example, the collection of report job SQL statements is useful in certain
contexts for database administrators. However, because of the operational
overhead required, collecting SQL for every report job that executes in
Intelligence Server will likely degrade system performance.
As a general practice, it is recommended to enable low-level logging and
detailed statistics collection only in troubleshooting or tuning scenarios,
never in a production environment.
Clustering
Clustering Intelligence Servers can help the system to better handle the
workload of a project. Having multiple servers increases the number of users
that can connect to the system at the same time and the number of jobs that
can be processed simultaneously. Dividing a project workload can decrease the
time users spend in queue and the time it takes to process reports as more
resources are available.
Cache Sharing
In a clustered environment, the best configuration for cache sharing is to have
each node maintaining its own cache files. This approach has the following
benefits:
Optimized Network Traffic: The network traffic is split among the nodes. In
addition, because each node can access its local cache, a large percentage of
the workflow is accelerated.
Better Failover Support: If one of the nodes shuts down, the backup nodes can
regenerate the caches that were stored by the failed node. In this case, the
workflow will not be significantly affected.
Backup Frequency
Backup frequency controls the frequency, in minutes, at which cache and
History List messages are backed up to disk. In clustered environments, it is
recommended to set this value to 0, which causes caches and History List
messages to be backed up to disk immediately as they are created in memory. If
caches and History List messages are available on disk as soon as they are
created, they are synchronized correctly across all clustered nodes.
History List Storage
History List messages can be stored in a database or in flat files. For
performance optimization purposes, it is recommended to store History List
messages in a database. The performance benefits of using database History
List are listed below:
Additional information about each History List message can be accessed,
such as message and job execution statistics. As a result, monitoring and
management of History List messages becomes more effective.
Database storage of the History List provides greater scalability and
improved performance. Instead of accessing several large files that reside
on the server machine, inbox information is retrieved from a database.
When the administrative task to delete History List messages is triggered, it
creates only one session on Intelligence Server rather than tens or hundreds
of separate sessions for each user.
The History List Messages monitor can be used to manage messages for
each MicroStrategy user.
Project Failover Latency
The Project Failover Latency setting is a server-level setting that defines the
wait time before a project is loaded on a backup server when a project failover
triggers. The minimum value for this setting is -1 and the maximum value is
999. Defining a value of 0 (zero) or -1 for this setting disables the latency
period. This means that the project is not automatically loaded onto a
surrogate server. Consider the following information when defining the value
for this setting:
Setting a higher latency period prevents projects on the failed server from
being loaded onto other servers quickly. This behavior is positive from a
performance perspective if projects are large (they take a long time to load),
and the failed server can recover quickly. A high latency period provides the
failed server more time to recover.
Setting a lower latency period causes projects from the failed server to be
loaded relatively quickly in another server. This behavior is positive if it is
crucial that your projects are available to users at all times.
Configuration Recovery Latency
The Configuration Recovery Latency setting is a server-level setting that
defines the wait time before a node is set back to its original configuration,
after the node recovers from a project failover. The minimum value for this
setting is -1 and the maximum value is 999. Defining a value of 0 (zero) or
-1 disables the latency period: the project is not unloaded from the surrogate
server and is not reloaded on the recovered server. Consider the following
information when defining the value for this setting:
Setting a higher latency period leaves projects on the surrogate server
longer, which will be beneficial if the projects are large and it is critical to
ensure the recovered server is stable for a specific period of time before the
project load process begins.
Setting a lower latency period causes projects on the surrogate machine to
be removed and loaded relatively quickly onto the recovered server, which
helps relieve the extra load on the surrogate server.
Load Balancing
Load balancing is a property that determines the load distribution across
Intelligence Server nodes. The load balancing ratio should reflect any
asymmetries in the CPU power of the nodes. For example, Node 1 may have
twice the CPU power of Node 2. In this case, load balancing should be
configured so that Node 1 handles twice the load of Node 2.
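As a minimal worked example of this ratio (the node names and CPU-power weights are hypothetical), the expected share of the load per node can be computed as:

```python
# Hypothetical CPU-power weights: Node 1 has twice the CPU power of
# Node 2, so it should be configured to receive twice the load.
weights = {"Node 1": 2, "Node 2": 1}
total = sum(weights.values())

# Fraction of incoming jobs each node should handle under this ratio.
shares = {node: w / total for node, w in weights.items()}

print(shares["Node 1"], shares["Node 2"])  # Node 1 takes 2/3, Node 2 takes 1/3
```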
Cube Failover Support
In a multi-node clustered environment, it is possible that all cubes are
published on a single node. If the server for that node goes down, the cubes
would need to be republished before any further report or dashboard execution
that uses them could take place. This can have a significant impact on the
performance of the entire system.
From a system performance standpoint, it is better to pre-load the Intelligent
Cubes into each node before users execute any cube reports. In this case, users
will not experience any loading overhead when the cube is hit for the first time.
A complete solution is to create an Integrity Manager test script that runs cube
reports that access the cubes that need to be pre-loaded. The script can later be
executed using a batch file in Windows, which can be scheduled using the
Windows Scheduled Tasks utility.
If cubes are pre-loaded on other nodes and the primary node goes down,
users can still activate, deactivate, unload, and delete the cubes. If a cube
is unloaded, it cannot be loaded back.
To deploy the cube pre-loading approach, the following prerequisites
must be met:
Each node needs to have enough memory to host all the cubes. Otherwise,
the pre-loaded cubes that have not been recently used are swapped out of
memory.
If a cube is swapped out of memory and it was published on a node
that is down, it cannot be loaded back.
The MicroStrategy environment must be version 9.0.1 or newer.
You must create or use cube reports that connect to the cubes that need to
be published. These reports act as the trigger to publish the cubes.
Only one report is needed for each cube that needs to be published. If
you define a view filter that returns no data on the report, it will help
speed up the publishing process.
To pre-load cubes on all clustered nodes:
The procedure below only lists the relevant steps in the Integrity
Manager Wizard that you need to configure for cube pre-loading. If an
Integrity Manager Wizard step is not listed in this procedure, you can
keep the default values and click Next. For additional details on how to
create and save an Integrity Manager test script, refer to the
MicroStrategy Administration: Application Management course.
1 In MicroStrategy Integrity Manager, create a Single Project test.
2 On the Select Objects from the Base Project to be included in the Test page,
select the check boxes for the cube reports that are connected to the cubes
that need to be published.
3 On the Select Processing Options page, under Reports, select the SQL/MDX
check box.
4 Under Documents, select the Execution check box.
5 On the Summary page, click Save Test.
The test is saved as an MTC file.
6 In Integrity Manager, click Run to execute the test to ensure it is functioning
correctly.
7 Create a text file which includes the following command: mintmgr -f
full_path_to_the_MTC_file\NodeATest.mtc
Where NodeATest refers to the MTC file previously created and
full_path_to_the_MTC_file is the full path to where the MTC file is
stored.
8 Save the text file as a batch file, for example, LoadCubes.bat.
9 Repeat the previous steps for all the nodes in the clustered environment. In
the final step, instead of creating a new .bat file, add the command for each
MTC file to the same .bat file.
10 Create a schedule for executing the above batch file using Windows
Scheduled Tasks.
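As a hedged sketch of steps 7 through 9 (the MTC file names and the directory path below are hypothetical examples, not values from your environment), the body of the combined batch file can be assembled as follows:

```python
# Hypothetical MTC test files, one per cluster node (names are examples).
mtc_files = [
    r"C:\CubeTests\NodeATest.mtc",
    r"C:\CubeTests\NodeBTest.mtc",
]

# One "mintmgr -f <file>" command per node, all placed in the same
# .bat file, as described in step 9 of the procedure.
commands = ["mintmgr -f {}".format(path) for path in mtc_files]
batch_body = "\n".join(commands)

print(batch_body)
# To write the file:
# with open("LoadCubes.bat", "w") as f:
#     f.write(batch_body)
```

Each line of the resulting LoadCubes.bat runs Integrity Manager against one node's MTC test, triggering the cube reports that publish the cubes on that node.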
To create a Windows Scheduled Task:
1 On the Windows Start menu, open Control Panel.
2 Double-click Administrative Tools.
3 Double-click Scheduled Tasks.
4 In the Task Scheduler window, on the Action menu, select Create a Basic
Task.
5 In the Create a Basic Task Wizard, in the Name box, type the name for your
task. For example, Cube Loading.
6 On the Task Trigger page, click the appropriate schedule for the task to
trigger. For example, Daily.
7 Click Next.
8 Depending on the schedule you selected, you need to provide different types
of information, such as the start time to trigger the schedule, and so forth.
9 Click Next.
10 On the Action page, click Start a Program.
11 Click Next.
12 On the Start a program page, click Browse.
13 In the Open window, select the .bat file you created previously and click
Open.
14 Click Next.
15 Depending on the time frame selected previously, define the exact time and
day on which this task will be performed. This schedule definition can for
example be based on cube publication or update schedules on the primary
Intelligence Server node.
16 Click Next.
17 Click Finish to complete the scheduled task creation.
The batch file will be executed according to the schedule you defined
previously, and cubes will be loaded in all nodes after the batch execution.
Ensure there is a sufficient gap in time between the running of the batch
file for loading the cubes into each node and the cube publication
schedule on the primary node.
Web Server Configuration
After completing this topic, you will be able to:
Understand the main parameters for tuning the Web environment.
Web Server is the other key server component of the MicroStrategy
architecture. The following sections provide tuning and optimization
recommendations for the MicroStrategy Web Server.
JVM Settings
One of the key components that generally requires optimization in the context
of non-IIS Web servers is the Java Virtual Machine (JVM). Improper or
non-optimized JVM settings configuration can prevent the Web server from
running or cause it to be unstable and run very slowly.
The main JVM Settings to configure within the context of a MicroStrategy
implementation are:
Initial Heap Size (Xms): Specifies the initial Heap Size available to the
JVM applications, in megabytes. This represents the active part of the
storage heap.
Maximum Heap Size (Xmx): Specifies the maximum Heap Size available
to the JVM applications, in megabytes.
The Garbage Collection Process
Understanding the garbage collection process in the Web server is key to
understanding why tuning the Web server Heap Size matters. Garbage
collection and heap expansion are an essential part of JVM operation.
The number of objects created by applications that run in the JVM determines
the frequency at which garbage collection occurs. When an application is running
and the objects created by the application take most of the space defined by the
Initial Heap Size, the JVM is unable to allocate any more objects and triggers
the garbage collection process. This process deletes the stored objects in the
heap that are no longer referenced by applications, freeing space in the heap
so that new objects can be allocated. Usually, during the garbage
collection process, all other processes running in the JVM stop. It is an
expensive process from a performance standpoint. If the garbage collection
fails to clean enough space in the heap, the Initial Heap Size is expanded.
The heap expansion can never go beyond the Maximum Heap Size.
Heap Size Configuration
A very high Maximum Heap Size can cause disk swapping, which drastically
degrades Web server performance. It is very important from a
performance perspective to set the Maximum Heap Size to a value that allows
the heap to be contained within the available RAM.
In this sense, a 64-bit Web server environment provides much better
performance than a 32-bit one, given the amount of memory available as
potential heap space (especially for applications with a high number of
dashboards containing large datasets).
Increasing the Initial Heap Size will generally improve startup speed because it
avoids triggering the garbage collection process early on. Increasing the
Maximum Heap Size improves throughput as long as the heap resides fully in
physical memory and does not generate swapping.
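As an illustrative sketch only (the values and the variable name are examples, not recommendations; actual sizes must be derived from the available physical RAM and the application server in use), the two settings are typically passed as JVM options, for example:

```
# Illustrative JVM options: a 1 GB initial heap and a 4 GB maximum heap.
# -Xms sets the Initial Heap Size, -Xmx sets the Maximum Heap Size.
JAVA_OPTS="-Xms1024m -Xmx4096m"
```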
MicroStrategy Web Pool Sizes
TCP/IP connections are used between the MicroStrategy Web XML API and
Intelligence Server to pass requests to and retrieve the responses from
Intelligence Server. When MicroStrategy Web connects to an Intelligence
Server node, either manually or automatically, the MicroStrategy Web XML
API establishes a connection pool with each Intelligence Server node. The
connection pool consists of a number of TCP/IP connections equivalent to the
number specified in the Initial pool size property on the Administrator page.
A connection pool is responsible for caching connections between the Web
server and Intelligence Server.
Web Connection Pool
When a user request comes in, the MicroStrategy Web XML API gets a free
connection in the connection pool to serve the request. All connections in the
connection pool are initially free. When in use for processing a user request,
they are set to busy. After a response for the request has been completed, the
connection is returned to the connection pool and marked as free.
When the MicroStrategy Web XML API detects that the number of free
connections in a connection pool is less than two and the connection pool has
not reached the maximum pool size, it dynamically expands the connection
pool. If the connection pool has reached the maximum pool size and a user
request comes in when all connections are busy, the request waits for a free
connection. The wait time is configurable through the Administrator page, by
using the Server busy timeout property. If this time is reached and no
connections are available, an error returns to the client browser.
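The expansion behavior described above can be sketched as a simplified model (an illustration of the stated rules, not the actual MicroStrategy Web XML API implementation; the two-free-connection threshold and the busy/free states are taken from the text):

```python
class ConnectionPool:
    """Simplified model of the connection pool behavior described above."""

    def __init__(self, initial_size, max_size):
        self.max_size = max_size
        self.size = initial_size   # total connections in the pool
        self.busy = 0              # connections currently serving requests

    @property
    def free(self):
        return self.size - self.busy

    def acquire(self):
        """Serve a request with a free connection, expanding when needed."""
        # Expand when fewer than two free connections remain and the
        # maximum pool size has not been reached.
        if self.free < 2 and self.size < self.max_size:
            self.size += 1
        if self.free == 0:
            # In the real product the request waits up to the
            # "Server busy timeout" before an error is returned.
            raise RuntimeError("pool exhausted: request must wait")
        self.busy += 1

    def release(self):
        """Return a finished connection to the pool as free."""
        self.busy -= 1


pool = ConnectionPool(initial_size=2, max_size=4)
for _ in range(4):
    pool.acquire()
print(pool.size, pool.free)  # pool has grown to its maximum of 4; prints "4 0"
```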
There are two related settings in MicroStrategy Web that refer to the
connection pool:
Initial Pool Size: Represents the initial number of connections in the
connection pool
Maximum Pool Size: Represents the maximum number of connections in
the connection pool
A general recommendation is to set the Maximum Connection Pool parameter
between 1/3 and 1/6 of the Maximum user sessions allowed.
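As a worked example of this rule of thumb (the session limit below is a hypothetical value):

```python
# Hypothetical governor value: maximum of 600 concurrent user sessions.
max_user_sessions = 600

# Rule of thumb from the text: Maximum Pool Size between 1/6 and 1/3
# of the maximum user sessions allowed.
lower_bound = max_user_sessions // 6   # 100 connections
upper_bound = max_user_sessions // 3   # 200 connections

print(lower_bound, upper_bound)  # prints "100 200"
```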
To define pool size settings in MicroStrategy Web:
1 Launch the MicroStrategy Web Administrator page.
2 Under WEB SERVER, click Default properties.
3 Under Connection properties, define the values for Initial pool size and
Maximum pool size, as shown below:
Pool Size
4 Click Save.
Using a Separate Web Server for Static Content
In addition to dynamic content such as the actual report or dashboard data, the
requests from MicroStrategy Web include a series of static content, such as
CSS files, images, and JavaScript files. While the default configuration is to have
all of the static and dynamic content in the main MicroStrategy Web Server, in
a production environment with high concurrency, this type of architecture
design is not recommended.
For performance optimization purposes, it is recommended to deploy a
separate Web server to handle the static content and to cache this information.
The cached static content and information does not need to be processed every
time it is requested. This implementation enables the MicroStrategy Web
Server to handle mostly the dynamic content, while the static content is stored
on the separate Web server.
Logging and Statistics
A series of internal tests were performed to analyze the impact of statistics and
logging on MicroStrategy Web performance. The tests showed that enabling
logging and statistics does impact performance negatively and tends to
generate an overhead on JVM heap usage. In the worst case, enabling Web
statistics degraded Web server response time by 200% and increased JVM
heap usage by 30%.
It is recommended to only enable logging and statistics in the context of
troubleshooting or tuning scenarios, but never in a production environment.
JavaScript
JavaScript is a scripting programming language most commonly used to add
interactive features to Web pages. JavaScript files are not generated
dynamically. When MicroStrategy is installed, a fixed set of JavaScript files
is stored on the Web server. Depending on the type of functionality
that is included in reports or documents at design time and the subsequent
actions and manipulations from users, requests for certain JavaScript files will
be made by the client. The more JavaScript files that need to be loaded in the
client, the longer it takes for the client to render the report or dashboard.
In terms of performance optimization, it is important to understand what
actions can be taken in MicroStrategy Web to decrease the amount of
JavaScript files requested. The list below contains some recommendations to
improve client rendering based on reducing JavaScript file requests:
Disable Lock Row Header and Lock Column Header
Set Document Width Mode to be Fixed
Execute reports and documents in Full Screen Mode
Enable Browser Cache and Cookies
Enable HTTP Compression
Restrict the number of HTML Containers used
An HTML container enables the display of real-time information from
the Web directly in a document. The use of this functionality adds
more workload on the client side. In addition, if the container has a
link to a MicroStrategy object, it adds workload to the Web server.
Lesson Summary
In this lesson, you learned:
Single-core processors can only process one instruction at a time while
multi-core processors can process multiple instructions simultaneously.
Intelligence Server performance in terms of throughput achieved goes up as
the number of processor cores goes up. The increase in throughput is not
linear.
As a general rule, faster clock speeds in a processor will translate into better
throughput, but not always linearly.
It is recommended to keep Intelligence Server utilization rates below 80%.
Have as much memory as possible for a MicroStrategy implementation to
avoid disk swapping issues.
In 32-bit versions of Microsoft Windows, applications are limited to 3 GB of
user address space. For this reason, MicroStrategy recommends that all
implementations be based on 64-bit operating systems.
It is recommended to increase the amount of available RAM if server
memory utilization regularly reaches or exceeds the 80% threshold.
In general RAID 4 or RAID 5 present the best reliability-to-cost ratio for a
MicroStrategy deployment.
It is highly recommended to periodically defragment the disk on
Intelligence Servers running on Windows environments.
The threshold for disk utilization rate beyond which the system
performance degrades or becomes unreliable is approximately 70% for
MicroStrategy deployments.
Linux yields higher levels of throughput than Windows (approx. 12%).
The overhead of virtualization causes some performance degradation,
reaching almost 25% for an eight-core system.
It is recommended to optimize Intelligence Server memory resources by
automatically disconnecting idle users, allowing more concurrency on
highly used applications while reducing concurrency on less used ones.
Controlling the functionality that users are granted can be a useful
technique for optimizing system resource usage.
Intelligence Server enables system administrators to manage the number of
requests that can be submitted at any given time by using governor settings.
The default working set value is 200 MB, but it generally has to be
increased, based on estimations and specific environment characteristics.
It is recommended to optimize Intelligence Server memory by controlling
History List usage.
Limit report data sizes to optimize Intelligence Server resource usage.
Limit the number of cells to export to text or Excel in MicroStrategy Web.
The optimal range of threads corresponds to the zone of maximum system
throughput.
Administrators can assign arbitrary costs to individual reports, separating
slower running reports that could potentially monopolize database
connections from other faster running reports.
To ensure proper operation, the amount of jobs executing on Intelligence
Server can be limited by defining governing settings
The best configuration for cache sharing in a clustered environment is each
node maintaining its own cache files.
It is recommended to store History List messages in a database.
It is better to pre-load the cubes into each node before users execute any
cube reports. In this case, users will not experience any loading overhead
when the cube is hit for the first time.
Improper or non-optimized JVM settings configuration can prevent the
Web server from running or cause it to be unstable and run very slowly.
The main JVM Settings to configure within the context of a MicroStrategy
implementation are Initial Heap Size and Maximum Heap Size.
Increasing the Initial Heap Size will generally improve startup speed
because it avoids triggering the garbage collection process early on.
Increasing the Maximum Heap Size improves throughput as long as the
heap resides fully in physical memory and does not generate swapping.
A general recommendation is to set the Maximum Connection Pool
parameter between 1/3 and 1/6 of the Maximum user sessions allowed.
It is recommended to deploy a separate Web server to handle the static
content and to cache this information. This implementation enables the
MicroStrategy Web Server to handle most of the dynamic content, while the
static content is stored on the separate Web server.
It is recommended to only enable low-level logging and statistics collection
in the context of troubleshooting or tuning scenarios, but never in a
production environment.
The more JavaScript files that need to be loaded onto the client, the more
time it will take for the client to render the report or dashboard.
It is important to understand what actions can be taken in MicroStrategy
Web to decrease the amount of JavaScript files requested.
5
DATA PRESENTATION
Lesson Description
This lesson expands on how the BI ecosystem impacts report, dashboard, and
mobile performance. In this lesson, you will learn about the report and
dashboard execution flows, and the key recommendations for dataset and
design optimization. This series of design techniques will help you develop
fast-executing reports and dashboards, presented on different devices.
Lesson Objectives
After completing this lesson, you will be able to:
Describe report and dashboard execution flow. Understand the
recommendations for designing high performance reports and dashboards.
After completing the topics in this lesson, you will be able to:
Describe the report execution flow and understand the report configuration
techniques for optimizing performance. (Page 153)
Describe the dashboard execution flow. (Page 159)
Understand dataset techniques to optimize dashboard performance. (Page
162)
Understand design techniques for optimizing dashboard performance.
(Page 172)
Optimize your mobile device to experience performance gains when
rendering MicroStrategy dashboards. (Page 194)
High Performance Reports
After completing this topic, you will be able to:
Describe the report execution flow and understand the report configuration
techniques for optimizing performance.
Report Execution Flow
It is important to be familiar with the report execution flow to better
understand how some of the components affect the report performance in a
MicroStrategy environment. The following image depicts the different steps
involved in the report execution process:
Report Execution Flow
The following steps and components are involved in the report execution flow:
1 A user submits a report request from a Web browser.
2 The report request is sent to Intelligence Server via the Web server.
3 In the case of prompted reports, before proceeding to the next step,
Intelligence Server sends the request back to the user to obtain prompt
answers.
4 After the prompt is answered, Intelligence Server checks for a valid report
cache. If a valid cache exists, the result set is returned to Web server,
otherwise Intelligence Server proceeds to the next step.
5 Intelligence Server retrieves the report definition from the metadata.
6 The SQL Engine generates a SQL statement.
7 The Query Engine opens a connection to the data warehouse, submits the
SQL statement, and retrieves the result set.
8 The Analytical Engine performs any additional analytical processing, where
necessary, and formats and cross-tabs the result set.
9 Intelligence Server sends the complete report results in XML format to the
Web server. A report cache is also created during this time.
10 The Web server translates XML data into HTML and formats the report for
display in the Web client (browser).
11 The report results are displayed in the Web client.
Report Configuration Techniques to Optimize Performance
Large XML can require a significant amount of Intelligence Server memory
and many processing cycles on both the Intelligence Server and the Web
server. Therefore, several of the recommendations for optimizing the
rendering of reports are directly related to strategies that minimize the size
of the XML. The following sections cover different report configuration
techniques that contribute to better performance.
Building Simple Reports
While it is important to ensure that reports deliver the required level of data
and analysis, from a performance perspective, simplicity generally translates to
faster report execution times. The simpler the report, the smaller the XML
and the faster the overall response time.
Avoiding Working Set Reports
Working Set reports (also known as OLAP reports) provide great flexibility to
users, but this flexibility comes at a significant cost in terms of performance if the OLAP
features are not used. With Working Set reports, a user can have multiple
objects in the template, but only display a subset of these objects in the report
when it is executed. Subsequent report manipulations that use the objects in
the template do not require further SQL generation and tend to be faster.
However, the process of producing and rendering Working Set reports is more
expensive than the process for regular reports. For instance, Intelligence
Server must keep several versions of a Working Set report in memory, one
version for the base report (that is, the report with all the objects actually
available in the template) and one version for each subsetting report accessed
by the user (that is, each report version that the user has requested and has less
than all the template objects).
The following image illustrates this concept:
Working Set Report
For big Working Set reports, a considerable amount of XML is generated,
requiring significant processing and memory resources on both the Intelligence
Server and the Web server. When designing reports, ensure that there is a strong
need for the report to be a Working Set report. If the need is not there,
significant resources will be saved by just making it a regular report.
Using Only the Necessary Prompts
Prompts can be very beneficial to the report execution process because they
filter the data that returns to the client from the database and thus save
processing and memory resources at all levels of the MicroStrategy
architecture. The less data that needs to be fetched from the data warehouse,
the faster the response time experienced by users.
However, it is important to understand that each prompt in a report requires
additional Intelligence Server resources. This is because Intelligence Server
has a process, called the Resolution step, that matches all prompt answers in a
report. Even though this process is fast and represents a very small percentage
of the overall report execution time, for high concurrency scenarios it can
become a costly bottleneck.
Refraining From Using Search Inside Prompts
While search objects can provide an alternative way to define the prompt
objects available for selection, they are not efficient in terms of report
execution and will have a negative impact on performance. The
behind-the-scenes process to search for objects is very expensive for both the
Intelligence Server and the data warehouse, which in turn impacts the report
response time.
Limiting the Use of Custom Groups
Custom groups based on multiple criteria generate SQL to retrieve the metric
results for each individual set. If more than one custom group is used on a
template, SQL must be generated to retrieve the results for each combination
of group elements. This can quickly lead to a large number of SQL passes to
be generated even for an otherwise simple report. You should consider
reducing the number of custom groups on a report template wherever possible.
You can also consider reducing the number of elements in each custom group
to minimize the footprint for report execution.
Keeping Aggregations and Joins to a Minimum
When a report executes, the more tables in the data warehouse it has to access,
the longer the report takes to run. Therefore, when designing the report,
keeping the number of joins and aggregations that either the data warehouse
or the Analytical Engine will have to perform to a minimum will have a positive
impact on performance.
Using Incremental Fetch Whenever Possible
Large report result sets, when converted to XML, generally require significant
amounts of Intelligence Server memory. In addition, the wider the reports are
in terms of columns, the longer they take to be converted to HTML by
MicroStrategy Web, resulting in slow report execution times from a user
perspective. By rendering data in smaller slices, the incremental fetch setting
in MicroStrategy Web can help significantly reduce the user wait times, while
still handling the display of large result sets.
Limiting the Number of Drill Paths
The number of drill paths available for Web users affects the size of the XML
that must be generated by Intelligence Server for a given report. To optimize
report rendering performance in Web, it is recommended to use the Maximum
number of XML drill paths governor setting in Intelligence Server. This
setting limits the number of drill paths available to Web users and is shown
below:
Maximum Number of XML Drill Paths
High Performance Dashboards
After completing this topic, you will be able to:
Describe the dashboard execution flow.
The Dashboard Execution Flow
It is important to be familiar with the document execution flow to better
understand how some of the components may affect the performance of a
MicroStrategy environment. The following image depicts the different steps
involved in the dashboard execution process:
Dashboard Execution Flow
The steps and components involved in the dashboard execution flow are as
follows:
1 A user requests the execution of a Report Services dashboard.
2 Intelligence Server receives the request and processes it as follows:
It executes all datasets in the dashboard.
After collecting the data, Intelligence Server creates a virtual dataset and
stores it in memory.
Note: A virtual dataset consists of a master table that joins all document
dataset results together. This table includes the union of all the attributes
in the dashboard datasets, as well as one column for each distinct metric.
Based on the dashboard design and the user-selected output format,
Intelligence Server generates XML or a binary stream as follows:
If the selected output involves dashboards in DHTML View Mode or
MicroStrategy Office documents, Intelligence Server generates XML.
If the selected output involves PDF documents or dashboards in Flash View
Mode, Intelligence Server generates a binary stream.
3 Depending on the user-selected output format, Intelligence Server interacts
with the Web server in one of the following ways:
If the selected output involves dashboards in DHTML View Mode, the XML is
passed to the Web server. The Web server transforms the XML into HTML and
JavaScript to make it viewable in the user's browser.
If the selected output is a dashboard in Flash Mode, the binary stream is
passed to the Web server. The Web server transmits it to the client along
with the DashboardViewer.swf file.
Note: The DashboardViewer.swf file is not transmitted to the client if it has
been previously cached on the client machine.
If the selected output involves MicroStrategy Office or PDF documents, the
Web server is used as a channel to reach the user.
4 In the client (browser), the dashboard content is rendered in one of the
available output formats:
DHTML Dashboard: A minimum subset of resources is loaded and JavaScript code
is executed to render the dashboard in HTML.
Flash Dashboard: A document instance is created. Flash then loads the
DashboardViewer.swf file and renders the complete dashboard.
PDF File: A PDF file is rendered on the client machine.
MicroStrategy Office Document with an Embedded MicroStrategy Dashboard: A
MicroStrategy Office document with an embedded MicroStrategy dashboard is
rendered on the client machine.
Based on the steps and components involved in the document execution flow,
there are two main areas of focus when defining techniques for optimizing the
performance of dashboard executions:
Dataset Techniques: These techniques focus on the optimization of data
preparation and XML/binary stream generation by Intelligence Server.
Design Techniques: These techniques focus on the design considerations to use
for optimizing dashboard performance.
Dataset Techniques to Optimize Performance
After completing this topic, you will be able to:
Understand dataset techniques to optimize dashboard performance.
Dashboard Data Preparation Steps
The virtual dataset creation process includes the joining of all common
elements present across the datasets by the Analytical Engine.
The following image displays dashboard dataset execution:
Dashboard Dataset Execution
The following image displays a virtual dataset execution:
Virtual Dataset Execution
These two processes (dashboard dataset execution and virtual dataset
execution) can become bottlenecks during the execution of dashboards. The
following sections cover techniques for optimizing performance in the two data
preparation steps of dashboard execution.
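The virtual-dataset join described above can be sketched in miniature. This is an illustrative model only, not MicroStrategy's internal implementation; the datasets, attribute elements, and metric names are hypothetical:

```python
# Illustrative model only (not MicroStrategy internals): a "virtual dataset"
# built as a master table over the union of attribute elements from two
# hypothetical dataset results, with one column per distinct metric.
revenue = {"North": 100, "South": 80}   # dataset 1: Region -> Revenue
cost = {"South": 50, "West": 30}        # dataset 2: Region -> Cost

regions = sorted(set(revenue) | set(cost))  # union of the Region elements
virtual = [
    {"Region": r, "Revenue": revenue.get(r), "Cost": cost.get(r)}
    for r in regions
]
for row in virtual:
    print(row)
```

Note how elements present in only one dataset (North, West) still produce a row in the master table; this is one reason attributes that are not common across datasets inflate the virtual dataset.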
Reducing the Number of Datasets in a Dashboard
The number of datasets in a dashboard influences the performance of
rendering the dashboard data. Reducing the number of datasets decreases the
virtual dataset creation time by the Analytical Engine. The following table
displays sizing considerations for dashboards, depending on the number of
datasets:
Sizing Recommendations

# of Datasets    Performance Level
1–5              Optimum (Recommended)
5–10             Acceptable
> 10             Slow (Not recommended)
Consolidating Dashboard Datasets
When there are similarities across the different datasets, such as common
attributes and metrics, it is recommended to consolidate these datasets into
fewer units. This results in fewer dataset executions, potentially smaller
virtual datasets, and better overall performance for the dashboard execution.
Removing Unused Datasets
To achieve high performance in dashboard execution and limit the number of
dataset executions, ensure that every dataset present in a dashboard is truly
used for some purpose, and remove any datasets that are not.
Leveraging View Filters to Provide Multiple Levels of Analysis for Each
Dataset
Some dashboard designs require multiple views from the same dataset. A
common method of satisfying this requirement is to include multiple instances
of the same dataset in the dashboard. However, this approach results in
sub-optimal performance because Intelligence Server treats each instance of a
dataset as a separate dataset. The instances are processed individually,
increasing the dashboard data preparation time.
To avoid the negative performance impact associated with using multiple
datasets, use a single dataset and create grids and graphs with different
view filters to extract the desired data. By implementing view filters, a
single dataset can be used to create several grids and graphs that display
distinct data. Reducing the total number of dataset instances in this manner
improves the data preparation time for your dashboard.
The following image displays an example of a dashboard with a single dataset
that provides several different levels of analysis through the use of
different view filters:
Dashboard with Single, Multi-Level Dataset
View Filter Criteria for one of the Graphs in the Dashboard
Enabling Quick Switch
A dashboard design that requires a grid and a graph to be displayed for the
same dataset does not require two instances of that dataset. Including multiple
instances of the same dataset in a dashboard introduces data duplication. To
enhance the performance of your dashboard and save screen space, use the
Quick switch option as an alternative approach, which provides users the
ability to quickly switch between the grid and graph views of a report with
minimal impact on dashboard performance.
Note: Quick Switch can be enabled both in Desktop and Web.
To enable Quick Switch:
1 In Web, edit a Dashboard.
2 Right-click the graph instance, and select Properties and Formatting.
3 In the Properties and Formatting window, under Properties, click Layout.
4 Under Grid, select the Quick switch check box.
5 Click OK.
The following image shows the Quick Switch setting:
Quick Switch Setting
Reducing the Amount of Data in a Dataset
Each additional attribute and metric in a dataset increases the size of the
dataset and of the virtual dataset, which degrades overall dashboard
performance. As the following image illustrates, adding attributes to a
dataset (especially attributes with high cardinalities or that are not common
to other datasets) can produce an enormous virtual dataset:
Virtual Dataset Generation
It is very important for optimum performance to ensure that all datasets used
in a dashboard only contain the amount of data that will truly be used for
analysis and display. The following table displays sizing considerations for
datasets, depending on the amount of data they contain:
Size Recommendations

Dataset Size (MB)    Data Size (Cells)    Performance Level
< 0.250              < 6,500              Optimum (Recommended)
0.250–1.5            6,500–25,000         Acceptable
> 1.5                > 25,000             Slow (Not recommended)
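The cell-count guideline above can be applied with a trivial check. The helper below is hypothetical; the thresholds are the course's sizing values, not product limits:

```python
# Hypothetical helper mirroring the dataset sizing guideline: classifies a
# dataset by its cell count (rows x columns). Thresholds are the course
# guideline values (6,500 and 25,000 cells), not product limits.
def dataset_performance(rows, columns):
    cells = rows * columns
    if cells < 6500:
        return "Optimum"
    if cells <= 25000:
        return "Acceptable"
    return "Slow"

print(dataset_performance(500, 10))    # 5,000 cells
print(dataset_performance(1000, 20))   # 20,000 cells
print(dataset_performance(5000, 10))   # 50,000 cells
```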
Using Drilling to Reduce Dashboard Data
Based on your dashboard design requirements, you may want to display data at
various hierarchical levels. Some dashboards use multiple layouts to satisfy
this requirement. Although this option is convenient from a design standpoint,
it can produce a negative performance impact due to an increase in the overall
size of the data structures required to support the dashboard.
An alternative approach to using multiple layouts is the drilling feature in
MicroStrategy. When drilling is implemented on a dashboard, the end user can
access data calculated on different levels by clicking on an attribute and
choosing the desired level. This method is beneficial because instead of
pre-calculating all the required levels of data up front, the dashboard provides
end users the option to access data at other levels and incur the associated
processing cost on an as-needed basis.
Using Links to Reduce Dashboard Data
If your dashboard design requires displaying large amounts of data across
multiple panels, split the dashboard into multiple smaller dashboards, and
use links to connect them. Like other data reduction techniques, this
practice improves performance by reducing the amount of data that must be
processed initially. The dashboard's initial load time is reduced, and
additional data can be loaded as needed.
Although links in dashboards help you reduce data and increase performance,
they do have some limitations. For example, passing value prompt answers
from one dashboard to another in a link is a common practice, and one that can
have a negative impact on performance. This is especially apparent in the
following two cases:
When there are a large number of prompt answers
When a prompt answer contains an attribute with a large number of
elements
When at least one of the above conditions is true, the generated URLs are
larger than normal, and can cause errors with some browsers. Additionally, in
the case of Flash dashboards that are not sourced from Intelligent Cubes, all
possible permutations of the dynamic prompt answers must be generated by
Intelligence Server when the dashboard is executed. These permutations are
passed to the client, significantly increasing the size of the binary structures,
and reducing the dashboard performance.
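As a rough, hypothetical back-of-envelope model (not MicroStrategy's actual URL format), the sketch below shows how the length of a link URL grows with the number of prompt-answer elements passed between dashboards:

```python
# Rough, hypothetical model of link URL growth when prompt answers are
# passed between dashboards: each selected element adds roughly its encoded
# ID length (plus a separator) to the URL. Numbers are illustrative only.
def estimated_url_length(base_length, element_ids):
    return base_length + sum(len(e) + 1 for e in element_ids)

few = estimated_url_length(200, ["2012"] * 4)     # a handful of answers
many = estimated_url_length(200, ["2012"] * 500)  # a large element list
print(few, many)  # a large element list can push URLs past browser limits
```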
When possible, divide dashboards with multiple panels into distinct
dashboards that are linked together. To ensure that this method improves
dashboard performance, avoid the prompt limitations noted above, which can
hinder the performance of your links.
Using Intelligent Cubes
Because they reduce the computational distance for calculating dashboard
datasets, Intelligent Cubes can be very beneficial from a performance
perspective. Dashboards whose datasets contain similar attributes and metrics
and have similar prompts are generally good candidates for using Intelligent
Cubes to avoid query executions against the data warehouse.
The following image compares dashboard data sourced from the database with
data sourced from Intelligent Cubes:
Intelligent Cubes
Datasets Sourced from Multiple Cubes vs. Single Cube as Dataset
When processing a dashboard that contains several dataset reports sourced
from different cubes, Intelligence Server must go through an initial data
processing and aggregation step that takes the data from each cube and puts
it at the level of the dataset report. After all the datasets have been
generated, Intelligence Server creates the virtual dataset. Finally, through
a second data processing and aggregation step, it generates the data
necessary to populate the different dashboard components supported by those
datasets.
These two initial steps can be skipped entirely by using a single cube as the
dataset. Not all dashboard designs support a single cube as the dataset, but
whenever possible, it is a recommended approach to improve dashboard
execution performance. In MicroStrategy's internal benchmarks, and in work
with clients to optimize their dashboard designs, performance improvements
ranging between 50% and 100% have been observed when switching from datasets
sourced from multiple cubes to a single cube as the dataset.
Although this technique improves dashboard performance, the amount of time
required to publish an Intelligent Cube increases as it grows in size. Use the
table below as a guide to determine whether switching to a single Intelligent
Cube as dataset can improve the performance in your environment:
Datasets Sourced from Multiple Cubes vs. Single Cube as Dataset

Action                   Datasets Sourced from Multiple Cubes    Single Cube as Dataset
Dashboard execution      Slower                                  50–100% faster
Supporting cube sizes    Smaller                                 Larger
Cube publication time    Shorter                                 Longer
The following image compares the two scenarios:
Datasets Sourced from Multiple Cubes vs. Single Cube as Dataset
Design Techniques to Optimize Performance
After completing this topic, you will be able to:
Understand design techniques for optimizing dashboard performance.
General Performance Topics
Viewing Dashboards in Express Mode
MicroStrategy 9.0.2 introduced improved dashboard rendering performance
through Express Mode in MicroStrategy Web, which enables dashboards to render
in less time than in previous versions of MicroStrategy. The
performance benefits provided by Express Mode are especially useful in global
deployments where low bandwidth and high latency are causes for concern.
Express Mode performance improvements are achieved by initially loading
only a small portion of the data, GUI, and JavaScript files for your dashboards.
As users interact with the dashboard, the necessary files are requested and
loaded onto the client. This incremental rendering eliminates the need to
transfer large amounts of data that may be unnecessary for end users.
Express Mode has been shown to improve the performance of DHTML
dashboards by up to 80%, when compared to View Mode in previous versions
of MicroStrategy. To take advantage of this new feature, upgrade to
MicroStrategy 9.0.2 or later and use Express Mode for DHTML dashboards.
Note: Although Flash dashboards offer better performance when viewed in
Express Mode, widgets and other Flash-specific elements do not render in
Express Mode.
DHTML vs. Flash
DHTML and Flash Modes both offer the possibility of viewing highly
interactive, graphically rich dashboards. From a performance standpoint, it is
important to understand the tradeoffs that exist between the two technologies
and the MicroStrategy implementation of each.
While DHTML loads and renders dashboard elements such as layouts and
panels incrementally, Flash loads such elements up front. This means that the
same dashboard definition generally takes longer to initially execute in Flash
Mode than in DHTML Mode. However, subsequent manipulations on the
dashboard, such as changing element selections, switching layouts, and so
forth, are more responsive in Flash Mode.
The following table and images illustrate such tradeoffs between the two
Modes:
DHTML vs. Flash

Action                      Flash       DHTML
Loading and rendering       Up front    Incremental
Initial execution           Slower      Faster
Subsequent manipulations    Faster      Slower

Flash vs. DHTML Multi-Layouts
Flash vs. DHTML Panels
The amount of data that is ultimately displayed in the dashboard is another
important factor to consider when deciding between DHTML Mode and Flash
Mode. Given its incremental nature, DHTML is a more appropriate option for
dashboards that display larger amounts of data. Balance the advantages and
disadvantages of each Mode to determine the appropriate design aspects and
Mode for your specific dashboards.
The following sections provide some general dashboard design best practices
that result in better performance, regardless of the technology selected.
Using Filtering Selectors to Incrementally Load Dashboard
Data
A simple and efficient way to optimize the initial rendering of a dashboard is
the implementation of filtering selectors to incrementally load data. Filtering
selectors initially retrieve only one slice of data, thereby significantly reducing
the initial load time of a dashboard. When a user changes the selection in a
filtering selector, MicroStrategy Web fetches the new data slice from
Intelligence Server.
Although filtering selectors speed up the initial load, the loading time
associated with subsequent fetches may not be suitable for your dashboard. In
this situation, you can use a slicing selector, which initially loads all of
the data in the dashboard. Create the selectors in your dashboards based on
your specific initial-loading and subsequent-manipulation requirements. The
table below outlines the appropriate use of slicing selectors and filtering
selectors:
Standard Selectors Versus Filtering Selectors

Action                                Standard Selectors                    Filtering Selectors
Initial dashboard execution           Slower: all data slices must be       Faster: only one slice of data must
                                      processed and fetched up front        be processed and fetched
Subsequent dashboard manipulations    Faster: all data is already on the    Slower: a round trip to Intelligence
                                      client; no further trips to           Server is required to bring the next
                                      Intelligence Server are required      slice of data to the client

When implementing a filtering selector, encourage end users to select a
default attribute element for the selector and save the document. If a user
saves the document with these selections, the selector and target are
displayed according to those selections when the document is re-executed.
Choosing a default selection other than All improves initial rendering times
because it limits the amount of data that is displayed the first time the
dashboard is executed.
Standard Versus Filtering Selectors
To enable filtering selectors in a dashboard:
1 Edit the document.
2 Locate the selector, right-click it, and select Properties and Formatting.
3 In the Properties and Formatting window, select Selector.
4 Select the Apply selections as a filter check box as shown in the following
image:
5 Click OK.
Using Text Boxes Instead of Small Grids
Some dashboard designs call for sections containing a large group of metrics
and metric values. Implementing such dashboard sections with text boxes
instead of small grids is highly recommended from a performance standpoint.
Grid objects tend to be heavier than text boxes in terms of supporting data
structures and in terms of data transmission and rendering time on the client.
As shown in the following image, a dashboard section implemented with text
boxes can execute up to ten times faster than the same section implemented
with grids:
Text Boxes Versus Grids
Reducing the Number of Grids, Graphs and Visualizations
The number of elements such as grids, graphs, and visualizations in a
dashboard has a direct impact on the processing time required by the
Intelligence Server to generate the necessary data structures to support it. In
the case of a DHTML dashboard, the Web server also utilizes CPU cycles to
convert the XML data structures of each grid or graph into HTML and
JavaScript to be rendered in the client.
In addition to the consumption of processor resources, large dashboards with
many objects require a large amount of network bandwidth to transfer data to
the client. For performance optimization purposes, it is therefore
recommended to restrict the number of objects, such as grids and graphs, in a
dashboard.
The table below displays sizing recommendations for dashboards, based on the
number of grids, graphs, and visualization elements they contain:
Size Considerations

# of Elements    Performance Level
< 10             Optimum (Recommended)
10–25            Acceptable
> 25             Slow (Not Recommended)

Moving Static and Non-Dependent Content Outside of Selector Scope
Filtering selectors can have a negative performance impact on static content,
text fields, and hyperlinks that are within the target panel. When filtering
selectors are used on a panel, these items are copied and sent back to the
client for every new slice of data. To avoid this inefficiency, MicroStrategy
recommends moving all content that is either static or not dependent on the
selector out of the target panel. If you would like to keep all of the
content on the panel, implement a slicing selector, and execute the dashboard
to determine whether it improves performance.
Consolidating Grids and Graphs into a Single Advanced Widget
In some cases, it might be possible to replace a set of grids, graphs, and
selectors by consolidating them into a single, better-performing advanced
widget.
One consolidation example is the Microcharts widget, which can implement
selectors and display data in several formats, including values from a grid, a
bullet chart, and a sparkline chart. By implementing the Microcharts widget in
place of multiple grids and graphs, you can decrease the processing time of
your dashboard, and make more efficient use of screen space. The image below
shows an example of this scenario:
Consolidating into a Single Widget
Reducing Grid Sizes
Displaying large grids in a dashboard impacts performance negatively at
different levels of the architecture, especially during the client rendering stage
both in Flash and DHTML Modes. In Flash Mode, large grids consume a
significant amount of memory on the client, which can impede user interaction
with the dashboard.
To avoid any performance degradation associated with overly large grids, use
the table below to determine an appropriate size for the grids in your
dashboard:
Size Considerations

# of Cells    Performance Level
< 100         Optimum (Recommended)
100–1,000     Acceptable
> 1,000       Slow (Not Recommended)

Limiting the Number of Layouts
Because layouts are loaded incrementally in Express Mode, and up front in
Flash Mode, initial rendering and subsequent manipulation performance varies
between the two Modes. Regardless of the Mode used to display the dashboard,
the number of layouts has an impact on the size of the document's virtual
dataset and the processing power required to generate it.
Your layout design can also hinder performance if each layout has content
that points to a different dataset. This performance impact is due to the
fact that all datasets, including those from layouts that are not viewed,
must be processed when the dashboard is initially executed.
To improve the rendering speed of your dashboard, MicroStrategy recommends
limiting the number of layouts. If you choose to implement multiple layouts,
designate a single dataset for all layouts to improve the rendering time for
the virtual dataset.
Disabling the Fit to Contents Option for Grids in MicroStrategy Web
If your design requirements call for a large grid, MicroStrategy recommends
that you specify a fixed height and width for it in MicroStrategy Web.
Disabling the Fit to Contents option mitigates some of the performance
degradation associated with large grids.
To disable the automatic resize option:
1 In MicroStrategy Web, edit the dashboard for which you want to disable the
automatic resize option.
2 In the left-hand side pane, select Document Structure as shown in the
following image:
3 In the Document Structure pane, expand the section of the dashboard that
contains the grid for which you want to disable the automatic resize option.
4 Right-click the grid object and select Properties and Formatting.
5 In the Properties and Formatting window, under Properties, select Layout.
6 Under Size, for Width and Height, if Fit to contents is selected, click Fixed
at to disable it.
7 In the box, type the size in inches as shown in the following image:
8 Click OK to save the properties.
Limiting the Use of Thresholds and Subtotals
Thresholds and subtotals require Intelligence Server to perform extra
processing steps to resolve them. This is mainly due to the extra
calculations that the Analytical Engine must make on top of metric
calculations, and it is further compounded in the case of thresholds by the
fact that they sometimes use external images. In the case of DHTML
dashboards, the size of the XML data structures also increases significantly
when thresholds are used, causing further performance degradation.
A small number of thresholds and subtotals can be instrumental in drawing
attention to certain areas of a dashboard. However, excessive use of these
components can hinder your dashboard's performance. The image below
demonstrates how thresholds can quickly add to the rendering time of a
dashboard:
Reducing Thresholds for Better Performance
An alternative to thresholds is to use dynamic images based on metrics that
calculate the image numbers (for example, arrow1.gif, arrow2.gif, and
arrow3.gif).
Although this method still incurs the additional cost of the metric calculations
required to determine the image to display, the XML data structure size that it
generates is comparatively smaller than that generated by traditional
thresholds.
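A minimal sketch of this dynamic-image technique, assuming a hypothetical percent-change metric and the image names mentioned above; the band edges are illustrative, not a MicroStrategy default:

```python
# Hypothetical sketch of the dynamic-image technique: a metric computes an
# image number, and the document displays the matching file (arrow1.gif,
# arrow2.gif, arrow3.gif) instead of applying a threshold. Band edges are
# illustrative only.
def arrow_image(pct_change):
    if pct_change > 0.05:
        n = 1          # up arrow
    elif pct_change < -0.05:
        n = 3          # down arrow
    else:
        n = 2          # flat arrow
    return f"arrow{n}.gif"

print(arrow_image(0.12), arrow_image(0.01), arrow_image(-0.20))
```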
Limiting the Use of Smart Metrics
From a performance perspective, smart metrics differ from standard metrics in
a very significant way: they must be dynamically aggregated and subtotaled at
all levels by the Analytical Engine. For dashboards with large datasets that
require aggregation and subtotaling at multiple levels, this can have a
significant impact on performance and overall Intelligence Server memory
utilization. For this reason, it is recommended to minimize the use of smart
metrics to achieve optimum performance in a dashboard.
Internal benchmarks comparing smart metrics with compound metrics have shown
that switching from smart to compound metrics can decrease dashboard
execution times by as much as 30% and Intelligence Server memory utilization
by as much as 50%.
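The extra work comes from how smart totals are defined. The sketch below, with hypothetical data, contrasts a naive sum of row-level ratios with a smart total that is recomputed from the component totals:

```python
# Sketch of why smart metrics cost extra work: the subtotal of a smart
# metric (e.g. a profit margin) is recomputed from its component totals at
# every level, rather than simply summed from the row values. Data below is
# hypothetical.
rows = [{"profit": 20, "revenue": 100}, {"profit": 30, "revenue": 50}]

row_margins = [r["profit"] / r["revenue"] for r in rows]   # 0.2 and 0.6
naive_total = sum(row_margins)                             # meaningless as a margin
smart_total = (sum(r["profit"] for r in rows)
               / sum(r["revenue"] for r in rows))          # 50 / 150
print(naive_total, smart_total)
```

Because the smart total cannot be rolled up from the rows, the Analytical Engine must revisit the component columns at every aggregation level, which is the source of the processing and memory cost described above.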
DHTML Performance Topics
Enabling On-Demand Fetching of Panels
On-demand fetching of panels enables users to retrieve only the current panel
when the dashboard is executed in MicroStrategy Web, contributing to better
initial performance. Other panels are only loaded when the user requests them
and are subsequently cached in the browser for quicker retrieval later.
To enable on-demand fetching of panels:
1 In MicroStrategy Web, edit the dashboard for which you want to enable
on-demand fetching of panels.
2 On the Tools menu, select Document Properties.
3 In the Document Properties window, under Document Properties, select
Advanced.
4 Under Panel Stacks (DHTML Only), in the Pre-load drop-down list, select
Current panel only of all panel stacks, as shown below:
Enabling Incremental Fetch on Grids
Incremental fetch divides large documents or layouts into pages, thereby
loading the data in blocks rather than all at the same time. This feature
improves usability and performance of a large document or layout, by reducing
the load and overall memory usage on the Web server.
For example, each row in the Details section of a document contains the Item
attribute and several metrics. Incremental fetch is applied, with a block size of
ten. In Editable Mode, Interactive Mode, or Express Mode in MicroStrategy
Web, only ten rows of items are displayed on a single page. The user must
navigate to another page to display more information.
Note: You can define the incremental fetch options in both MicroStrategy Web
and in MicroStrategy Desktop, but incremental fetch is applied only when the
document is executed in MicroStrategy Web.
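The block-based behavior described above can be pictured with a small sketch (a hypothetical helper, not a MicroStrategy API), where a 35-row Details section with a block size of ten yields ten rows per page:

```python
# Minimal sketch of the incremental-fetch idea. fetch_block is a
# hypothetical helper, not a MicroStrategy API: it returns only one
# block-sized page of a large result set at a time.
def fetch_block(rows, block_size, page):
    """Return the rows for a 1-based page number."""
    start = (page - 1) * block_size
    return rows[start:start + block_size]

items = [f"Item {i}" for i in range(1, 36)]   # a 35-row Details section

page1 = fetch_block(items, block_size=10, page=1)
page4 = fetch_block(items, block_size=10, page=4)

print(len(page1))   # 10 rows shown on the first page
print(page4)        # the last, partial page: Items 31 through 35
```

Only the requested block is prepared and transferred, which is what reduces the load on the Web server for large documents.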
To enable incremental fetch of data for grids:
1 In MicroStrategy Web, open the document in Design Mode.
2 On the left-side pane, select Document Structure.
3 In the Document Structure pane, expand the section of the dashboard that
contains the grid for which you want to enable incremental fetch.
4 Right-click the grid object, and select Properties and Formatting.
5 In the Properties and Formatting window, under Properties, click
Advanced.
6 Under Incremental Fetch, select the Enable incremental fetch in Grid
check box.
7 In the Maximum number of rows per page box, define the fetch size.
8 Click OK to save the changes.
Enabling Incremental Fetch on Grouping and Text Boxes
Incremental fetch divides large dashboards or layouts into pages, thereby
loading the data in batches rather than all at the same time. This feature
improves the usability and performance of a large dashboard or layout, by
reducing the load and overall memory usage on the Web server. If the
dashboard or layout is grouped, you can select any group as the level. If it is
not, then the block size is applied to the Detail section.
To enable incremental fetch on grouping and text boxes:
1 Edit the document.
2 In the Document Editor, if the document contains multiple layouts, select
the layout to which you want to apply incremental fetch.
3 On the Tools menu, select Document Properties.
4 In the Document Properties window, under Layout Properties, select
Advanced.
5 Select the Enable incremental fetch check box.
6 In the Fetch Level drop-down list, select the grouping level to which you
want to enable incremental fetch.
7 In the Block Size box, type the number of objects to return.
8 Click OK.
Using Group By
Similar to the use of filtering selectors, using Group By in a DHTML dashboard
decreases the initial load to only the slice of data selected in the Group By,
improving dashboard performance.
Note: This optimization only applies to DHTML dashboards, not to Flash.
Flash Performance Topics
Flash Dashboard Components
For Flash dashboards, with the exception of slices generated using filtering
selectors, Intelligence Server produces all the binary structures necessary to
display all the data present in the document, whether it is initially visible to the
user or not. This results in a longer initial dashboard execution and rendering
period, but makes subsequent manipulations extremely responsive.
A dashboard built with Flash has the following components:
• DashboardViewer.swf file: This file enables the visualization of a
dashboard in Flash Mode. It is downloaded every time a Flash dashboard is
executed, and is normally 1.4 MB in size. (Hint - see Enabling Flash
Caching on the Client Browser starting on page 189.)
• Data binary file: This file contains the data necessary to render the
dashboard. Its size depends on the amount of data that MicroStrategy Web
sends to the client, and it has a significant impact on dashboard
performance. Large grids or slicing selectors with many elements pointing
to grids/graphs are examples of components that increase the data binary
file size.
• Definition binary file: This file contains formatting and properties for
each element in a dashboard. Its size depends on the amount of data
displayed and on the formatting of the dashboard. This component also has
a significant impact on dashboard performance. Widgets, a large number of
panels, and highly customized grids/graphs are examples of components that
increase the definition binary file size.
• Flash properties files: These files are bundles and supporting files that
are used to render the dashboard in Flash. These files do not impact
performance.
To ensure adequate performance, check the size of the data binary and
definition binary files when designing your dashboards. The tables below
display the sizing recommendations for these binary files:

Data Binary Size Recommendation

File Size    Performance Level
< 1 MB       Optimum (Recommended)
1-5 MB       Acceptable
> 5 MB       Slow (Not Recommended)

Definition Binary Size Recommendation

File Size      Performance Level
< 500 KB       Optimum (Recommended)
500 KB-1 MB    Acceptable
> 1 MB         Slow (Not Recommended)
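As a quick design-time check, the thresholds from the tables above can be applied to measured file sizes. The function below is an illustrative sketch, not part of any MicroStrategy tool:

```python
# Hedged sketch: classify a measured binary size against the recommended
# thresholds from the tables above. rate_binary is illustrative, not part
# of any MicroStrategy tool.
KB = 1024
MB = 1024 * KB

def rate_binary(size_bytes, optimum_max, acceptable_max):
    """Return the performance level for a binary of the given size."""
    if size_bytes < optimum_max:
        return "Optimum"
    if size_bytes <= acceptable_max:
        return "Acceptable"
    return "Slow"

# Data binary: < 1 MB optimum, 1-5 MB acceptable, > 5 MB slow.
print(rate_binary(800 * KB, 1 * MB, 5 * MB))   # Optimum
# Definition binary: < 500 KB optimum, 500 KB-1 MB acceptable, > 1 MB slow.
print(rate_binary(2 * MB, 500 * KB, 1 * MB))   # Slow
```

Running such a check while iterating on a dashboard design makes it easy to notice when a new grid or widget pushes a binary out of the recommended range.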
A data binary file size greater than 1 MB can produce poor dashboard
performance. To minimize the data binary file size, use the performance
optimization techniques outlined earlier (see Reducing the Number of
Datasets in a Dashboard starting on page 163 and Reducing the Amount of
Data in a Dataset starting on page 167).
Reducing the Number of Panels and Panel Stacks
In Flash dashboards, although only the first panel in a panel stack is visible, the
underlying panels contribute to the initial binary structure generation and
definition. Therefore, additional panels affect memory consumption, as well as
the time it takes for a dashboard to be generated, transmitted, and rendered.
To reduce a dashboard's initial load time, reduce the number of panels and
panel stacks that it contains. The table below displays sizing recommendations
for Flash dashboards, based on the number of panels that they contain:
Performance Level for Different Panels

# of Panels   Performance Level
< 10          Optimum (Recommended)
10-25         Acceptable
> 25          Slow (Not Recommended)

Reducing Formatting Density
Formatting parameters, such as shading, rounded corners, boxes, and images,
increase the overall size of the dashboard definition binary component, which
in turn degrades dashboard execution performance. From a performance
optimization perspective, it is therefore important to limit the formatting
used in a dashboard.

Enabling Flash Caching on the Client Browser
Caching the dashboardviewer.swf file in the client browser has a positive
impact on performance. The file is approximately 1.4 MB in size, and if it is
not cached, it is transmitted from the server to the client with every request,
resulting in suboptimal performance.
The steps to enable browser caching vary depending on the browser type. The
following procedure provides the steps to enable browser caching in Internet
Explorer.
Note: For information on how to enable browser caching for other browser
types, refer to the browser's documentation.
Note: The third-party product discussed below is manufactured by a vendor
independent of MicroStrategy, and the information provided is subject
to change. Refer to the appropriate third-party vendor documentation
for updated browser caching information.
To enable browser caching using Internet Explorer:
1 In Internet Explorer, on the Tools menu, select Internet Options.
2 In the Internet Options window, in the General tab, under Browsing
history, click Settings.
3 In the Temporary Internet Files and History Settings window, click
Automatically.
4 In the Disk space to use box, type 1024.
5 Click OK to save your settings.
Limiting the Use of Flash Widgets
Widgets combine sophisticated visualization techniques with rich
interactivity to enable users to understand their data more effectively. In
some instances, widgets can display multiple grids and graphs in a single
visualization, and therefore reduce the size of the output binaries. However,
excessive use of widgets in a dashboard can consume a large amount of the
client's browser memory, making other manipulations, such as changing panels,
changing layouts, or making selections, extremely slow.
Implement widgets to replace multiple components, but make sure they do not
overcrowd and slow down your dashboard. To aid you in determining the number
and type of widgets to use, the image below displays a list of the available
widgets for MicroStrategy dashboards, sorted in ascending order by client
memory usage:
Widgets by Memory Usage and Responsiveness
Optimizing Visualizations
Visualizations are valuable tools that can enhance your analysis. When creating
custom visualizations, the most important performance factor to consider is
keeping the SWF file size as small as possible. The following techniques can be
used for this optimization:
• Create links to images, rather than embedding them
• Import only those libraries that are absolutely necessary
• Ensure that your data is pre-aggregated to the desired level
• Implement incremental fetching of data for extensive visualizations
Other performance optimization techniques to consider when developing
custom visualizations:
• Use a pure Flex API approach, integrated with the external portal, and
retrieve MicroStrategy data using the taskproc page.
• When coding, refresh only those datasets that have changed, not the
entire document.
• Use a stand-alone .mht file integrated with the external portal in cases
where a large number of users are expected.
Note: This approach does not support the use of prompts and security
filters.
• Use Flex programming best practices.
• Use the AIR application as an alternative.
Note: This approach limits the amount of data that can be sent, and
requires the installation and maintenance of an additional application.
Optimizing Performance for Mobile
After completing this topic, you will be able to:
Optimize your mobile device to experience performance gains when rendering
MicroStrategy dashboards.
Execution Workflow for Mobile Devices
The performance of a Mobile dashboard is dependent on several levels of
execution, as displayed in the diagram below:
Mobile execution Flow
Note: Each dashboard requires at least one dataset, which can be an
Intelligent Cube or a report.
1 When the dashboard is executed, all datasets execute simultaneously
against the data source.
2 Intelligence Server then processes these results into a single virtual dataset.
3 The dashboard components are generated, and rendered as XML.
4 The XML data is then transferred over the network.
5 The data is rendered by the mobile device.
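To locate the bottleneck among these steps, it can help to time each stage separately. The following sketch uses placeholder stage functions (the sleeps stand in for real dataset execution, joining, XML generation, transfer, and rendering):

```python
# Illustrative instrumentation sketch: time each stage of the mobile
# execution workflow to find the bottleneck. The stage callables are
# placeholders (sleeps) standing in for the real work: dataset execution,
# virtual dataset joining, XML generation, network transfer, and rendering.
import time

def run_stages(stages):
    """Run (name, fn) pairs in order and return per-stage elapsed seconds."""
    timings = {}
    for name, fn in stages:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

stages = [
    ("dataset execution",    lambda: time.sleep(0.02)),
    ("virtual dataset join", lambda: time.sleep(0.01)),
    ("XML generation",       lambda: time.sleep(0.005)),
    ("network transfer",     lambda: time.sleep(0.005)),
    ("device rendering",     lambda: time.sleep(0.01)),
]

timings = run_stages(stages)
slowest = max(timings, key=timings.get)
print(slowest)   # the stage to optimize first; here, "dataset execution"
```

Whichever stage dominates determines which of the tuning strategies below (caching, dataset reduction, prompts and selectors, display strategy) will pay off most.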
To improve the performance of a given dashboard, the execution in one or
more of these steps must be streamlined. Performance requirements for
Mobile documents and dashboards are more rigorous than the requirements
for documents running on MicroStrategy Web or Desktop clients. This
difference can be attributed to the fact that mobile clients must contend
with slower Wi-Fi and 3G networks and with memory constraints. Because of these
limitations, MicroStrategy recommends implementing the performance tuning
strategies outlined in this topic to deliver high performing mobile documents
and dashboards to your mobile end users.
Note: The strategies outlined in this topic must be used in conjunction with
the best practices for high performance dashboards provided in this
course.
Improving Execution Time for a Mobile Document or Dashboard
Tuning strategies can be implemented at each step of the execution process to
improve overall document or dashboard performance on the iPhone or iPad.
To optimize your design and decrease execution time, the main techniques to
follow are:
• Monitor network strength
• Implement a caching strategy
• Combine or remove datasets
• Use prompts and selectors
• Strategically display your data
Monitoring Network Strength
The MicroStrategy Mobile application ability to download data relies on the
signal strength and availability of the Internet and Mobile Server connections.
To help Mobile users find the best available Mobile Server connection, a
network connection monitor is available in the MicroStrategy Mobile
application. To configure this feature, use the Acceptable Network Response
and Network Timeout parameters in your Mobile configuration.
Note: For detailed instructions on configuring the network monitor, see the
MicroStrategy Mobile Design and Administration Guide.
Based on the configured network latency and network timeout levels, the
Mobile application assigns a poor, fair, or good quality indicator to each
available Mobile Server connection. Mobile users can view the signal strength
for the available Mobile networks to determine the strongest connection. This
information is accessible through the Network button found on the Settings
screen, as displayed in the iPhone example below.
Signal Strength Displayed at the Mobile Device
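The poor/fair/good designation can be thought of as a simple threshold rule over measured latency. The sketch below is illustrative only; the actual thresholds come from the Acceptable Network Response and Network Timeout parameters in your Mobile configuration:

```python
# Sketch of the poor/fair/good designation described above. The threshold
# values here are illustrative stand-ins for the Acceptable Network Response
# and Network Timeout parameters in the Mobile configuration.
def connection_quality(latency_ms, acceptable_ms=500, timeout_ms=2000):
    """Map a measured round-trip latency to a quality indicator."""
    if latency_ms <= acceptable_ms:
        return "good"
    if latency_ms <= timeout_ms:
        return "fair"
    return "poor"

for latency in (120, 900, 3500):
    print(latency, connection_quality(latency))
# 120 -> good, 900 -> fair, 3500 -> poor
```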
To allow Mobile users to identify the Mobile Server with the strongest available
connection through the network monitor, configure the appropriate network
connection settings for all Mobile configurations and instruct your network
administrator to ensure that your office Wi-Fi connection provides a consistent
connection throughout the network area.
Implementing a Caching Strategy
To reduce data warehouse execution time, employ caching on Intelligence
Server. The caching options displayed below are enabled by default, but can be
turned off by an administrator. For optimal performance, ensure that report,
attribute element, document, and mobile pre-load caching are all enabled in
your environment.
Note: For more information about caching strategies to improve performance,
see Caching and Intelligent Cubes starting on page 35.
To further reduce the execution time of your dashboard or document, enable
caching on the mobile device itself. This step eliminates the need to download
the document each time it is executed.
Note: For information on how to enable caching on the device, see
Configuring Mobile Devices to Pre-load Caches starting on page 197.
Alternatively, you can create a mobile subscription to push the document or
dashboard to mobile devices at a specified date and time. You can arrange to
have the document/dashboard delivered during time frames when the device is
least likely to be used.
Note: For more information on mobile subscriptions, see the MicroStrategy
Mobile User Guide.
Configuring Mobile Devices to Pre-load Caches
Mobile documents and dashboards have varying load times depending on the
amount of information included and the design strategies implemented. To
ensure that all of your documents load promptly as they are selected, configure
your mobile devices to load document caches while the MicroStrategy Mobile
application launches.
By default, if a cache exists for a subscribed report or document, that cache is
loaded when the user opens that report or document, for the page-by selection
or layout that is opened. This speeds up initial access to the application.
However, if you choose to pre-load caches, mobile clients experience a longer
initial load time in exchange for instant access to individual documents with
available caches. Test both methods to determine which option is suitable for
your environment and user base.
To configure your caches to pre-load automatically:
1 Access the Mobile Server Administrator page:
In Windows: On the Start menu, point to Programs, followed by
MicroStrategy, followed by Mobile, followed by Mobile Server, and
select Mobile Administrator.
In UNIX/Linux: After you deploy MicroStrategy Mobile Server
Universal and log on to the mstrMobileAdmin servlet using proper
authentication, the Mobile Server Administrator page opens.
Note: The default location of the Administrator servlet varies depending
on the platform you are using.
2 In the Mobile Server Administrator page, on the menu bar, click Mobile
Configuration.
3 Click Define New Configuration to create a new configuration, or click
Edit to change an existing configuration.
4 Depending on the device type, in the drop-down list, select either iPad or
iPhone.
5 Define the new configuration, ensuring that the Automatically pre-load
caches check box is selected.
6 Click Save.
Note: If this check box is selected, caches are loaded for all subscribed
reports and documents when the application is launched.
To configure the project to cache documents without an expiration date:
1 In Desktop, right-click the desired project, and select Project
Configuration.
2 In the Caching: Result Caches: Maintenance subcategory, select the Never
expire caches check box.
3 Click OK.
If possible, run documents and dashboards against an Intelligent Cube to
increase data retrieval speed. Large warehouses can significantly slow down
data retrieval and have a large impact on overall performance. Warehouse data
should be split into multiple Intelligent Cubes for improved performance.
Combining and Removing Datasets
Because all datasets require time to execute and join together, remove any
datasets that are not used on your document or dashboard. Also, browse
through the objects included in each dataset to ensure that they are essential to
your document or dashboard. Combine datasets as much as possible to
minimize the total number.
Using Prompts and Selectors to Reduce the
Dashboard/Document Size
Implementing prompts can be a helpful method of reducing the amount of data
that is returned and reducing the footprint of your document or dashboard. To
control user input for a prompt, specify a minimum or maximum value, or set
limits on the number of possible answers. This enables you to create a more
predictable result set for each prompt.
To display only a subset of the available data and allow users to interact with
the result set after the document or dashboard has been displayed, use
selectors. Implementing selectors clears up screen space by minimizing the
amount of data that is displayed at one time. Use a slicing selector or filtering
selector based on the unique characteristics of each document.
Note: For more information on how selectors can improve dashboard
performance, see Using Filtering Selectors to Incrementally Load
Dashboard Data starting on page 174. For an extended description of
slicing and filtering selectors and details on their differences, see the
MicroStrategy Document Creation Guide.
Strategically Display the Data
To speed up the rendering of data on the mobile device, display only the data
and elements that are absolutely necessary for the document or dashboard. One
method to accomplish this is to distribute the data from a crowded dashboard
across multiple dashboards. You can then link the original
dashboard to the others by placing links on tabs, creating the experience of a
single dashboard with multiple panels.
Another method of improving the speed of data rendering is through the use of
Information Windows on iPad documents and dashboards. You can configure
an Information Window to display additional data when a user clicks an
element on a grid or graph, only incrementally loading the data if the user
requires it.
Note: For more information on creating Information Windows, see the
MicroStrategy Mobile User Guide.
MicroStrategy 9.2.1 enhances rendering of documents on mobile devices
through incremental loading. The data displayed in your document is loaded in
chunks to allow faster access to the end user. As the first portion of data is
viewed, subsequent portions of data are incrementally loaded in the
application. To take advantage of the incremental loading feature for the
MicroStrategy Mobile application, upgrade to the latest version.
General Design Best Practices for Documents and Dashboards
When designing documents and dashboards for mobile devices, general design
principles must be taken into account to achieve optimal performance. The
most common issues that contribute to performance lag for any document or
dashboard are overcrowding of screen space with superfluous elements and
data, redundant panels, and unnecessary selectors. Mitigate or eliminate these
issues in documents and dashboards that are designed for mobile devices.
Note: To simplify your design and provide optimal performance to end users,
adhere to the best practices outlined in this course (Hint - see High
Performance Dashboards starting on page 159.)
MicroStrategy Mobile 9.2.1 Performance Optimizations
MicroStrategy recommends upgrading to the latest available version to take
advantage of all performance optimizations included with each release. The
following are examples of some of the Mobile optimizations found in
MicroStrategy 9.2.1:
• Memory governing on the iPad: To efficiently use the available memory
on the iPad, the memory governors for the MicroStrategy Mobile
application have been separated into three distinct areas:
- Definition: 5 MB maximum
- Data slice: 10 MB maximum
- Rendering: 60 MB maximum
The new memory governors in MicroStrategy 9.2.1 allow you to transfer
and render Mobile dashboards that have large binary files. The binary limit
of 2 MB in previous versions has been expanded to allow dashboards with
binary files of up to 50 MB. Although this new maximum permits a much
larger binary file, MicroStrategy recommends designing dashboards to fall
well under the new maximum of 50 MB. When designing your dashboards,
keep in mind that large binary files require more time to load and render.
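A design-time sanity check against these limits might look like the following sketch (the function and field names are illustrative, not a MicroStrategy API):

```python
# Design-time sanity check against the 9.2.1 iPad memory governors quoted
# above (definition 5 MB, data slice 10 MB, rendering 60 MB) and the 50 MB
# overall binary limit. Function and field names are illustrative.
MB = 1024 * 1024

GOVERNORS = {
    "definition": 5 * MB,
    "data_slice": 10 * MB,
    "rendering": 60 * MB,
}
TOTAL_BINARY_LIMIT = 50 * MB

def check_dashboard(sizes):
    """Return the list of limits that the given component sizes exceed."""
    violations = [name for name, limit in GOVERNORS.items()
                  if sizes.get(name, 0) > limit]
    if sizes.get("definition", 0) + sizes.get("data_slice", 0) > TOTAL_BINARY_LIMIT:
        violations.append("total binary")
    return violations

ok = check_dashboard({"definition": 3 * MB, "data_slice": 8 * MB,
                      "rendering": 40 * MB})
bad = check_dashboard({"definition": 6 * MB, "data_slice": 48 * MB,
                       "rendering": 40 * MB})

print(ok)    # [] - within every limit
print(bad)   # ['definition', 'data_slice', 'total binary']
```

Any reported violation points at the component to redesign, which is also the purpose of the performance and memory logs described at the end of this list.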
• Incremental loading of data on multi-layout dashboards: The new
incremental loading feature allows MicroStrategy Mobile to display a
dashboard as soon as the data for the initial layout has been loaded. As the
user interacts with the initial layout, the data on subsequent layouts is
seamlessly loaded on the mobile device. This performance optimization
minimizes the initial load time for dashboards with multiple layouts, and
provides faster access to data for end users.
• iPhone and iPad Unification: The unification of the iPhone and iPad
architecture in MicroStrategy Mobile allows a comparable experience for
the application on both devices. MicroStrategy 9.2.1 expands the
functionality of the iPhone application to better match the user experience
on the iPad, thereby providing a seamless experience for users who access
reports and documents on both devices. Expanded functionality for the
iPhone includes document interactivity, additional widgets, panel stacks,
sorting, selectors, animations, and tool tips.
• Increased cell limit for grids: By reducing the footprint for grid cells, the
MicroStrategy Mobile application is now capable of displaying grids that
contain up to 30,000 cells. This new limit improves upon the 8,000 cell
maximum in previous versions of MicroStrategy Mobile.
• Express cache hit mechanism: The amount of time required to send a
document cache from Intelligence Server to the MicroStrategy Mobile
application has been reduced. This performance upgrade is especially
apparent in documents with a large number of datasets. When end users
execute documents that have server-side caches, they can experience an
improvement in transfer speed of up to 60% compared to previous
versions of MicroStrategy Mobile.
• Performance and memory logs: Improved logging provides information
on how memory is used on mobile devices during dashboard execution. If
your dashboard exceeds any of the memory governing limits, examine the
logs to determine the aspect of the dashboard that needs to be redesigned.
Lesson Summary
In this lesson, you learned:
• It is important to be familiar with the report execution flow to better
understand how some of the components affect performance in a
MicroStrategy environment.
• Several of the recommendations for optimizing the rendering of reports are
directly related to the strategies for minimizing the size of the XML.
• The simpler the reports are, the smaller the XML is and the faster the
overall response time is.
• When designing reports, ensure that there is a strong need for the report to
be a Working Set report; otherwise, save resources by making it a regular
report.
• Search objects are not efficient in terms of report execution and should be
avoided.
• Reduce the number of elements in custom groups to reduce the footprint of
report execution.
• Keeping the number of joins and aggregations that either the database or
the Analytical Engine has to perform to a minimum has a positive impact
on report execution performance.
• The incremental fetch setting in MicroStrategy Web and the Maximum
number of XML drill paths governor setting in Intelligence Server can
optimize report results rendering in MicroStrategy Web.
• The dashboard dataset execution and virtual dataset execution processes
can become bottlenecks for high performance dashboards. For this reason,
the techniques to optimize performance focus on these two processes
involved in dashboard execution.
• Reducing the number of datasets by eliminating unused datasets or by
consolidating them into fewer ones decreases the virtual dataset creation
time by the Analytical Engine.
• Using a single instance of the dataset in the dashboard and making use of
multiple view filters on the template to extract different portions of the data
reduces the total number of dataset instances and improves data
preparation time.
• When you want to display the same dataset in multiple formats, add one
instance of the dataset and enable the Quick Switch functionality to avoid a
major impact on dashboard performance.
• Ensure that all datasets used in a dashboard contain only the data that
will truly be used for analysis and display.
• Use drilling and links to reduce dashboard data, while providing the option
to access other levels of data.
• Dashboards with datasets containing similar attributes and metrics are
good candidates for using Intelligent Cubes to avoid query executions
against the data warehouse. Additionally, single-cube-as-dataset
dashboards perform 50% to 100% better than dashboards that hit multiple
cubes.
• While DHTML tends to load and render dashboard elements such as
layouts and panels incrementally, Flash tends to load such elements
upfront. The dashboard designer needs to analyze all criteria to determine
which of the two technologies is more appropriate from a performance
standpoint.
• To optimize the initial rendering of a dashboard, slice the data it displays
using filtering selectors.
• A dashboard section implemented with text boxes can execute up to ten
times faster than the same section implemented with grids.
• It is recommended to restrict the number of elements such as grids, graphs,
and visualizations in a dashboard. You can achieve this by consolidating
grids and graphs into a single visualization object.
• Reducing grid sizes, limiting the number of layouts, disabling automatic
resize, limiting the use of thresholds and subtotals, and limiting the use of
smart metrics are all recommended to improve dashboard performance.
• To improve performance on DHTML dashboards, the following
recommendations apply: enabling on-demand fetching of panels; enabling
incremental fetch on grids, grouping, and text boxes; and using Group By.
• A dashboard built with Flash has the following components:
dashboardviewer.swf, the data binary file, the definition binary file,
and the Flash properties files.
• To improve performance on Flash dashboards, the following
recommendations apply: reducing the number of panels and panel stacks,
reducing formatting density, enabling Flash caching on the client, reducing
the use of Flash widgets, and optimizing visualizations.
• Mobile clients must contend with slower Wi-Fi and 3G networks, as well
as memory constraints.
• Tuning strategies can be implemented at each step of the execution process
to improve overall document or dashboard performance on the iPhone or
iPad.
• To optimize your design and decrease execution time on your mobile
device, the main techniques to follow are: implement a caching strategy,
combine or remove datasets, and use prompts and selectors to reduce the
amount of data.
• Another method of improving the speed of data rendering is through the
use of Information Windows on iPad documents and dashboards.
• The most common issues that contribute to performance lag for any
document or dashboard are overcrowding of screen space with superfluous
elements and data, redundant panels, and unnecessary selectors.
• Certain limitations inherent in the iPhone and iPad may cause performance
issues in some of your documents or dashboards, such as the search-as-you-
type feature for iPhone prompts and the binary size limit for iPad
dashboards.
6
DATA WAREHOUSE ACCESS
Lesson Description
This lesson provides solutions to query inefficiencies you may find when
running jobs in your MicroStrategy environment. In this lesson you will learn
about several query efficiency optimization techniques. First, you will be
introduced to query performance considerations. Then, this lesson provides
information about techniques to optimize performance in three distinct areas:
report design, SQL generation, and data architecting. Finally, you will learn
about query optimization techniques related to the Multi Source Option and
ODBC.
Lesson Objectives
After completing this lesson, you will be able to:
Apply the learned skills to optimize report queries to reduce the database
execution time.
After completing the topics in this lesson, you will be able to:
Understand the main considerations about query performance. (Page 209)
Understand the optimizations that can be done on the data warehouse side
to improve query performance. (Page 213)
Apply design techniques to optimize query performance when developing a
report. (Page 219)
Apply tuning techniques to optimize the SQL generated by MicroStrategy.
(Page 224)
Understand query optimization techniques related to the Multi Source
Option and ODBC. (Page 243)
Introduction to Data Warehouse Access
After completing this topic, you will be able to:
Understand the main considerations about query performance.
Database Query Performance
When no caching or cube techniques are used, the performance of a typical business intelligence query is dominated by time spent in the database. In a typical report request, the time spent in the database tends to average around 80% of the total report response time. Therefore, it is very important to optimize queries to reduce the time spent in the database.
Time Spent on the Database
To optimize query performance, there are five main actions that you can take:
Optimizing the number of SQL passes
Reducing full scans of large tables
Reducing the number of table joins
Selecting data from the smallest available tables
Leveraging database-specific parameters
The image below shows a typical SQL query and highlights the five high-level
actions that can help to optimize the query performance:
High-Level Actions to Optimize Query Performance
SQL Generation Algorithm
The SQL Engine translates report definitions created by the report designer into SQL queries based on different parameters, such as template and filter definitions, schema object definitions (attributes, facts, tables, and so forth), and VLDB settings.
The image below summarizes the steps the Engine follows to translate a hypothetical report definition into SQL. The report contains the Time attributes Year, Quarter, and Month; the Product attributes Category and Subcategory; the metrics Units Sold and Units Received; and a filter on Quarter:
SQL Generation Algorithm
1 The dimensionality (lowest level at which the report can be calculated) is
defined. In this example, the dimensionality is Month & Subcategory.
2 The SQL Engine merges the report filter with the metrics.
3 The SQL Engine categorizes metrics with the same dimensionality and filtering conditions into aggregate metric groups.
4 The SQL Engine then identifies the smallest common set of tables to source
metrics from. In this case, it finds ORDER_DETAIL for Units Sold and
INVENTORY for Units Received.
5 The SQL Engine builds the SELECT clause taking into account the
dimensionality of the metric group.
6 The FROM clause is built using source tables for metrics and tables
required to satisfy filtering criteria.
7 A join is constructed to apply the filter.
8 Finally, a GROUP BY clause is applied, again based on the dimensionality.
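The eight steps above can be sketched against an in-memory SQLite database standing in for the warehouse. The ORDER_DETAIL and INVENTORY tables and all column names below are illustrative assumptions, not actual MicroStrategy output: each metric group gets its own aggregation pass at the report dimensionality (Month, Subcategory) with the Quarter filter applied, and a final pass joins the intermediate results.

```python
import sqlite3

# Hypothetical schema; table and column names are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ORDER_DETAIL (month TEXT, quarter TEXT, subcat TEXT, units_sold INT);
CREATE TABLE INVENTORY    (month TEXT, quarter TEXT, subcat TEXT, units_received INT);
INSERT INTO ORDER_DETAIL VALUES ('2011-01','Q1','Shoes',10), ('2011-04','Q2','Shoes',20);
INSERT INTO INVENTORY    VALUES ('2011-01','Q1','Shoes',15), ('2011-04','Q2','Shoes',25);
""")

# Pass 1: Units Sold at the report dimensionality, with the Quarter filter.
# Pass 2: Units Received from its own source table, same dimensionality and filter.
# Final pass: join the two intermediate results on the common dimensionality.
rows = con.execute("""
SELECT s.month, s.subcat, s.units_sold, r.units_received
FROM (SELECT month, subcat, SUM(units_sold) AS units_sold
      FROM ORDER_DETAIL WHERE quarter = 'Q1' GROUP BY month, subcat) s
JOIN (SELECT month, subcat, SUM(units_received) AS units_received
      FROM INVENTORY WHERE quarter = 'Q1' GROUP BY month, subcat) r
  ON s.month = r.month AND s.subcat = r.subcat
""").fetchall()
print(rows)
```

The two inner SELECTs correspond to the aggregate metric groups of step 3; the outer join is the assembly pass of step 7.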
A thorough understanding of the SQL construction logic followed by the SQL
Engine is critical for designing systems that result in optimized queries. A
report developer or system architect can optimize a SQL query (and thus the
performance of the report supported by that query) by applying tuning
techniques in any of the following three phases of a data request:
Report and Schema Design: Ensuring that the objects in a report, and all their supporting schema objects, are set up to generate optimal SQL.
SQL Generation: Making use of database-specific parameters to influence SQL generation.
Data Architecture: Making changes on the data warehouse side (that is, outside of MicroStrategy) to improve database query performance.
While performance optimizations and tuning techniques exist at each layer and phase of the data request process, their impact tends to be greater at the lower levels and more localized at the higher layers. The order in which these optimizations are applied also matters: when possible, start with the lower layers, such as data architecture and database parameters, then move to schema design, and finally to report design.
Data Architecture Optimizations
After completing this topic, you will be able to:
Understand the optimizations that can be done on the data warehouse side to
improve query performance.
The discussions in this course manual so far have dealt with data warehouse access optimizations that are controlled from within MicroStrategy. This section covers changes and optimizations that can be made on the data warehouse side to improve database query performance. While a very large number of database optimization techniques exist and are well documented, in this section you will learn the five most relevant techniques from a MicroStrategy perspective. These techniques are:
Denormalizing the physical warehouse schema
Building Indexes to Specific Data
Partitioning Fact Tables into Subset Tables
Using Views with Pre-calculated Data Aggregations
Building Separate Lookup Tables for Attributes
Denormalizing the Physical Warehouse Schema
A denormalized physical data warehouse schema provides better performance
when using MicroStrategy because it reduces the number of table joins needed
to retrieve relevant data. The image below illustrates this point. The left side
shows a highly normalized database that requires more table joins to resolve a
report. By contrast, the right side shows a re-designed database with some
degree of denormalization that results in a single join and an optimized
database query performance.
Note: Denormalization implies that some information will be duplicated and redundant, consuming more resources, such as memory and disk. These tradeoffs between better performance and increased redundancy must be evaluated on a case-by-case basis.
Denormalizing the Data Warehouse
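The join-count difference can be demonstrated with a minimal sketch in SQLite. The lookup and fact tables below are hypothetical; in the normalized design, the category name sits two joins away from the fact table, while the denormalized item lookup repeats it and needs only one join:

```python
import sqlite3

# Illustrative tables only; real warehouse schemas will differ.
con = sqlite3.connect(":memory:")
con.executescript("""
-- Normalized: the category name is two joins away from the fact table.
CREATE TABLE LU_CATEGORY (cat_id INT, cat_name TEXT);
CREATE TABLE LU_ITEM     (item_id INT, cat_id INT);
CREATE TABLE FACT_SALES  (item_id INT, revenue REAL);
-- Denormalized: the item lookup repeats the category name.
CREATE TABLE LU_ITEM_DENORM (item_id INT, cat_name TEXT);
INSERT INTO LU_CATEGORY VALUES (1, 'Shoes');
INSERT INTO LU_ITEM     VALUES (10, 1);
INSERT INTO FACT_SALES  VALUES (10, 99.0);
INSERT INTO LU_ITEM_DENORM VALUES (10, 'Shoes');
""")

# Three-table join in the normalized design ...
normalized = con.execute("""
SELECT c.cat_name, SUM(f.revenue) FROM FACT_SALES f
JOIN LU_ITEM i     ON f.item_id = i.item_id
JOIN LU_CATEGORY c ON i.cat_id  = c.cat_id
GROUP BY c.cat_name""").fetchall()

# ... versus a single join in the denormalized design, same result.
denormalized = con.execute("""
SELECT i.cat_name, SUM(f.revenue) FROM FACT_SALES f
JOIN LU_ITEM_DENORM i ON f.item_id = i.item_id
GROUP BY i.cat_name""").fetchall()
```

Both queries return the same answer; the denormalized form simply does less join work, at the cost of the redundancy noted above.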
Building Indexes to Specific Data
A database index is a data structure that improves the speed of data retrieval
operations from a database table. Indexes eliminate the highly expensive
process of full table scans. Indexes can be created using one or more columns
of a database table, providing the basis for both rapid random look ups and
efficient access of ordered records. The disk space required to store the index is
typically less than that required by the table, because indexes usually contain
only the key-fields according to which the table is to be arranged.
In the example illustrated below, an index was built on the CATEGORY column of the PRODUCT fact table. When a query containing a filter on Category runs, the database optimizer uses the index to identify which rows satisfy the query.
Rather than performing a full table scan, the database retrieves only the values requested in the report, resulting in a much faster query.
Building Indexes to Optimize Performance
Note: Indexes have less impact when the number of distinct values in the indexed column is low. When an index is not very selective, the database optimizer may choose not to use it, performing a full table scan instead and defeating the purpose of creating the index in the first place.
As a general rule, indexes should be defined on columns that have many
distinct values and tend to appear in filter conditions for the most used reports
or on those columns that make up the primary or foreign key in large fact
tables. Enterprise Manager can be a useful tool to identify the most used
columns in the WHERE clause of the report SQL.
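The optimizer's choice between a full scan and an index seek can be observed directly. This sketch uses SQLite and its EXPLAIN QUERY PLAN output as a stand-in for the warehouse platform; the PRODUCT table and index name are assumptions for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PRODUCT (item_id INT, category TEXT, qty INT)")
con.executemany("INSERT INTO PRODUCT VALUES (?, ?, ?)",
                [(i, f"CAT{i % 50}", i) for i in range(5000)])

query = "SELECT SUM(qty) FROM PRODUCT WHERE category = 'CAT7'"
# Without an index, the optimizer has no choice but a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

con.execute("CREATE INDEX idx_category ON PRODUCT (category)")
# With a selective index on the filtered column, it seeks directly
# to the matching rows instead.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][-1])  # a SCAN step
print(plan_after[0][-1])   # a SEARCH step using idx_category
```

The same inspection technique (each platform's EXPLAIN facility) is how you would verify on-site that a newly created index is actually being chosen.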
Partitioning Fact Tables into Subset Tables
A partition is a division of a logical database or its elements into distinct
independent parts. Database partitioning is normally done for manageability,
performance, or availability reasons. The MicroStrategy SQL Engine is
partition-aware, that is, it can take advantage of a partitioned data warehouse
and select the smallest possible set of tables when sourcing reports.
Partitioning is typically defined for the time dimension but can be used for
other dimensions as well, especially if attributes on that dimension are used
very frequently to filter data (by Country for example, where users in a
particular country will only look at that country's data).
In the example illustrated below, the ORDER_DETAIL fact table was partitioned based on the month in which the orders were placed. For queries that filter on Month, the SQL Engine picks the correct month-level fact table partition, resulting in smaller table scans and faster database queries.
Partitioning Fact Tables
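Partition-aware sourcing amounts to routing each query to the single subset table its filter needs. This is a minimal sketch of that routing logic, assuming hypothetical per-month ORDER_DETAIL tables in SQLite (the MicroStrategy SQL Engine performs the equivalent selection internally from partition mapping metadata):

```python
import sqlite3

# A toy partition map: one ORDER_DETAIL table per month (names illustrative).
con = sqlite3.connect(":memory:")
partitions = {}
for month in ("2011_01", "2011_02"):
    table = f"ORDER_DETAIL_{month}"
    con.execute(f"CREATE TABLE {table} (item_id INT, amount REAL)")
    partitions[month] = table
con.execute("INSERT INTO ORDER_DETAIL_2011_01 VALUES (1, 10.0), (2, 5.0)")
con.execute("INSERT INTO ORDER_DETAIL_2011_02 VALUES (1, 99.0)")

def monthly_total(month):
    """Route the query to the one partition the Month filter needs,
    instead of scanning every month's data."""
    table = partitions[month]
    return con.execute(f"SELECT SUM(amount) FROM {table}").fetchone()[0]

total = monthly_total("2011_01")
```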
Using Views with Pre-calculated Data Aggregations
Unlike ordinary tables in a relational database, a view is not part of the physical
schema. Instead, it is a dynamic, virtual table computed from data in the
database. This means that any change in the underlying table data alters the
results shown in subsequent invocations of the view.
Views are useful from a performance perspective because they can pre-join tables, simplify complex queries, and act as aggregate tables. Additionally, views can be a useful tool to isolate base tables from report queries.
Materialized Views in Oracle, Aggregate Join Indexes in Teradata, Materialized
Query Tables in IBM DB2 and Indexed Views in Microsoft SQL Server are all
features that were specifically implemented by database vendors to improve
query performance. The image below illustrates this concept.
The data warehouse contains two fact tables, one with 2 billion rows and
another one with 6 billion rows. One or more reports will require data from
both tables to be combined into a single query, which could potentially cause
full table scans on two very large fact tables. For this reason, the report
architects designed a view that returns 200 million rows of data and includes a
pre-aggregated column that is not available on either table:
AVG_CALL_TIME.
Creating Views to Improve Performance
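A small sketch of the idea, using SQLite and an assumed CALL_FACT table: the view exposes a pre-aggregated AVG_CALL_TIME column that no base table carries, and reports query it exactly as they would a physical table (which is why, as noted below, the SQL Engine treats tables and views identically):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CALL_FACT (customer_id INT, call_time REAL);
INSERT INTO CALL_FACT VALUES (1, 2.0), (1, 4.0), (2, 6.0);
-- The view exposes a pre-aggregated column not present in any base table.
CREATE VIEW V_CALL_SUMMARY AS
SELECT customer_id, AVG(call_time) AS AVG_CALL_TIME
FROM CALL_FACT GROUP BY customer_id;
""")
# A report queries the view just like a table.
rows = con.execute(
    "SELECT customer_id, AVG_CALL_TIME FROM V_CALL_SUMMARY ORDER BY customer_id"
).fetchall()
```

Note that a plain view like this one recomputes the aggregation on each query; the materialized variants named above are what actually persist the pre-computed result.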
Note: The MicroStrategy SQL Engine does not differentiate between a database table and a database view during SQL generation.
Creating Attribute Lookup Tables
Building separate Lookup Tables for attributes instead of sourcing them from a
fact table eliminates potentially very large table scans. Using fact tables as
lookup tables does not generally affect performance from an aggregation
perspective. However, it significantly slows down any request that requires a list of attribute elements.
Consider an element list prompt based on the Item attribute, which has a high cardinality. When the user requests to load this prompt, the MicroStrategy
SQL Engine generates the list of items by issuing a SELECT DISTINCT SQL
statement. If a fact table is used as the lookup table for Item, the elements will
be selected from a very large table and the DISTINCT SQL statement can
perform very poorly.
On the other hand, a lookup table specifically defined for the Item attribute will
be considerably smaller in size. A SELECT DISTINCT SQL statement will run
significantly faster against such a table, resulting in better overall performance.
The image below illustrates this example:
Creating Attribute Lookup Tables to Improve Performance
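The contrast can be sketched in SQLite with assumed table names: the element list comes out identical either way, but the lookup route scans 100 rows instead of running SELECT DISTINCT over 10,000 fact rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ORDER_FACT (item TEXT, qty INT)")
con.executemany("INSERT INTO ORDER_FACT VALUES (?, ?)",
                [(f"ITEM{i % 100}", 1) for i in range(10000)])
# A dedicated lookup table holds each attribute element exactly once.
con.execute("CREATE TABLE LU_ITEM (item TEXT)")
con.execute("INSERT INTO LU_ITEM SELECT DISTINCT item FROM ORDER_FACT")

# Element list sourced from the fact table: DISTINCT over 10,000 rows.
from_fact = con.execute("SELECT DISTINCT item FROM ORDER_FACT").fetchall()
# Element list sourced from the lookup table: a plain scan of 100 rows.
from_lookup = con.execute("SELECT item FROM LU_ITEM").fetchall()
```

At warehouse scale the fact table would have billions of rows rather than 10,000, which is where the DISTINCT becomes prohibitively expensive.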
Note: When designing database tables, numeric or date/time data types for ID columns are preferable to non-numeric data types (that is, CHAR, VARCHAR, and so forth). Non-numeric data types tend to be inefficient and slow from an indexing and join-efficiency perspective.
Report and Schema Design Optimizations
After completing this topic, you will be able to:
Apply design techniques to optimize query performance when developing a
report.
The first phase of the data request process in a Business Intelligence environment is report design. By carefully designing the project schema and taking into account important optimization considerations, a report designer can tune the SQL Engine to generate optimal queries. Conversely, a poorly designed schema will generally result in costly, inefficient SQL queries against the database and in poor performance.
Eliminating Unnecessary Table Keys
A table key is a set of attributes that represents the lowest level at which data
will be available on the table. The MicroStrategy SQL Engine always joins
tables on all the common table keys between tables. Therefore, if there are
more attributes in a table key than necessary, unnecessary joins may occur. To
optimize query performance, use one of the following techniques:
Redefining parent-child attribute relationships to eliminate extra keys
Mapping attributes to only specific tables
Deleting unnecessary attributes from the logical model
In the example illustrated below, the designer was able to save one potentially
expensive join on the EMP_ID column simply by un-mapping the Employee
attribute from a specific set of tables.
Eliminating an Unnecessary Table Key
As a schema designer, it is important to verify that all table keys are defined as efficiently as possible and contain only the attributes that truly represent the lowest level of data detail for a given table. This relatively simple optimization results in fewer joins and faster database queries.
Including Filter Conditions In Fact Definitions
In some situations, a SQL query can be optimized by pushing filtering
conditions into fact expressions using CASE statements instead of defining
them at the metric level. When filtering conditions are applied within the
metric definition, the SQL Engine will create separate SQL passes for each
conditional metric. The metric condition is then applied in the WHERE clause
of the SQL with a final assembly pass. On the other hand, if the filter condition is defined directly in the fact definition, the SQL Engine issues a single SQL pass, with the filter in the SELECT clause.
In the example illustrated below, the designer was able to resolve a report in a
single SQL statement when it previously took two temp table creation queries
and one final SQL pass.
Filter Condition in Fact Definition
Note: This technique can also be used when the fact definition expressions are based on attributes that are not present in the fact table. For example: Sum (IF (Gender@ID = "M", Revenue, 0 ) ).
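The two-pass versus single-pass contrast can be sketched in SQLite with a hypothetical SALES table. The conditional-metric style filters in the WHERE clause (one extra pass per condition), while pushing the condition into the fact expression as a CASE lets one pass do the same work:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SALES (gender TEXT, revenue REAL)")
con.executemany("INSERT INTO SALES VALUES (?, ?)",
                [("M", 10.0), ("F", 20.0), ("M", 5.0)])

# Conditional-metric style: the condition lands in the WHERE clause,
# costing a separate pass per conditional metric.
male = con.execute(
    "SELECT SUM(revenue) FROM SALES WHERE gender = 'M'"
).fetchone()[0]

# Filter pushed into the fact expression: a single pass, condition in SELECT.
single_pass = con.execute(
    "SELECT SUM(CASE WHEN gender = 'M' THEN revenue ELSE 0 END) FROM SALES"
).fetchone()[0]
```

The CASE form mirrors the Sum(IF(Gender@ID = "M", Revenue, 0)) fact expression from the note above.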
Substituting Custom Groups with Consolidations
Every custom group element is treated as a filter condition resulting in at least
one additional SQL pass to calculate each element. Thus, custom groups are
expensive from a SQL generation and database query execution perspective.
Some types of custom groups that do segmentation based on filters can be
replaced by custom group metric banding in combination with metrics
specifically designed for this scenario. This optimization can significantly
increase performance because it reduces SQL passes against potentially large
data warehouse tables.
For other cases where the above optimization is not feasible, Consolidations
should be considered as a potential alternative. Consolidations are not SQL
intensive because they are calculated by the Analytical Engine.
From a data warehouse query optimization perspective, substituting Custom
Groups with Consolidations can be beneficial, as shown below:
Custom Group vs. Consolidation
In the example illustrated above, on the left-hand side, when a Custom Group
is used, a separate SQL pass is issued to calculate each Region element:
Northeast, (South, Southeast, Southwest) and Central. When the exact same
filtering criteria are specified as a Consolidation, as depicted on the right-hand
side, the SQL Engine issues only one SQL pass to retrieve all the information
and the remaining calculations are performed by the Analytical Engine.
Designing Summary Metrics from Base Metrics
When two dimensional metrics at different levels of the same hierarchy are part of a report, defining the higher-level metric using the lower-level metric as its base results in smaller table scans and therefore better database query performance.
In the example illustrated by the following image, on the left side, the Daily
Revenue metric is a dimensional metric based on the Revenue fact, sourced
from a very large fact table and with a dimensionality of Day. Adding this
metric to a report as defined will result in a SQL pass that scans the very large
fact table.
The Quarterly Revenue metric is also a dimensional metric based on the
Revenue fact, but with a dimensionality of Quarter. Adding it to the report will
result in a second SQL pass that scans the very large fact table.
On the right side, however, while the Daily Revenue metric definition remains the same, Quarterly Revenue is now defined using Daily Revenue as its base metric. This relatively simple change in definition results in just one very large table scan plus a scan of a small intermediate result table.
Summary Metric from Base Metric
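A minimal SQLite sketch of the right-hand approach, with an assumed DAY_FACT table: the large fact table is scanned once to produce the daily (base) aggregation, and the quarterly metric then rolls up the small intermediate result instead of scanning the fact table a second time:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DAY_FACT (day TEXT, quarter TEXT, revenue REAL)")
con.executemany("INSERT INTO DAY_FACT VALUES (?, ?, ?)",
                [("2011-01-01", "Q1", 10.0), ("2011-01-02", "Q1", 20.0),
                 ("2011-04-01", "Q2", 30.0)])

# One scan of the (large) fact table produces the daily base metric ...
con.execute("""CREATE TEMP TABLE DAILY AS
SELECT day, quarter, SUM(revenue) AS daily_rev
FROM DAY_FACT GROUP BY day, quarter""")

# ... and Quarterly Revenue rolls up the small intermediate table,
# avoiding a second scan of the fact table.
quarterly = con.execute(
    "SELECT quarter, SUM(daily_rev) FROM DAILY GROUP BY quarter ORDER BY quarter"
).fetchall()
```

This works here because SUM is distributive; the same rollup is not valid for non-distributive functions such as a count of distinct values.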
SQL Generation Optimizations
After completing this topic, you will be able to:
Apply tuning techniques to optimize the SQL generated by MicroStrategy.
Although SQL generation depends significantly on report and schema object
definition, there are tuning techniques and recommendations at the SQL
generation level that impact the database query performance. The SQL
generation process can be divided into three functional layers, each of which
can be optimized for query performance:
Logical Query Layer: Translates the report definition into generic SQL
Database Optimization Layer: Translates generic SQL into database platform-specific SQL
Query Optimization Layer: Optimizes query SQL by eliminating redundant SQL passes
Logical Query Layer
The main function of the logical query layer is to translate the report definition
into a generic SQL statement. Three key techniques in this layer can be used to
optimize database query performance, as detailed in the next sections.
Consolidating Multiple Expressions Into a Single Fact
As a general rule, it is recommended to consolidate columns and/or column expressions with the same functional meaning into a single fact definition. This design enables the SQL Engine to select the smallest available table in the schema that supports the fact whenever it is used in a metric definition.
The following image illustrates this concept. In this hypothetical example, a
single fact (Profit) is defined as three different expressions that are mapped to
three different fact tables in the schema.
[TOTAL DOLLAR SALES - TOTAL COST] maps to the DAILY_SALES fact
table with 200MM rows
[ORDER_AMOUNT - ORDER_COST] maps to the ORDER_FACT table
with 2B rows
[QTY_SOLD * (UNIT_PRICE - UNIT_COST)] maps to the ITEM_DETAIL
table with 20MM rows
Based on the above fact definition, when the Profit metric is placed on a report, the MicroStrategy SQL Engine chooses the smallest possible table (ITEM_DETAIL in this case) to retrieve Profit data. If, instead, a separate fact object had been defined for each of the different expressions, the Engine would not be able to pick the optimal table for a given request for Profit data. This type of design inefficiency is commonly observed in MicroStrategy implementations in the field.
Consolidating Multiple Expressions Into a Single Fact
Resolving Metrics from a Single Fact Table
Reducing the number of fact tables needed to complete a data request is another technique that helps optimize database query performance, especially when the query requires full scans of large fact tables.
The following image illustrates this scenario using the same example from the
previous section. On the left side, Profit is only defined as
[TOTAL_DOLLAR_SALES - TOTAL COST] and sourced from the
DAILY_SALES fact table. Unit Price is defined as [UNIT_PRICE] and sourced
from the ITEM_DETAIL fact table.
When both metrics are used in a report, the Engine generates at least two
different SQL passes, each scanning large fact tables to retrieve the data.
Two Fact Tables vs. Single Fact Table
On the right side, in addition to the first formula, Profit was also defined as
[QTY_SOLD * (UNIT_PRICE - UNIT_COST)] and sourced from the
ITEM_DETAIL fact table. In this case, because of the new definition of Profit,
the Engine resolves the report using a single SQL pass against the smaller fact
table, resulting in much better database query performance.
Creating Aggregate Tables to be Used Instead of Large Fact
Tables
The SQL Engine has the capability of choosing the optimal fact table to service a specific report request. By selectively adding aggregate tables to the project's physical schema, a report designer can take advantage of this MicroStrategy SQL Engine capability to increase overall system performance from a database query perspective.
The following recommendations apply when building aggregate tables:
Build them with the higher-level attributes of the Time dimension
Build them based on the most requested, higher-level attributes of the project hierarchies
Build them using distributive functions such as Sum, Max, or Min
Enterprise Manager can be a valuable tool in helping report designers
determine which reports are the most used and take the most time to run due
to long database queries. A further analysis of this group of reports can yield
the attribute levels and metrics that can then be used as a basis to generate
aggregate tables. This type of analysis will ensure that a whole group of
commonly used reports benefit from the newly created aggregates.
A faster and simpler, but potentially less efficient, method to create aggregate tables is the MicroStrategy Datamart feature. For more information on Datamarts and Datamart creation using MicroStrategy, refer to the Advanced Reporting Guide.
Database Optimization Layer
The main function of the database optimization layer is to translate generic SQL into database platform-specific SQL. Six key techniques in this layer, all
involving VLDB properties, can be used to optimize database query
performance. These techniques are detailed in the next sections.
Understanding VLDB Properties
It is important to understand the role that VLDB properties play within the
Database Optimization Layer. VLDB properties enable report designers and
architects to alter the syntax of a SQL statement and to take advantage of
database-specific optimizations. VLDB properties can provide support for
unique configurations and optimize performance in special reporting and
analysis scenarios. They can be accessed through the VLDB Properties Editor
in several ways, depending on the level of MicroStrategy objects the designer
wants to modify with the VLDB property changes.
For example, a VLDB property change can be applied to an entire database instance or only to a single report associated with that database instance. The following table displays the object levels at which VLDB properties can be defined and the steps to access the VLDB Properties Editor for each object:
Object Levels to Define VLDB Properties

Attribute: In the Attribute Editor, on the Tools menu, select VLDB Properties.

Database Instance: In the Database Instance Manager, right-click the database instance for which you want to change VLDB settings and select VLDB Properties. OR: In the Project Configuration Editor, select Database instances, then click VLDB Properties.

Metric: In the Metric Editor, on the Tools menu, point to Advanced Settings, and then select VLDB Properties.

Project: In the Project Configuration Editor, in the Project definition category, select Advanced. In the Analytical Engine VLDB properties area, click Configure.

Report: In the Report Editor or Report Viewer, on the Data menu, select VLDB Properties.

Template: In the Template Editor, on the Data menu, select VLDB Properties.

Transformation: In the Transformation Editor, on the Tools menu, select VLDB Properties. Note: Only the Transformation Role Processing VLDB property is accessible from this editor. All other VLDB properties must be accessed from one of the editors listed in this table.

Using Parameterized Queries for Bulk Data Insertion

For certain operations that require data to be inserted from the Intelligence Server memory into the database, such as multi-source or datamart reports, or iterative analysis involving the Analytical Engine, the MicroStrategy Query Engine by default inserts data row by row. This type of operation is typically very slow at the database level because of the table-level locking and unlocking required for the successive data insertions.
Deploying MicroStrategy High Performance BI Data Warehouse Access 6
2011 MicroStrategy, Inc. SQL Generation Optimizations 229
An alternative approach that significantly improves overall performance is to use bulk data insertion with parameterized statements. This feature, when supported by the database platform, prepares the data to be inserted into the database by splitting it into many equal-sized packages.
The table below, generated from internal tests, shows the comparative performance gains achieved when inserting 50,000 rows of data into different database platforms using bulk versus row-by-row data insertion:

Time to Insert 50K Rows (seconds)
Database Platform    No Parameterization    Parameterized Inserts
DB2                  185                    0.64
NETEZZA              3940                   1.31
ORACLE               107                    0.84
SYBASE               151                    7.26
TERADATA             327                    1.78

Parameterized insertion must be enabled in three places: on the database, on the ODBC driver, and in the MicroStrategy database instance.

Note: The procedure below only covers the steps to enable parameterized insertion in the MicroStrategy database instance and on the ODBC driver. Refer to the specific database platform documentation for steps to enable this feature on the database side.

To enable parameterized insertions:
1 In Desktop, in the Folder List, expand the Administration icon, followed by Configuration Managers.
2 Select Database Instances.
3 In the Object Viewer, right-click the database instance on which you want to enable parameterized queries, and select Edit.
4 In the Database Instances Editor, select the appropriate Database Connection, and click Modify.
5 In the Database Connections Editor, click the Advanced tab.
6 Select the Use parameterized queries check box.
7 Click OK.
Note: After enabling parameterized queries, you need to restart Intelligence Server. The image below shows this setting:
Note: For Oracle and Sybase, you also need to select the Enable SQLDescribeParam check box at the DSN level, as shown below:
Parameterized insertion splits the data to be inserted into many equal-sized packages. You can tune the size of each package by setting the MaximumRowsetSize parameter in the ODBCConfig.ini file, which determines how many KB each package carries. The default value is 32 KB.
Varying this setting can affect insertion performance, as seen in the graph below. The best value should be determined on a case-by-case basis by testing directly in the specific database environment.
Insertion Times vs. Rowset Size
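The row-by-row versus bulk contrast can be seen even with SQLite's `executemany`, which, like a parameterized bulk insert, prepares the INSERT statement once and then streams the bound rows. This is only an illustration of the mechanism, not a reproduction of the internal test numbers above:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TARGET (id INT, val REAL)")
rows = [(i, float(i)) for i in range(50000)]

# Row-by-row: one statement executed per row, like the default behavior.
t0 = time.perf_counter()
for r in rows:
    con.execute("INSERT INTO TARGET VALUES (?, ?)", r)
row_by_row = time.perf_counter() - t0

con.execute("DELETE FROM TARGET")

# Parameterized bulk insert: prepare once, bind all 50,000 rows.
t0 = time.perf_counter()
con.executemany("INSERT INTO TARGET VALUES (?, ?)", rows)
bulk = time.perf_counter() - t0

count = con.execute("SELECT COUNT(*) FROM TARGET").fetchone()[0]
print(f"row-by-row: {row_by_row:.3f}s, bulk: {bulk:.3f}s")
```

Against a networked warehouse the gap is far larger than in this in-process sketch, because each single-row statement also pays a network round trip and the locking overhead described above.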
Note: For more information about parameterized inserts, refer to knowledge base document TN30290: How to optimize performance of database inserts for Multisource and Datamart reporting.
Providing SQL Hints to the Database Optimizer
The SQL Hint VLDB property causes the SQL Engine to provide SQL hints that guide the database optimizer to generate the most optimal query plan. It is mainly used for the Oracle SQL hint pattern. The string is placed after the SELECT keyword in the SELECT statement, and the property can be used to insert any SQL string that makes sense in that position. Hints can guide the optimizer to do the following:
Use less memory
Ignore or use certain indexes
Specify join orders
Control sub-queries
Use materialized views
Force OR conditions and IN clauses into UNION
Control parallel operations
Use Transformation Formula Optimizations
In the example below, a hint was inserted to guide the database optimizer to
use an Index on the ITEM_ID column for a specific query generated by the
SQL Engine.
SQL Hints
Specifying Faster Sub Query Types
The MicroStrategy SQL Engine automatically generates correlated subqueries for certain types of calculations, typically those involving relationship filters, many-to-many attribute evaluations, and set operators. In a correlated subquery, the inner query must be executed for every row of the outer query, often resulting in suboptimal performance. To improve database performance, it is important to select the best-performing subquery type from the available options. The Sub Query Type VLDB property enables report designers to select between different types of subqueries. The most optimal option depends on each database platform's capabilities. MicroStrategy offers seven distinct methods for subqueries, two of which use temporary tables. Not all databases support all seven options.
The image below displays the different sub query options available:
Sub Query Options Available in MicroStrategy
Internal tests performed against Oracle comparing the seven different methods
are presented in the table below. Results are often environment-specific
rather than database-platform-specific, so for best results, it is recommended
to conduct similar tests on-site.
The following recommendations will generally improve sub query
performance:
Using an IN clause instead of an EXISTS clause, whenever applicable
Using temporary tables for reports that use several relationship filters
For large fact tables, creating temporary tables and then running an EXISTS
statement against the smaller intermediate table
Tests Comparing Different Sub Query Types

Sub Query Type   Execution    Test Data (Run)
(VLDB Value)     Time (Avg)   1st      2nd      3rd      4th      5th
1                2:13.3       2:14.4   2:10.6   2:13.3   2:54.4   2:14.8
2                2:34.3       2:41.2   2:51.7   2:11.1   2:52.2   2:15.1
3                2:34.0       2:46.5   2:38.4   2:38.6   2:22.0   2:24.4
4                2:13.5       3:27.0   2:17.9   2:13.1   2:11.3   2:11.5
5                3:36.6       3:20.6   3:30.6   3:57.7   3:57.7   3:16.3
6                2:22.8       2:12.6   2:14.1   2:15.3   3:48.5   2:49.0
7                4:00.2       3:20.0   4:38.7   4:25.4   4:12.8   3:24.3
The image below illustrates an example in which the EXISTS clause has been
replaced by an IN clause using the Sub Query Type VLDB setting, resulting in
better performance:
Subquery Optimization using VLDB Settings
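The rewrite can be reproduced on any database. The sketch below uses an invented schema and sample data, run against SQLite for convenience, to show the same filter expressed as a correlated EXISTS and as an IN subquery returning identical rows.

```python
import sqlite3

# Minimal sketch: the same filter expressed as a correlated EXISTS
# and as an IN subquery. Tables, columns, and data are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lu_item (item_id INTEGER, category_id INTEGER);
CREATE TABLE fact_sales (item_id INTEGER, revenue REAL);
INSERT INTO lu_item VALUES (1, 10), (2, 10), (3, 20);
INSERT INTO fact_sales VALUES (1, 100.0), (2, 50.0), (3, 75.0);
""")

# Correlated EXISTS: the inner query runs logically once per outer row.
exists_sql = """
SELECT f.item_id FROM fact_sales f
WHERE EXISTS (SELECT 1 FROM lu_item l
              WHERE l.item_id = f.item_id AND l.category_id = 10)
ORDER BY f.item_id"""

# IN subquery: the inner query can be evaluated once up front.
in_sql = """
SELECT f.item_id FROM fact_sales f
WHERE f.item_id IN (SELECT l.item_id FROM lu_item l WHERE l.category_id = 10)
ORDER BY f.item_id"""

print(con.execute(exists_sql).fetchall() == con.execute(in_sql).fetchall())  # True
```

Whether the IN form actually executes faster depends on the database optimizer, which is why the VLDB property exposes the choice rather than mandating one form.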
Specifying Faster Set Operators
The Set Operator Optimization VLDB property, located under the Query
Optimization folder, determines whether to use set operators, such as UNION,
EXCEPT, and INTERSECT, to combine multiple filter qualifications rather than
their equivalent logical operators, such as AND NOT and AND. Some databases
evaluate set operators more efficiently than the equivalent logical operators in
SQL.
It is not necessary to specify set operators explicitly in the filter editor. In fact,
there is no change in the filter editor itself. The relationships between
qualifications are specified in terms of logical operators. When set operators
apply, a logical operator in the filter definition is automatically translated into
the corresponding set operator, as shown by the table below:
Logical Operator and Set Operator Equivalency

Logical Operator   Set Operator
AND                INTERSECT
OR                 UNION
AND NOT            EXCEPT or MINUS
OR NOT             (no set operator equivalent)
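The equivalences in the table can be demonstrated directly in SQL. The sketch below, with an invented schema and sample data run against SQLite, shows a filter written with logical operators and its INTERSECT counterpart producing the same result.

```python
import sqlite3

# Sketch of the logical-to-set-operator translation: AND -> INTERSECT
# (UNION and EXCEPT work analogously). Schema and data are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, category TEXT);
INSERT INTO sales VALUES
 ('Northeast','Books'), ('Northeast','Music'),
 ('Central','Books'), ('South','Music');
""")

# Regions selling Books AND Music, written with logical operators...
logical = con.execute("""
SELECT DISTINCT region FROM sales WHERE category='Books'
  AND region IN (SELECT region FROM sales WHERE category='Music')
ORDER BY region""").fetchall()

# ...and with the equivalent set operator.
set_based = con.execute("""
SELECT region FROM sales WHERE category='Books'
INTERSECT
SELECT region FROM sales WHERE category='Music'
ORDER BY region""").fetchall()

print(logical == set_based)  # True
```

Both forms return the same set; which one the database evaluates faster is platform-specific, which is what the VLDB property lets you control.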
Set operators may enhance query performance when used to combine sets of
data within a larger query, most typically when there are multiple SELECT
clauses within the WHERE clause of a given query. The MicroStrategy SQL
Engine will usually generate subqueries when the following types of filter
qualifications are used:
Relationship filters
Metric qualifications when combined with other types of set qualifications
with the logical operators AND, NOT, or OR
Report-as-filter qualifications when combined with the logical operators
AND, NOT, or OR
Internal tests performed against Oracle showed much better performance with
Set Operator Optimization enabled, as demonstrated below:
Specifying Faster Intermediate Table Types
The use of temporary tables underpins most of the SQL Engine's multi-pass
SQL statements. Therefore, it is important to ensure that the best possible
intermediate table type for a given database platform is selected.
The Intermediate Table Type property is a VLDB property that specifies what
kind of intermediate tables should be used to generate SQL for a given report.
This property can have a major impact on report execution performance. When
defining this setting, consider the following:
Permanent tables are usually less optimal
Derived tables and common table expressions usually perform well, but
they do not work in all cases and for all databases
True temporary tables usually perform well, but not all databases support
them
Internal Tests Performed Against Oracle (seconds)

TPCH on Oracle          1st Run   2nd Run   3rd Run   4th Run   5th Run   Avg
Set Operator Enabled    26.908    26.673    26.689    26.643    26.939    26.770
Set Operator Disabled   39.737    39.455    39.673    39.937    44.705    40.701
The default setting in MicroStrategy is permanent tables, because they work
for all databases in all situations. However, depending on the database type,
this setting can be changed, and it can be further tuned on a report-by-report
basis. In some databases, for reports with a very large number of SQL passes,
some intermediate table types will perform better than the default type.
In the example below, where the database type is DB2, a common table
expression is used as an intermediate table type, instead of a permanent table.
Intermediate Table Type
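The trade-off between intermediate table types can be sketched for a simple two-pass query. In the example below (invented schema, run against SQLite), the first pass is materialized once as a true temporary table and once inlined as a derived table; both yield the same result.

```python
import sqlite3

# Sketch comparing two intermediate table types for a two-pass query:
# a true temporary table versus a derived table. Names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_sales (region TEXT, revenue REAL);
INSERT INTO fact_sales VALUES ('East', 100), ('East', 50), ('West', 30);
""")

# Pass 1 materialized as a true temporary table, then read in pass 2.
con.executescript("""
CREATE TEMPORARY TABLE tmp_rev AS
  SELECT region, SUM(revenue) AS rev FROM fact_sales GROUP BY region;
""")
via_temp = con.execute(
    "SELECT region, rev FROM tmp_rev WHERE rev > 40 ORDER BY region").fetchall()

# Same logic with a derived table: no DDL, the first pass is inlined.
via_derived = con.execute("""
SELECT region, rev FROM
  (SELECT region, SUM(revenue) AS rev FROM fact_sales GROUP BY region)
WHERE rev > 40 ORDER BY region""").fetchall()

print(via_temp == via_derived)  # True
```

The derived-table form avoids DDL and cleanup entirely, which is why it often performs well, while the temporary-table form lets the database reuse the materialized pass.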
Specifying a Faster Report Data Population Method
When a report executes, Intelligence Server stores the report results data in
memory using a highly normalized format to improve performance, save
memory, and support OLAP operations.
For example, a report with Region, Category, Subcategory, and Revenue in a
denormalized format contains several redundant data cells. Its normalized
counterpart, which is the format that Intelligence Server would use to store the
report's data in memory, is much more efficient: it uses one set of tables under
the Product dimension, one fact table, and only one table under the Geography
dimension.
This example is illustrated below:
Normalized Storage
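A minimal sketch of the two storage formats, with invented sample data, shows how the normalized form keeps each element's text once and codes the fact rows with compact IDs.

```python
# Sketch of normalized versus denormalized in-memory storage for a
# report with Region, Category, and Revenue. Sample data is invented.

# Denormalized: every row repeats the full text of each element.
denormalized = [
    ("Northeast", "Books", 100.0),
    ("Northeast", "Music", 80.0),
    ("Central",   "Books", 60.0),
]

# Normalized: lookup tables hold each element once; the fact rows
# carry only compact IDs, the format Intelligence Server favors.
regions = {1: "Northeast", 2: "Central"}
categories = {10: "Books", 11: "Music"}
fact = [(1, 10, 100.0), (1, 11, 80.0), (2, 10, 60.0)]

# The denormalized view can always be reconstructed from the pieces.
rebuilt = [(regions[r], categories[c], v) for r, c, v in fact]
print(rebuilt == denormalized)  # True
```

With many rows and long element descriptions, the savings from storing each description once grow quickly, which is why the normalized format is the in-memory default.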
Given that normalization is a mandatory final step for all report executions,
choosing the proper normalization method will have an impact on
performance. The Data population for reports VLDB property enables
designers to determine how the data for a report is normalized after execution.
Guidelines per Normalization Method

Normalization Method   Guidelines
No Normalization       Not recommended. The entire dataset returned from the
                       warehouse is passed through Intelligence Server to the
                       Analytical Engine, where it must still be transformed
                       into a normalized format before it can be rendered as
                       a report.
Intelligence Server    Not recommended if Intelligence Server memory is a
Normalization          limitation. Otherwise, this option may perform better
                       than Database Normalization, especially if the database
                       is not optimized, is generally slow, or is under heavy
                       stress.
Database               Recommended when Intelligence Server memory is limited
Normalization          or when large datasets would need to be returned and
                       processed within Intelligence Server.
The relative performance of each normalization method is highly dependent on
the environment (for example, report characteristics, data sizes, attribute
cardinalities, memory availability on Intelligence Server). Therefore, even
though some high-level guidelines are offered in this section, the
recommended approach is to conduct tests on-site to determine the method
that best fits each client environment.
Optimizing Transformation Formulas
The Transformation Formula Optimization VLDB property enables report
designers to improve the performance of expression-based transformations.
Performance can be improved for reports that include expression-based
transformations and meet the following requirements:
The attribute that is transformed must only have ID as its display form
(that is, no DESC as display form)
The transformation must be expression-based, using the ID in the
expression, and contain only "+" and "-" operators
The report template and filter can have the attribute that is transformed but
cannot have any other attributes of the same dimension as the
transformation attribute
Enabling this property can improve performance of expression-based
transformations because it eliminates the requirement to join with the
transformation table. If the transformation is included on a report that cannot
support this optimization because one or more of the conditions discussed
above are not met, then a join with the transformation table will automatically
be used. Internal tests conducted in Oracle showed performance
improvements between 10% and 20% in report execution times with this
setting.
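The effect of the optimization can be sketched for a simple last-month transformation. In the example below (invented schema, run against SQLite), the join through a transformation table and the pure ID-arithmetic expression return the same rows, but the optimized form needs no join.

```python
import sqlite3

# Sketch of an expression-based transformation on an ID: last month's
# revenue computed through a transformation-table join versus the
# optimized form that simply uses month_id arithmetic. Schema is invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE month_sls (month_id INTEGER, revenue REAL);
CREATE TABLE lm_transform (month_id INTEGER, lm_month_id INTEGER);
INSERT INTO month_sls VALUES (1, 100), (2, 120), (3, 90);
INSERT INTO lm_transform VALUES (2, 1), (3, 2);
""")

# Unoptimized: join through the transformation table to find, for each
# report month, the revenue of the prior month.
joined = con.execute("""
SELECT t.month_id, s.revenue FROM lm_transform t
JOIN month_sls s ON s.month_id = t.lm_month_id
ORDER BY t.month_id""").fetchall()

# Optimized: the '+ -' expression on the ID replaces the join entirely.
expr = con.execute("""
SELECT month_id + 1, revenue FROM month_sls
WHERE month_id + 1 <= 3 ORDER BY month_id + 1""").fetchall()

print(joined == expr)  # True
```

Eliminating the join is exactly where the measured 10-20% improvement comes from; when the report violates one of the conditions above, the engine silently falls back to the join form.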
Query Optimization Layer
The main function of the query optimization layer is to optimize the report
query SQL by eliminating redundant SQL passes. There are five optimization
levels on this layer, all exposed through one VLDB setting.
The table below provides high level details on each one of the levels:
The table below presents a sample of the performance improvements that have
been achieved in internal tests using the Query Optimization Layer:
The following sections provide four different scenarios that benefit from using
one of the four levels of global query optimization.
Query Optimization Levels

Level     Level Description                Notes
Level 0   No Optimization                  Queries are not optimized.
Level 1   Remove Unused and Duplicate      Redundant, identical, and equivalent SQL passes
          SQL Passes                       are removed from queries during SQL generation.
Level 2   Merge SQL Passes Accessing       Level 1 optimization takes place, and SQL passes
          the Same Fact Table              from different SELECT statements are consolidated
                                           when it is appropriate to do so.
Level 3   Push Metric Filter Conditions    Level 2 optimization takes place, and SQL passes
          into Fact Expressions            that access database tables with different WHERE
                                           clauses are consolidated when it is appropriate
                                           to do so.
Level 4   Merge Intermediate Temporary     This is the default level. Level 2 optimization
          Table SQL Passes                 takes place, and all SQL passes with different
                                           WHERE clauses are consolidated when it is
                                           appropriate to do so. While Level 3 only
                                           consolidates SQL statements that access database
                                           tables, this option also considers SQL statements
                                           that access temporary tables, derived tables, and
                                           common table expressions.
Performance Improvement with Query Optimization

           Execution Time Reduction            SQL Pass Reduction
           Percent      Before/After           Percent      Passes Before/After
Report 1   67%          Before: 24 sec         65%          Before: 78
                        After: 8 sec                        After: 27
Report 2   60%          Before: 20 min         93%          Before: 60
                        After: 8 min                        After: 4
Report 3   56%          Before: 9 sec          55%          Before: 11
                        After: 4 sec                        After: 5
Removing Unused and Duplicate SQL Passes
Optimization Level 1 can be used to remove duplicate SQL passes from the
query. The image below illustrates this concept. The filter Categories with Sales
> 50K is included both as a metric condition and as a report filter. This
design results in a duplicated SQL pass, as shown on the left-hand side.
Eliminating Duplicate SQL Passes
Without the Query Optimization Layer, no amount of manual tuning could
remove this duplicate pass. With optimization Level 1 enabled, the SQL Engine
automatically identifies and eliminates the duplicate SQL pass.
Merging SQL Passes that Access the Same Fact Table
Optimization Level 2 merges SQL passes with different selections from the
same fact table into a single SQL pass. In the following example, the report
contains two metrics, the Count metric, sourced from the ITEM_MTH_SLS
table, and the Revenue metric, also coming from the ITEM_MTH_SLS table.
Without optimization, the MicroStrategy SQL Engine generates a separate SQL
pass to compute each metric. These two SQL passes differ only in the SELECT
clause; the WHERE, FROM, and GROUP BY clauses are identical. In this case,
enabling optimization Level 2 improves performance by merging both SQL
passes into a single SQL pass.
This scenario is illustrated below:
Merging SQL Passes
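The merge can be sketched with the ITEM_MTH_SLS example; sample data is invented and the queries run against SQLite for convenience. The two single-metric passes and the merged pass produce the same values.

```python
import sqlite3

# Sketch of Level 2 optimization: two passes that differ only in the
# SELECT list are merged into one pass over ITEM_MTH_SLS. Data invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ITEM_MTH_SLS (item_id INTEGER, qty INTEGER, revenue REAL);
INSERT INTO ITEM_MTH_SLS VALUES (1, 2, 20.0), (1, 3, 30.0), (2, 1, 5.0);
""")

# Unmerged: one pass per metric, identical FROM/GROUP BY clauses.
counts = dict(con.execute(
    "SELECT item_id, COUNT(*) FROM ITEM_MTH_SLS GROUP BY item_id"))
revenue = dict(con.execute(
    "SELECT item_id, SUM(revenue) FROM ITEM_MTH_SLS GROUP BY item_id"))

# Merged: a single pass computes both metrics at once.
merged = con.execute("""
SELECT item_id, COUNT(*), SUM(revenue)
FROM ITEM_MTH_SLS GROUP BY item_id ORDER BY item_id""").fetchall()

# The merged pass returns exactly what the two separate passes did.
same = [(i, counts[i], revenue[i]) for i, _, _ in merged] == merged
print(same)  # True
```

Scanning and grouping the fact table once instead of twice is where the saving comes from, and it grows with the size of the fact table.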
Pushing Metric Filter Conditions into Fact Expressions
Optimization Level 3 automatically pushes filter conditions into fact
expressions. In the example below, the report contains three metrics, each with
a separate filter condition, but all defined using the same attribute. The
performance optimization is displayed on the right-hand side of the image.
Resolving Filter Conditions in a Single SQL Pass
The SQL Engine builds a separate SQL pass for each metric and applies the
filter by joining the fact table to the Region lookup table, as seen on the
left-hand side of the image. By enabling optimization level 3, the SQL Engine
automatically generates a single SQL pass that pushes the filtering condition
into the SELECT clause with a CASE or an IF statement.
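A sketch of the rewrite, using an invented schema and sample data against SQLite: three filtered passes versus one pass with the filters pushed into CASE expressions.

```python
import sqlite3

# Sketch of Level 3 optimization: three conditional metrics resolved in
# one pass by pushing each filter into a CASE expression. Data invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE region_sls (region TEXT, revenue REAL);
INSERT INTO region_sls VALUES
 ('Northeast', 100), ('Central', 60), ('South', 40), ('Northeast', 20);
""")

# Unoptimized: one filtered pass per metric.
per_pass = [con.execute(
    "SELECT SUM(revenue) FROM region_sls WHERE region = ?", (r,)
).fetchone()[0] for r in ("Northeast", "Central", "South")]

# Optimized: a single pass with the filters as CASE expressions;
# rows that fail a condition contribute NULL, which SUM ignores.
single = list(con.execute("""
SELECT SUM(CASE WHEN region='Northeast' THEN revenue END),
       SUM(CASE WHEN region='Central'   THEN revenue END),
       SUM(CASE WHEN region='South'     THEN revenue END)
FROM region_sls""").fetchone())

print(per_pass == single)  # True
```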
Merging Intermediate Temporary Table SQL Passes
Optimization Level 4 merges intermediate temporary table passes. In the
example below, a report is created with another report as a filter. The SQL
Engine computes this filter using two separate SQL passes and stores
intermediate data into two tables:
Northeast region data, stored in the ZZMD01 table
Central region data, stored in the ZZMD02 table.
When optimization level 4 is enabled, the two report SQL passes are merged
into a single pass that stores data in just one temporary table. This approach
optimizes performance by reducing SQL passes.
Merging Intermediate Temp Table SQL Passes
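The consolidation can be sketched as follows, using an invented schema, ZZMD-style temporary table names, and SQLite for convenience: two per-filter temporary tables versus one temporary table filled by a single combined pass.

```python
import sqlite3

# Sketch of Level 4 optimization: two intermediate passes, each feeding
# its own temporary table (ZZMD01/ZZMD02 style), merged into one pass
# that fills a single temporary table. Data and names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sls (region TEXT, revenue REAL);
INSERT INTO sls VALUES ('Northeast', 100), ('Central', 60), ('South', 40);

-- Unmerged: one temp table per filter pass.
CREATE TEMP TABLE ZZMD01 AS SELECT * FROM sls WHERE region='Northeast';
CREATE TEMP TABLE ZZMD02 AS SELECT * FROM sls WHERE region='Central';

-- Merged: one pass, one temp table, filters combined in the WHERE clause.
CREATE TEMP TABLE ZZMD03 AS
  SELECT * FROM sls WHERE region IN ('Northeast', 'Central');
""")

separate = con.execute(
    "SELECT * FROM ZZMD01 UNION ALL SELECT * FROM ZZMD02 ORDER BY region"
).fetchall()
merged = con.execute("SELECT * FROM ZZMD03 ORDER BY region").fetchall()
print(separate == merged)  # True
```

One combined pass means one table creation, one insert stream, and one cleanup instead of two of each.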
Other Query Performance Optimizations
After completing this topic, you will be able to:
Understand query optimization techniques related to the Multi Source Option
and ODBC.
Multi Source
The Multi Source Option allows BI architects to define a single MicroStrategy
metadata to retrieve data from multiple relational data sources. With Multi
Source, a single report can connect and run SQL against multiple databases.
Accessing data from multiple sources involves additional overhead in terms of
data movement. In general, a given query or report from a single source will
perform faster than a similar report involving multiple sources.
Data Movement for Multi Source
The amount of data movement required to resolve a query tends to be the
major factor influencing Multi Source performance. In turn, using Multi
Source tends to impact the following areas:
Report execution time: Intelligence Server moves data by issuing
multi-pass SQL against the multiple sources and creating temporary tables
on the database where all the data is to be joined. This requires additional
time for data movement across the network and large numbers of database
inserts.
Number of database connections to resolve a query: By definition,
Intelligence Server needs to open more database connections when
executing a multi-source report than when executing a single-source report.
Database connections are handled by Intelligence Server, and each
connection requires additional memory and CPU resources.
Database Resource Utilization: Inserting large amounts of data
translates into high database utilization. The impact on database
resources is even greater at high concurrency levels with limited report
caching.
Network Resource Utilization: Multi-source reports use internal network
bandwidth to move data across the different sources.
The following sub-sections discuss different recommendations to optimize
performance of Multi Source reports.
Minimizing the Number of Database Instances
A new database instance must be created for each data source used for data
retrieval. Each additional database instance requires additional resources
within Intelligence Server. Intelligence Server maintains a pool of threads for
each database instance and each thread can use up to 15 MB of memory when
idle and a maximum of 100 MB when in use.
Avoiding Read-Only Sources
Read-only sources limit flexibility because the engine is forced to create all
required temporary tables on a different database and, in some cases, move
entire tables from the read-only source to the data warehouse. Internal tests
show a considerable performance penalty when using read-only sources as
opposed to read-and-write sources, due to increased data movement. For
performance optimization purposes, avoid using read-only sources for
Multi Source whenever possible.
Duplicating Lookup Tables
With Multi Source, where lookup tables can be sourced from multiple
databases, the architect must define a primary database instance and one or
more secondary database instances for that lookup table. Environments where
the lookup tables are duplicated across the used sources are generally faster
than those where lookup tables are available in only one source. With
duplicated lookup tables, simple queries may be resolved entirely within a
single database, which reduces data movement and response time.
In a Multi Source environment, it is possible to have aggregate tables located
on different databases for enhanced performance. The SQL Engine will be able
to pick the best table to resolve a query, regardless of its location. In such a
case, it is recommended to maintain a copy of the lookup tables on the same
database as the aggregate tables to reduce data movement for simple or
aggregated queries.
The image below shows results for two sample reports, R1 and R2, executed
against three different configurations. The first system has a single database
instance. The second one has a data warehouse and a datamart with copies of
the lookup tables. The third system consists of a data warehouse and a
read-only database. As expected, reports run faster on the single source system
and slower on the system with the read-only source. Results also show the
amount of data movement required for each case.
Table Distribution Impact on Report Response Time
Using Caches or Cubes in High Concurrency Scenarios
High user concurrency can significantly increase the amount of data movement
between the databases, driving up network utilization and database disk I/O
and degrading performance. Caching Multi Source reports at off-peak hours can
significantly enhance performance, especially for heavily used reports.
Administrators may also consider using cubes in cases where caching is
limited, such as projects with numerous security filters, user security
profiles, and prompted reports. This approach shifts heavy I/O operations and
data movement to off-peak hours.
Tuning the Data Sources
The databases where data inserts and joins take place experience the
biggest impact from using the Multi Source option. If the data source is the
bottleneck, adding more resources such as processors, RAM, or disk space to
Intelligence Server is not likely to improve performance. Instead,
architects should consider approaches to tuning the data source, such as
optimizing the data model, caching connections, and tuning database
connection parameters for more efficient data insert operations.
ODBC
In a MicroStrategy environment, Intelligence Server uses ODBC to connect
with the data warehouses and the metadata. ODBC accesses databases by
inserting a middle layer, called a database driver. ODBC drivers translate
Intelligence Server requests into commands that the database understands.
ODBC implementations run on many operating systems, including Microsoft
Windows, UNIX, Linux, and so on. Hundreds of ODBC drivers exist, including
drivers for enterprise DBMS such as Oracle, DB2, Microsoft SQL Server,
Sybase, and so forth.
MicroStrategy installs the ODBC drivers as part of the normal installation
process. The ODBC drivers are installed with default settings, which are tested
to perform well according to the database type they serve. However, certain
scenarios may justify tuning some of these settings to optimize performance of
communication with data sources. These scenarios are covered in this section.
Using Extended Fetch
Data may be retrieved from the data warehouse through ODBC by using either
SQLFetch or SQLExtendedFetch calls. The default mode in Intelligence Server
is SQLFetch. SQLFetch returns data one row at a time while
SQLExtendedFetch returns multiple rows at a time.
When result sets are large, SQLExtendedFetch can improve performance by
retrieving query results faster. However, not all ODBC drivers support
SQLExtendedFetch.
The following table shows the results of an internal test comparing the two data
fetch methods. For small result sets, SQLExtendedFetch can actually be
slightly slower than SQLFetch, but for large result sets it is clearly faster
and the better option from a performance perspective.
To enable SQLExtendedFetch in Intelligence Server:
1 In Desktop, in the Folder List, expand the Administration icon, followed by
Configuration Managers.
2 Select Database Instances.
3 In Object Viewer, right-click the database instance on which to enable
extended fetch, and select Edit.
4 In the Database Instances Editor, select the appropriate database connection,
and click Modify.
5 In the Database Connections Editor, click the Advanced tab.
6 Select the Use extended fetch check box.
7 Click OK.
Test Results Comparing Data Fetch Methods

Data Extraction Time      Number of Rows (Thousands)
(seconds)                 50     100    250    500    1500    2500
SQLFetch                  0.34   0.62   1.56   3.29   13.67   21.18
SQLExtendedFetch          0.34   0.65   1.64   3.25    9.92   16.51
After enabling extended fetch, you need to restart Intelligence Server. The
image below shows the extended fetch setting:
Use Extended Fetch Setting
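The trade-off between the two fetch modes can be sketched with Python's DB-API, where fetchone and fetchmany play the roles of SQLFetch and SQLExtendedFetch; SQLite is used purely for convenience.

```python
import sqlite3

# Sketch of row-at-a-time versus block retrieval, the trade-off behind
# SQLFetch and SQLExtendedFetch, using the generic DB-API calls.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# SQLFetch analog: one row per call -> one call per row.
cur = con.execute("SELECT n FROM t")
one_at_a_time = []
while (row := cur.fetchone()) is not None:
    one_at_a_time.append(row)

# SQLExtendedFetch analog: many rows per call -> far fewer calls.
cur = con.execute("SELECT n FROM t")
calls, batched = 0, []
while chunk := cur.fetchmany(250):
    calls += 1
    batched.extend(chunk)

print(batched == one_at_a_time, calls)  # True 4
```

With a real ODBC driver, each call can mean a network round trip, so collapsing 1,000 calls into 4 is where the large-result-set gains in the table come from.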
Using Multi-Process vs. Multi-Threaded
Multi-process and multi-threaded are the two options available when defining
the database driver mode.
If multi-process is selected, each connection to the data warehouse is
spawned as a separate process. If one process fails, such as when a database
access thread hangs or is lost, other processes are not affected.
If multi-threaded is selected, all connections to the data warehouse are
maintained inside one Intelligence Server process. All connections, SQL
submissions, and data retrievals from the database are handled within this
process.
Multi-process mode is considered more stable and robust, while
multi-threaded mode is considered more efficient in terms of resource
consumption but less stable.
Based on internal testing, the recommendation is to set database drivers to
multi-process mode. The robustness and stability which come with
multi-process mode greatly overshadow any increased efficiency that may
come with multi-threaded mode. Problems that appear randomly in
multi-threaded operation can often be resolved by switching to multi-process
mode.
Understanding Oracle ARRAYSIZE
ARRAYSIZE specifies the size, in bytes, of the buffer used when returning a
set of rows to the application. Because it determines how many rows fit in
each fetch, it directly impacts the number of round trips required to satisfy
a request for data. ARRAYSIZE is a setting in the Oracle ODBC driver, and its
value can be an integer from 1 to 4,294,967,296 (4 GB). The default value is
60,000. The value 1 is special: rather than defining a number of bytes, it
causes the driver to allocate space for exactly one row of data.
Larger values increase throughput by reducing the number of times the driver
fetches data across the network when retrieving multiple rows. Smaller values
improve initial response time, as there is less of a delay waiting for the
server to transmit data, but they increase the number of round trips between
server and database.
Internal tests show that setting the array size close to the size of the data to be
transferred can improve performance by 20%, at the expense of increasing
memory usage. The table below displays the test results:
Test Results for Different Arraysizes

Row Count                                1500    2500    3500
Estimated Data Size                      12 MB   20 MB   28 MB
Fetching time using SQLFetch (seconds)
  Arraysize = 60 KB                      5.37    8.46    13.03
  Arraysize = 12 MB                      4.29    7.09    11.37
  Arraysize = 256 MB                     4.39    7.26    11.75
Fetching time using SQLExtendedFetch (seconds)
  Arraysize = 60 KB                      5.03    8.64    14.37
  Arraysize = 12 MB                      4.15    6.92    11.28
  Arraysize = 256 MB                     4.48    7.09    11.50
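The round-trip arithmetic behind these results can be sketched with Python's DB-API, where cursor.arraysize (here counted in rows rather than bytes) plays the role of the ODBC ARRAYSIZE setting.

```python
import sqlite3

# Sketch of how a larger array size cuts round trips: with DB-API,
# cursor.arraysize (in rows, unlike the byte-based ODBC setting)
# controls how many rows each fetchmany() call returns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1500)])

def fetch_all_counting_calls(arraysize):
    cur = con.execute("SELECT n FROM t")
    cur.arraysize = arraysize  # rows returned per fetchmany() call
    calls = 0
    while cur.fetchmany():
        calls += 1
    return calls

small, large = fetch_all_counting_calls(50), fetch_all_counting_calls(500)
print(small, large)  # 30 3
```

Ten times the buffer means a tenth of the calls; with a real driver, each avoided call is an avoided network round trip, at the cost of a larger memory buffer.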
Lesson Summary
In this lesson, you learned:
In a typical report request, the time spent on the database tends to average
around 80% of the total report response time.
You can optimize a SQL query by applying techniques to the following
phases of data request: Report/Schema Design, SQL Generation, and
Architecture.
The order in which optimizations are done should ideally start with lower
layers, such as data architecture and database parameters, and
subsequently move to schema design, and report design.
Because they reduce the number of table joins needed to retrieve relevant
data, denormalized data warehouse schemas provide better performance
when using MicroStrategy.
Indexing is one of the most widely used data warehouse techniques for
improving query performance, because it eliminates the highly expensive
process of full table scans.
The MicroStrategy SQL Engine is partition-aware and can take advantage
of a partitioned data warehouse and select the smallest possible set of tables
when sourcing reports.
Views are useful from a performance perspective because they can pre-join
tables, simplify complex queries, and act as aggregate tables.
Building separate lookup tables for attributes which would otherwise be
sourced from a fact table eliminates potentially very large table scans.
By carefully designing the project schema and by taking into account
important optimization considerations, you can tune the SQL Engine to
generate optimal queries.
To optimize query performance, unnecessary joins can be eliminated by
using one of the following techniques: redefining parent-child attribute
relationships, mapping attributes to specific tables, and deleting
unnecessary attributes.
Sometimes, a SQL query can be optimized by pushing filtering conditions
into fact expressions using CASE statements instead of defining them at the
metric level.
From a query optimization perspective, replacing Custom Groups with
Consolidations can be beneficial.
For cases when two dimensional metrics at different levels of the same
hierarchy are requested, defining the higher level metric using the lower
level metric as base will result in better database query performance.
The SQL generation process can be divided into three functional layers,
each of which can be optimized for query performance: Logical Query
Layer, Database Optimization Layer, and Query Optimization Layer.
As a general rule, it is recommended to consolidate columns and column
expressions with the same functional meaning into a single fact definition.
Reducing the number of fact tables needed to complete a data request is
another technique that will help optimize database query performance.
Aggregate tables can be a very useful performance improvement technique.
VLDB properties enable report designers and architects to alter the syntax
of a SQL statement and to take advantage of database-specific
optimizations.
Parameterized statements, when supported by the database, prepare the
data to be inserted into the database by splitting it into many equal-sized
packages.
The SQL Hint VLDB property causes the SQL Engine to provide SQL hints
that guide the database optimizer to generate an optimal query plan.
The Sub Query Type VLDB property enables report designers to select
between different types of subqueries. The optimal option generally
depends on each database platform's capabilities.
Some databases evaluate set operators more efficiently than the equivalent
logical operators in SQL.
It is important from a database performance perspective to ensure that the
best possible intermediate table type and the fastest report data population
method for a given database platform is selected.
The Transformation Formula Optimization VLDB property enables report
designers to improve the performance of expression-based
transformations.
The main function of the query optimization layer is to optimize the report
query SQL by eliminating redundant SQL passes.
The amount of data movement required to resolve a query tends to be the
major factor influencing Multi Source performance.
When result sets are large, SQLExtendedFetch can improve performance by
retrieving query results faster.
Based on internal testing, the recommendation is to set database drivers to
multi-process mode.
Setting the array size close to the size of the data to be transferred can
improve performance by 20%, at the expense of increasing memory usage.
7
PERFORMANCE TESTING
METHODOLOGY
Lesson Description
This lesson discusses performance testing in a BI environment. Questions
about the performance of the system need to be answered through various
types of performance tests. Unlike straightforward feature testing, which
verifies the functional correctness of the product, performance testing
requires special design and is more complex to execute and analyze.
At the end of this session, you will be intimately familiar with the methods and
techniques we use internally to test, troubleshoot, and optimize
performance-related challenges. You will be able to apply these same methods
and techniques to improve performance within your own BI deployment.
Lesson Objectives
After completing this lesson, you will be able to:
Understand performance testing, so you can choose the right type of tests for
specific performance requirements, and correctly design, implement, execute,
and analyze performance tests.
After completing the topics in this lesson, you will be able to:
Understand the main considerations for performance testing. (Page 255)
Understand the performance testing methodology and apply it to your
environment needs. (Page 260)
Introduction to Performance Testing
After completing this topic, you will be able to:
Understand the main considerations for performance testing.
In software engineering, performance testing is testing performed to
determine how fast some aspect of a system performs under a particular
workload. It can also serve to validate and verify other quality attributes
of the system, such as scalability, reliability, and resource usage.
The Performance Testing Methodology, along with much of the content in this
course, is one of the outputs of the High Performance Initiative. Before
describing the Performance Testing Methodology, it is important to answer a
few questions:
• Why is Performance Testing important?
• What is performance in a Business Intelligence environment?
• Why is having a Performance Testing Methodology important?
Why is Performance Testing Important?
Software performance is critical, especially for enterprise software
applications. Oftentimes, it is the key reason for the success or failure of a
purchasing deal. A BI application that runs very slowly or crashes from time to
time will never sell, no matter how rich its functionality is. Because of this,
the MicroStrategy BI product must pass strict performance testing before it is
released.
The MicroStrategy Enterprise Analysis QE team runs performance testing
in-house to certify that the product has good performance before it acquires
Generally Available (GA) status. The passing criterion is whether the
system successfully survives 24 hours of concurrency stress testing. Due to the
complexity of BI systems, even if the product passed performance criteria
during in-house testing, it might present poor performance in a customer
environment.
Many factors can affect the system performance: database server, Web
application server, network environment, system architecture, user usage
patterns, business logic, degree of code optimization, and so forth. Hence, it is
often difficult to achieve the performance goals in the customer environment
without any adjustments.
For engineers in the MicroStrategy technology department to be able to resolve
such issues, whether through tuning or code optimizations, it is crucial that
you understand your system's performance goals, design the right tests to
verify system performance, and help identify the performance bottlenecks.
Depending on the characteristics of a BI implementation, different types of
performance testing might be needed to cover different areas. For example:
• Benchmark Testing: Used to ensure the project meets SLAs
• Load Testing: Used to ensure the system has enough spare capacity to
meet projected usage
• Regression Testing: Used to ensure that everything is still performing
well after changes or upgrades have been made to the system
What is Performance in a BI Environment?
To be able to test and do something about performance, you must be able to
measure it. In other words, product performance must be specified in a
quantitative way, as a set of numeric values that describe how fast and how
efficiently a software system can complete specific computing tasks.
For example, instead of saying "the Web application must respond to report
execution requests quickly," a performance statement will say "the Web
application must return results for the report execution request in 2 seconds."
For this reason, performance needs to be measured before you can do any
quantitative analysis. To support performance analysis that answers a specific
performance question, performance metrics need to be collected. The most
commonly used performance metrics are described below.
Response Time
From the user's perspective, software performance translates to response
time. Response time is the time needed for the system to respond to a user
request. It is an end-to-end time, from the instant the user submits a request
to the instant the user gets the results back.
Response time consists of two parts:
• System response time: Time from the instant the client receives the user
request to the instant the client receives the results data from the server.
• Rendering time: Time the client spends rendering the result. For instance,
when executing a Report Services document in MicroStrategy Web, the system
response time is over once the client (browser) gets all the results.
However, the client still needs to present the data by interpreting and
executing JavaScript, which corresponds to rendering time.
Whether a specific response time is acceptable depends on the type of request
submitted by a user. For example, for report execution, a 2-second response
time is very attractive, while more than 10 seconds denotes poor performance.
Nevertheless, for a large export request that happens once a day, 30 minutes
may be acceptable.
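The end-to-end definition above can be captured with a simple timing wrapper. The following Python sketch is illustrative only; `run_report` is a hypothetical stand-in for a real request, not a MicroStrategy API:

```python
import time

def measure_response_time(action):
    """Return (result, elapsed seconds) for one end-to-end request."""
    start = time.perf_counter()
    result = action()
    return result, time.perf_counter() - start

def run_report():
    time.sleep(0.05)  # simulate server-side processing
    return "report results"

result, seconds = measure_response_time(run_report)
print(f"response time: {seconds:.3f} s")
```

In a real test, the action would be the actual user request, and rendering time would be measured separately on the client.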
Job Throughput
Job throughput refers to the average number of user requests that are
processed by the system in a unit of time. In general, throughput is measured
in requests/second or pages/second. Throughput is a key indicator in capacity
planning projects, because it directly describes the system's performance
capacity. It is also commonly used in performance tuning. According to
in-house testing data, 80% of performance issues are due to throughput
bottlenecks.
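As a small illustration of the metric, average throughput is simply completed requests divided by elapsed time. The completion timestamps below are made up for the example:

```python
# Completion timestamps (seconds from test start) -- sample data.
completion_times = [0.4, 0.9, 1.2, 1.7, 2.1, 2.3, 2.8, 3.5]

def throughput(timestamps):
    """Average number of requests completed per second over the run."""
    span = max(timestamps) - min(timestamps)
    return len(timestamps) / span

print(f"throughput: {throughput(completion_times):.2f} requests/s")
```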
CPU Usage
Any process running in an operating system uses CPU time. For instance,
when Intelligence Server is idle, the server process uses zero CPU time. When
it receives a request to execute a report, the server process becomes busy
until the report results are returned to the user. With proper tools, CPU time
can be calculated for any running process. For a specific user request, the
smaller the CPU usage, the better the performance.
Memory Usage
Any application running in an operating system uses system memory. The
memory usage of an application should reach a stable state after running for a
long time.
When the application process starts to handle a user request, it usually causes
a jump in memory usage. This is because the process needs additional memory to
perform calculations and to hold temporary data. After the request is
processed, the additional memory is released and memory consumption returns to
its original level. If the additional memory is not properly released while the
application keeps running, memory usage continues to increase until it reaches
a limit, which can cause the application to crash.
There are also instances when the application consumes a greater than expected
amount of memory from the system. This usually indicates inefficient memory
allocation in the program algorithms.
Data Size
Data size usually refers to output file size, such as the file generated when
exporting a report.
Why Is Having a Methodology Important?
As the figure below illustrates, a MicroStrategy implementation is made up of
multiple layers, each one with its own characteristics and complexities that
may challenge even experienced performance testers.
MicroStrategy Platform
Performance testing results are usually non-deterministic. The same test, when
executed in different environments, will probably produce different results.
Even in a consistent environment, running the same test twice can also
generate different results. This is because there are still some environment
factors, such as network traffic, that may vary and cannot be controlled.
User concurrency may also represent a challenge. It is more complex to do
reliable measurements of user requests, such as browsing folders or executing
reports, when there are many users logging in and performing other actions at
the same time. To come up with reliable results, it is crucial that you have a
methodology in place.
Without a proper methodology to address the inherent complexities of any
respectable BI deployment, most performance testing initiatives will at best
yield inconclusive results leading to ineffective solutions. At worst, they fail
and produce incorrect results, which is not a good position to be in when you
are under pressure to meet stringent SLAs or a tight delivery deadline.
Performance Testing Methodology
After completing this topic, you will be able to:
Understand the performance testing methodology and apply it to your
environment needs.
The key to making your performance testing fulfill your business and technical
goals is the adoption of a formal Performance Testing Methodology. The
MicroStrategy Performance Testing Methodology is used by the MicroStrategy
Technology, Support, and Services teams to test, troubleshoot, and optimize
any individual performance challenge. It consists of five simple but essential
steps, illustrated below:
MicroStrategy Performance Testing Methodology
• Define: Go beyond generalities such as "performance is slow/bad" and
define with precision the action that needs performance testing and what
your performance goals are for it.
• Quantify: Take an initial high-level measurement of the action. Start with
single user testing; if single user performance is acceptable, then move to
concurrency testing.
• Profile: Generate a detailed profile of the action, understanding how each
component of the architecture contributes to the end-to-end response
times.
• Optimize: Optimize each one of the layers, focusing on the biggest
contributors to overall response time first. Continue this process until you
reach acceptable performance levels as defined in Step 1.
• Monitor: Establish a performance monitoring plan to ensure the system
maintains optimum levels of performance moving forward.
Note: It is very important to make sure that the actions, tests, decision
points, and changes made to the system are properly documented all along the
way. This helps in several ways:
• It keeps a record of the changes that were made to the system in
case a rollback becomes necessary
• It enables different members of the organization to participate in and
follow up on the project at different times
• It ensures that the performance testing process also becomes a learning
process for the organization
The following sections describe in detail each step of the MicroStrategy
performance testing methodology.
Define System Goals
The first step toward accomplishing successful results in the performance
testing process is to define the goals and expectations for the testing.
Performance goals are usually defined by the business side or the project
stakeholders. Having the right goals established will help you decide the
right type of test to run and the right tools to perform the test.
However, to accomplish successful performance testing and troubleshooting, it
is important to go beyond general terminology such as "the system is slow" or
"performance is bad" and define, with a high degree of precision, the exact
action that requires performance testing and the acceptable performance
target for it. A vague definition of a performance problem often leads to the
wrong type of performance testing. As a consequence, people's time is wasted
and incorrect results are generated.
A performance goal is usually described by the following parameters:
• Response time: There may be different response time goals for different
user profiles or different types of interactive requests.
• Throughput: Maximum throughput for the system while still maintaining
the desired response time.
• Concurrency or User Load: Number of interactive users supported by
the system during the peak hour while maintaining the desired response time.
Note: Throughput and concurrency goals can be made equivalent. For
example, if one concurrent user is expected to run 2 reports per minute,
then the goal of reaching 100 concurrent users during the peak hour is the
same as reaching 200 reports/minute.
• Utilization: Use of application server and operating system resources, such
as thread pools, heap, JDBC connection pools, CPU, physical memory, disk
I/O, and network activity.
The image below illustrates the relationship between these parameters:
User Load X Utilization X Response Time X Throughput
The above image shows that increasing concurrency (user load) gradually
generates longer response times as resource utilization increases linearly.
At about 80% utilization, response time starts to degrade even though
throughput continues to increase. At this point, the system reaches the
saturation point, where performance is no longer acceptable.
By setting the right goals when performing performance testing, you can find
out where the saturation point is and avoid reaching it by tuning the system
or adding hardware.
The list below presents a few examples of good, precise performance definition
statements. They all define with precision the action that will be tested for
performance and the criteria that need to be met for the performance test to
be considered successful:
• Average response time for the Regional KPI Overview dashboard should
remain under 5 seconds
• All prompt dialogs in the Executive Dashboard application must render in 1
second or less
• Total publication time for the Sales and Forecasting cube should take 45
minutes or less
• The Performance Analysis project should tolerate a load of 100 report
requests per minute at peak time, with an average response time of 5 seconds
or less
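Precise goals like these can be expressed as data and checked mechanically. The sketch below uses hypothetical names, targets, and measurements modeled on the statements above:

```python
# Hypothetical targets and measurements; names and numbers are illustrative.
targets = {
    "regional_kpi_dashboard_avg_response_s": 5.0,
    "prompt_dialog_render_s": 1.0,
    "cube_publication_min": 45.0,
}
measurements = {
    "regional_kpi_dashboard_avg_response_s": 4.2,
    "prompt_dialog_render_s": 1.3,
    "cube_publication_min": 40.0,
}

def evaluate(targets, measurements):
    """A goal passes when the measured value does not exceed its target."""
    return {name: measurements[name] <= limit for name, limit in targets.items()}

for name, passed in evaluate(targets, measurements).items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```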
Quantify Performance
After you set the performance goals, the next step is to create a process to
obtain the key performance indicators (KPIs) on the current production
environment. Tracking these KPIs over time is a critical step to establish the
performance and scalability baseline and to evaluate performance improvements
on the production system.
Collecting and analyzing performance KPIs provides the facts you need to
decide what the next step will be. Before spending resources on a
fully-fledged performance testing and troubleshooting effort, it is
recommended that you first take a single-user, high-level measurement of the
action. This measurement allows you to determine a couple of key items:
• How close you are to the target performance
• The types of tests that might subsequently be required
You start with a Single User Test, which measures the response time for the
action you are testing on an isolated environment with only ONE user
accessing the system. In this type of test, the response time for the action
represents the best possible response time, because all of the system's
resources are 100% dedicated to completing that action. Depending on the
results of this test, different outcomes will take place:
• If the Single User performance is acceptable, that is, the response time of
the test is under the performance target you stipulated in the DEFINE
stage, your next step in the methodology is to focus on concurrency.
• On the other hand, if the Single User test does not meet the performance
target, doing concurrency or load testing does not make sense, given that
the best possible response time is already over the target.
In this case, the next step in the methodology is profiling and optimizing the
action under Single User. Only when the action is optimized and its response
time finally meets the targets does it make sense to move to concurrency
testing.
The figure below illustrates both scenarios:
Single User Test Outcomes
Profile the Action
The Profile step generates a detailed performance profile of the action.
Depending on the results of the Single User test, you will do either a
Single-User Performance Profiling (if the test did not meet the target) or a
Concurrency Performance Profiling (if the test met the target).
Each type of profiling has its own technique, and both produce diagrams
depicting the action's performance profile. In the case of Single User
Performance Profiling, the technique consists of generating a Performance
Stack Diagram, as shown below:
Performance Stack Diagram
In the case of Concurrency Performance Profiling, the technique consists of
generating a Performance Degradation Curve, as shown below:
Performance Degradation Curve
Single User Performance Profiling
In the Quantify step, the results for a single user performance test may be
represented by a single bar, as shown below:
Single User Test Results
The height of the red bar represents the response time measurement, which in
this case exceeds the established performance target. The problem with this
image is that it does not provide any details about the time it took each of
the components involved in this execution to complete. With such a graph,
it is difficult to develop an optimization plan. To truly understand which
components are responsible for the poor performance, you need to create
a more detailed performance profile of the action. The image below represents
a performance profile broken down by component:
Single User Test Results - Detailed
Note: This detailed information can help determine how best to optimize the
action to meet the performance targets.
The execution flow of a typical MicroStrategy dashboard, illustrated below, can
exemplify this technique:
Dashboard Execution Flow
Between the database and the dashboard, there is an underlying execution
workflow that has a direct impact on dashboard size and performance. For
information about the detailed steps involved with a dashboard execution, see
The Dashboard Execution Flow starting on page 159.
The image below depicts what the Performance Stack Diagram looks like for a
typical MicroStrategy Dashboard that executes against the Data Warehouse:
Performance Stack Diagram of a MicroStrategy Dashboard
The graph bar is now broken down into the following sections:
• SQL Generation within Intelligence Server
• Query Engine
• Analytical Engine, Data Preparation Engine
• XML Generation Engine
• Web server processing time
• Network Time
• Client Time
This detailed Performance Stack Diagram is an important tool to help you
make decisions on where and what to optimize to improve performance. All of
the parameters and component execution times in this type of Performance
Stack Diagram can be obtained using the different out-of-the-box performance
counters and tools that ship with the MicroStrategy platform.
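At its core, a Performance Stack Diagram is a per-component breakdown of one end-to-end time. The sketch below uses hypothetical single-user timings, not actual counter output, to show how the biggest contributor is identified:

```python
# Hypothetical per-component timings for one single-user execution, in seconds.
stack = {
    "SQL Generation": 0.8,
    "Query Engine": 3.5,
    "Analytical Engine": 0.6,
    "XML Generation": 0.4,
    "Web Server": 0.9,
    "Network": 0.3,
    "Client": 1.1,
}

total = sum(stack.values())
biggest = max(stack, key=stack.get)   # component with the longest time
share = stack[biggest] / total
print(f"end-to-end: {total:.1f} s")
print(f"biggest contributor: {biggest} ({share:.0%} of total)")
```

The component with the largest share is where optimization effort pays off first, as the Optimize step below discusses.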
The tables below list the tools and the counters to measure for each component
of the MicroStrategy architecture. The key to acquiring useful results with
these tools is ensuring that the measurements are taken in Single User mode.
In this mode, the system's resources are 100% dedicated to completing the
action you are profiling.
Performance Counters - Time

Performance job flow information for MicroStrategy Dashboard executions - Time

Component            Counter                    Measure Time
Intelligence Server  Diagnostics Configuration  Document Execute Task
                                                Find Report Cache
                                                Resolution Step
                                                SQL Generation
                                                Query Engine
                                                Analytical Engine
                                                Data Preparation
                                                XML Generation
Web Server           Web Statistics             Web Server Processing Time
Client               Web Statistics             Resource Loading
                                                Java Script
                                                Wait Time
Network              -                          Any remaining difference
End to End           Stop Watch                 End to End time
Performance Counters - Size

Performance job flow information for MicroStrategy Dashboard execution - Size

Component  Counter                 Measure Size
Size       Document Cache Monitor  XML
           Web Statistics          Data sent from Intelligence Server to
                                   Web (bytes received)
Concurrency Performance Profiling
When the performance test in the Quantify stage is able to meet the target, the
next step is to profile the action in Concurrency mode. The Concurrency
Performance Profiling technique uses the Performance Degradation Curve.
The image below shows a Performance Degradation Curve with two measures
plotted against the same variable, Submission Rate:
• System Throughput: Represents the number of reports that completed
successfully in one minute
• Average Response Time: Represents the time it took the reports to
complete, plotted against the submission rate
Performance Degradation Curve
Note: The difference between Throughput and Submission Rate is that
Submission Rate is fixed, because it is an input to the system that you
can control, while Throughput is the output of the system. As the number
of concurrent users in the system increases, the number of report
requests, or submission rate, also increases.
One region of the Performance Degradation Curve that is important to identify
is the Performance Plateau. It identifies a range of values at which
performance stays consistent. Whatever the performance of the system is in
this region, it is the best performance that can ever be expected from the
system without conducting further tuning.
In the example curve, above the 4000 submission rate, average response times
start to climb very quickly. The system is no longer handling the load properly
and may even be unstable. This change in response time often happens very
quickly and without warning, causing a sharp change in the slope, or direction,
of the curve. This part of the curve is called the Performance Knee. The knee
represents the absolute maximum load that the system will be able to tolerate.
Increases in submission rates eventually translate into performance
degradation. In this sense, the Performance Degradation Curve is a powerful
profiling tool, because it provides the information on the maximum number of
reports that the system can process in one minute, before the report
performance starts to degrade.
There are two points in the curve that are important because they represent key
performance parameters for a given BI system:
• Point A: Largest system throughput at which the system response time is
stable (the Performance Plateau ends at Point A). It reflects the user
experience under high concurrency.
• Point B: Maximum throughput observed irrespective of response times. It
is the absolute maximum capacity of the system.
Note: Points A and B will not always coincide in the graph.
In environments where reports are executed in batch, user interactivity is not
part of the equation and therefore response times are less relevant. In this
case, Point B is more important, because you do not care as much about
response times as you do about maximum capacity.
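Given load test results at increasing submission rates, Points A and B can be read off programmatically. The numbers below are hypothetical, and the 5-second target stands in for an acceptable response time from the Define step:

```python
# (submission_rate, throughput, avg_response_s) per load test -- sample data.
runs = [
    (500, 500, 4.6), (1000, 1000, 4.7), (1500, 1500, 4.6),
    (2000, 2000, 4.8), (2500, 2480, 5.9), (3000, 2900, 9.5),
    (3500, 3050, 18.0), (4000, 2800, 40.0),
]
TARGET_RESPONSE_S = 5.0

# Point A: largest throughput at which response time is still within target.
point_a = max(tp for _, tp, rt in runs if rt <= TARGET_RESPONSE_S)
# Point B: maximum throughput observed, irrespective of response time.
point_b = max(tp for _, tp, _ in runs)

print(f"Point A: {point_a} reports/min, Point B: {point_b} reports/min")
```

Note that in this sample data, throughput at the highest submission rate is lower than Point B, mirroring the degradation past the knee described above.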
Because most performance issues occur under high concurrency, you need to
develop tests that mimic the high concurrency that you experience on a regular
basis. The best way to accomplish such testing is by using the right tools.
There are different load testing tools on the market that allow you to
calculate the different points in a Performance Degradation Curve. Testing
tools are crucial for the success of most performance tests, because manual
test executions and stop watches can only cover a very limited scope of
performance issues.
The set of tools selected for the testing can be commercial tools, tools
developed in-house, or a combination of both. Commercial tools are often the
better option because they are mature, stable, easy to use, and provide
powerful features for viewing, analyzing, and reporting results.
After the requirements are defined, you can start evaluating commercial tools,
matching their characteristics against your requirements. You then select the
tool with the most matches. In the event that no commercial tool satisfies
your requirements, you can create your own specialized tools.
The most popular testing tools in the market are listed in the following table:
Note: MicroStrategy uses SilkPerformer to perform concurrency performance
profiling. LoadRunner is more broadly adopted in the market.
Popular Testing Tools

Software                         Vendor
Open STA                         Open System Testing Architecture
IBM Rational Performance Tester  IBM
JMeter                           An Apache Jakarta open source project
LoadRunner                       HP
SilkPerformer                    Micro Focus

Performance Degradation Curve Real Examples
The figure below shows the Performance Profile MicroStrategy generated for
an iPad Dashboard Application:
Response Times for iPad Application
Each pair of points in the graph corresponds to a particular Submission Rate
and represents a full load test of the system. To produce this chart, a total
of at least nine different load tests were necessary. The first test was
performed with a single user, and the subsequent tests were performed with
increasing concurrency loads, resulting in incrementally higher submission
rates of 500 reports per minute, then 1000 reports per minute, then 1500, and
so on.
From this graph, the following results can be extracted:
• The best possible Average Response Time in the system is about 4.64
seconds, indicated by the Performance Plateau.
• The maximum throughput tolerated by the system is slightly above 3000
queries per minute, indicated by the throughput curve's Performance Knee.
The figure below shows an example of a Performance Profile that indicates
problems with the application:
Profile Indicating Performance Issues
From this graph, the following results can be extracted:
• Response time starts degrading very early on in the tests. Performance
degrades significantly before the system reaches its maximum throughput,
by the 10 queries/min mark.
• There is no clear performance plateau.
This type of bad curve mainly denotes two types of issues:
• Other applications running simultaneously in the system and draining
resources that would otherwise have been available for the test.

Note: During the profiling test, it is important to ensure that nothing else
is running on the system and that all resources are 100% dedicated to the
test.
• There are aspects of the system that are not yet 100% optimized, such as:
  - Improper default configuration settings on the server components (for
    example, default memory allocation for cache, working set, or XML
    generation on Intelligence Server)
  - Insufficient heap space allocated on the Web server
  - Insufficient memory on Intelligence Server

By optimizing the system, you should be able to obtain a curve with a shape
that looks much more like the GOOD CURVE example.
Note: MicroStrategy runs hundreds of performance tests each week. The
results and conclusions of these tests are often issued as press releases
and posted on the MicroStrategy website.
The curve below is one example that covers a set of benchmark tests that
MicroStrategy Labs recently ran to compare the performance of MicroStrategy
release 9.0.2 with previous releases.
MicroStrategy Benchmark Tests
This curve looks slightly different cosmetically and is shown here
upside-down. However, you can read from this curve the same type of
measurements extracted from the previous two examples:
• The performance plateau is at about 1.5 seconds, indicating the best
possible response time for the system
• The performance knee is at about 147 KiloCycles.

Note: The KiloCycle is MicroStrategy's standard performance power rating
and represents 1,000 user requests per hour. Translated into submission
rate, this number represents 147,000 queries per hour, or 2,450 queries
per minute.
Best Practices Dealing with Test Results
Result analysis is performed after the test to conclude whether the test is a
pass or a failure. It is the step that sets a skilled performance tester apart
from a novice. Each result analysis process requires a good understanding of
system performance, software architecture, and various performance metrics. It
is an art rather than a science, and experience plays an important role in
this phase.
All popular commercial performance test tools provide a graph module for
analyzing the data. If you have created your own test scripts to obtain
performance results, you can use applications like Microsoft Excel to
generate the graphs.
When analyzing the data resulting from the test, you will often encounter
discrepancies if you ran the tests multiple times. The recommendations below
are important to follow to achieve accuracy in the results and ensure that test
results are similar enough to be considered reliable:
• Compare results from at least five test executions.
• If more than 20 percent of the test execution results appear dissimilar
to the others, there are issues with the test environment, the application,
or the test execution.
• If measurements from a test are noticeably higher or lower than the results
of the other test executions when charted side-by-side, they are probably
not statistically similar.
• If one data set for a particular item (for example, the response time for a
single page) in a test is noticeably higher or lower, but the results for the
data sets of the remaining items appear similar, the test itself is probably
statistically similar. It is probably worth the time to investigate the
reasons for the difference in the one dissimilar data set.
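The similarity guideline above can be applied mechanically. The sketch below flags runs that deviate from the mean by more than 20 percent; the response times are made up for the example:

```python
from statistics import mean

# Response times (seconds) from five executions of the same test -- sample data.
runs = [4.8, 5.1, 4.9, 7.9, 5.0]

def dissimilar(results, tolerance=0.20):
    """Return results deviating from the mean by more than `tolerance`."""
    avg = mean(results)
    return [r for r in results if abs(r - avg) / avg > tolerance]

outliers = dissimilar(runs)
print(f"dissimilar runs: {outliers}")
if len(outliers) > 0.20 * len(runs):
    print("more than 20% of executions are dissimilar: check the environment, "
          "the application, or the test execution")
```

The 20 percent tolerance here is one simple reading of the guideline; commercial tools offer richer statistical comparisons.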
After the action has been tested, profiled, and analyzed, the next step is to
determine how to optimize it to ensure it meets the performance targets.
Optimize the Action
The Performance Stack Diagram is a helpful tool for optimizing the tested
action to ensure it meets the performance targets. It allows you to pinpoint
which components of the execution are the biggest contributors to the long
response time. Start optimizing with these big contributors, and continue the
optimization process until an acceptable performance level is reached.
Note: Only one optimization should be introduced at a time, to isolate the
effect of the change and determine its effectiveness.
It is very important to reprofile after each optimization. Reprofiling enables
you to truly measure the impact of each change by comparing the results to
the previous run.
Reprofiling After Optimizing
Note: The methodology diagram has a loop-back arrow from the Optimize step
to the Profile step to denote the importance of reprofiling after optimizing.
In the previous example, the Performance Stack Diagram showed that the test
for a Single User did not meet the performance target. Therefore, the goal is
essentially to shave off enough time from the overall execution to get rid of
the portion of time that exceeds the performance target. This example
demonstrates the true value of a detailed performance profile, because it
allows you to focus your optimization efforts on the areas that will truly
make an impact.
For example, if you manage to reduce the execution time of the final
component by 50%, doubling its performance, you still do not meet the defined
performance target. The image below depicts this scenario:
Results when Optimizing the Final Component
However, if you focus on reducing the response time of the first component by
50%, you meet the performance target, as shown below:
Results when Optimizing the First Component
This targeted, component-by-component optimization is not possible if you do
not spend the time establishing a full performance profile for the action in the
first place.
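The arithmetic behind the two scenarios above can be sketched as follows. The component times and the 20-second target are hypothetical numbers chosen to mirror the scenario, not measurements from the course.

```python
# Sketch: halving the largest component saves far more absolute time
# than halving the smallest. All numbers are hypothetical.
target = 20.0
components = {"first": 16.0, "middle": 4.0, "final": 2.0}  # seconds

def total_after_halving(components, name):
    """Total response time after cutting one component's time by 50%."""
    times = dict(components)
    times[name] /= 2
    return sum(times.values())

print(total_after_halving(components, "final"))  # 21.0 -> still misses target
print(total_after_halving(components, "first"))  # 14.0 -> meets target
```

The same 50% improvement yields a 1-second saving in one case and an 8-second saving in the other, which is why the profile matters.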
Chapters 1 through 6 of this course provided the knowledge and skills to
optimize the different components of a MicroStrategy BI architecture after you
generate the full profile for the actions in your environment.
Finally, after you profile, optimize, reprofile, and continue this cycle until the
tested action performs at acceptable levels, the next step is establishing a
monitoring plan to ensure that performance remains acceptable as the system
evolves down the road.
Monitor the Environment
The frequency and resources used for monitoring the environment will largely
depend on the characteristics of the BI environment and the action that you
will be monitoring. The key is to ensure that when changes are introduced into
the system, they do not degrade the performance of the actions that are
important to your business users.
The MicroStrategy platform contains tools such as Enterprise Manager, Health
Center, Command Manager, and Integrity Manager that enable you to
monitor your system's performance. In addition to these tools and the different
performance logging mechanisms available within the MicroStrategy platform,
you can also use commercially available software, such as Silk Performer or
Load Testing.
Lesson Summary
In this lesson, you learned:
Performance testing can serve different purposes: it can demonstrate that
the system meets performance criteria, it can compare two systems to find
which one performs better, or it can measure what parts of the system or
workload cause the system to perform badly.
Product performance must be specified in a quantitative way, as a set of
numeric values that describe how fast and how efficiently a software system
can complete specific computing tasks.
The MicroStrategy Performance Testing Methodology is used by
MicroStrategy Technology, Support, and Services teams to test,
troubleshoot, and optimize any individual performance challenge.
It is very important to understand the different performance testing types
available and what they accomplish to get the right results during your
testing.
To support performance analysis and answer a specific performance
question, performance metrics need to be collected.
The most commonly used performance metrics are response time, job
throughput, CPU usage, memory usage, and data size.
The key to making your performance testing fulfill your business and
technical goals is the adoption of a formal performance testing
methodology.
The main steps in the performance testing methodology are: define system
goals, quantify performance, profile the action, optimize the action, and
monitor the environment.
Define the action to be tested and its performance target with precision,
because vague (or nonexistent) performance definitions lead to wasted
time and incorrect solutions.
Take an initial single-user, high-level measurement. Single-user response
time represents the best possible response time because all of the system's
resources are dedicated to completing that action.
Depending on the results of the single-user measurement, you need to do
either Single-User or Concurrency Performance Profiling.
For Single User Performance Profiling, the technique consists of generating
the Performance Stack Diagram, which shows how each component of the
architecture contributes to end-to-end response time.
For Concurrency Profiling, the technique consists of generating the
Performance Degradation Curve, which shows values obtained for User
Experience (Average Response Time) and System Throughput plotted
against increasing Submission Rates.
After a detailed Performance Stack Diagram is generated, methodically
optimize each one of the layers, focusing on the biggest contributors to
response time first.
Only one optimization should be introduced at a time in order to isolate the
effect of the change and determine its effectiveness.
It is very important to reprofile after each optimization and continue the
cycle until your performance targets are reached.
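The Performance Degradation Curve summarized above plots average response time (user experience) and system throughput against increasing submission rates. A minimal sketch of how such points might be tabulated from load-test measurements (all numbers hypothetical):

```python
# Sketch: tabulate points for a Performance Degradation Curve from
# hypothetical load-test results. Each run records the submission rate
# (jobs/min), per-job response times (s), jobs completed, and elapsed
# wall-clock minutes.
def curve_points(runs):
    """Return (rate, average response time, throughput) per run."""
    points = []
    for rate, times, completed, elapsed_min in runs:
        avg_response = sum(times) / len(times)   # user experience
        throughput = completed / elapsed_min     # system throughput
        points.append((rate, avg_response, throughput))
    return points

runs = [
    (10, [2.1, 2.2, 2.0], 100, 10.0),
    (50, [3.5, 3.8, 3.6], 480, 10.0),
    (100, [9.0, 11.5, 10.2], 620, 10.0),
]
for rate, avg, tput in curve_points(runs):
    print(f"rate={rate}/min  avg response={avg:.1f}s  throughput={tput:.0f}/min")
```

Plotting these tuples with rate on the x-axis reproduces the curve's shape: response time climbs while throughput flattens as the system saturates.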
WORKSHOP
The objective of this workshop is to enable you to experience the performance
gains of redesigning a MicroStrategy dashboard by following the methodology
presented in this course.
Designing a High Performance Dashboard
ACME Corporation has been using MicroStrategy for several years, delivering
critical business data to key personnel using well-crafted dynamic
dashboards. After the release of MicroStrategy 9.2.1, the IT managers at ACME
Corporation quickly decided to upgrade their MicroStrategy environment to
take advantage of the new code and the high performance features. One of the
main goals of the IT team is to reduce the execution time of a high-profile
dashboard, which is frequently accessed by the company's stockholders, to less
than 10 seconds. The name of this dashboard is Pre-HP Analysis.
The Pre-HP Analysis dashboard currently takes more than 25 seconds to
execute. As the main dashboard developer at ACME Corporation, you have
been assigned the task of redesigning the Pre-HP Analysis dashboard according
to the High Performance recommendations provided by MicroStrategy. The
stockholders' expectation is to access the dashboard data in less than 8 seconds.
Dashboard Overview
The Pre-HP Analysis dashboard is an interactive interface that enables users to
analyze the main business metrics across time. It provides details on how the
business is doing based on geography and income level. The current version of
the dashboard contains 10 datasets, which are displayed across two panel
stacks, each providing different levels of analysis. These datasets are all View
Reports that are fed by a single cube (Cube 1).
This dashboard also contains several selectors that enable users to select the
data they want to see. Some of the selectors enable users to switch between
panels in a panel stack, while others enable users to display different metrics or
elements of an attribute, in grids and graphs.
The Pre-HP Analysis dashboard is located in the \MicroStrategy Tutorial\Public
Objects\Reports\High Performance\Workshop folder.
The Design Strategy
Based on the MicroStrategy methodology to achieve high performing
dashboards, the main steps you decide to take to redesign the dashboard are
the following:
Replace all the datasets with the cube as the dataset: This action
eliminates the overhead caused by Intelligence Server extracting the data
from the cube and transforming it to the level of the dataset reports. It also
eliminates the need for Intelligence Server to create the virtual dataset and
to process the data to populate the different dashboard components.
Make all selectors Selectors as filters: This action will significantly
reduce the initial load time of the dashboard because selectors as filters
initially retrieve only one slice of data, instead of the full set.
High-level Steps
The high-level steps to redesign the Pre-HP Analysis dashboard to make it
execute in less than 10 seconds are listed below:
In Desktop, connect to the three-tier project source and browse to the
folder where the Pre-HP Analysis dashboard is located.
Make a copy of the Pre-HP Analysis dashboard and work on the copy
version (rename it High Performing).
In Web, run the High Performing dashboard before making any changes.
Time the execution in Flash.
Write down the time it takes for the dashboard to execute. At the end
of the workshop, you will compare this time with the time it takes to
execute the redesigned dashboard.
Remove all the datasets (all the view reports) from the dashboard.
Add Cube 1 as the only dataset.
Re-configure all the selectors:
After removing all the datasets, the selectors lose their source and
target. You need to reconfigure them.
Make all the appropriate selectors Selectors as filters.
Re-configure the Grid/Graphs
Map each of the cube's attributes and metrics to the appropriate grid or
graph.
Re-configure all the view filters for all grids and graphs.
After you delete the datasets, the view filters are also deleted.
Run the High Performing dashboard, timing its execution in Flash.
After comparing the Flash execution times of the High Performing
and the Pre-HP Analysis dashboards, you should notice a great
performance gain.
You can use the detailed instructions if you want help.
Detailed Steps
Phase 1: Preparing the Environment for Optimization
Make a copy of the Pre-HP Analysis dashboard
1 In Desktop, connect to the three-tier project source as Administrator, with
a blank password.
2 In the Folder List, browse to the following location to find the Pre-HP
Analysis dashboard: \MicroStrategy Tutorial\Public
Objects\Reports\High Performance\Workshop folder.
3 In the Object Viewer, make a copy of the Pre-HP Analysis dashboard and
paste it in the same folder.
4 Rename the copied dashboard High Performing.
Run the High Performing dashboard from MicroStrategy Web and time its
execution
5 On the Windows Start menu, point to Programs, followed by
MicroStrategy, followed by Web, and select Web.
6 On the MicroStrategy Web page, click the MicroStrategy Tutorial folder.
7 On the Login page, in the User name box, type administrator.
8 Click Login.
9 In Web, browse to Shared Reports\High Performance\Workshop.
10 Prepare to time the execution of the High Performing dashboard.
11 Click the High Performing dashboard.
How long does it take to execute, from the time you click on it to the time it
renders in Flash Mode?
_______________________________________
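If you prefer a stopwatch in code rather than a wall clock, a small timing helper can wrap any action. This is a sketch only: the URL shown is a placeholder you would replace with your own Web server address, and fetching a page this way measures only the server round trip, not Flash rendering in the browser, so the manual timing above remains the workshop's measurement of record.

```python
# Sketch: stopwatch helper for timing any action in code.
import time
import urllib.request

def time_action(action):
    """Run a callable and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = action()
    return result, time.perf_counter() - start

# Example with a placeholder URL (adjust for your environment):
# page, elapsed = time_action(
#     lambda: urllib.request.urlopen(
#         "http://webserver/MicroStrategy/asp/Main.aspx").read())
# print(f"Executed in {elapsed:.1f} s")
```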
Phase 2: Redesign the Dashboard, Using Cube as Dataset and
Reconfiguring Selectors
Remove all the datasets (all the view reports) from the dashboard and add
Cube 1 as the only dataset
1 In Desktop, edit the High Performing dashboard.
2 In the Datasets pane, remove all 10 datasets.
You can use the CTRL key to multi-select the datasets. Then
right-click the selection and select Delete from Document. In the
MicroStrategy Desktop pop-up window, click Yes.
3 In the Datasets pane, click the Add Dataset link.
4 In the Select a report window, double-click the High Performance folder,
followed by the Workshop folder, and finally double-click the Cube and
VR folder.
5 Select Cube 1.
6 Click Open.
Cube 1 becomes the only dataset in the High Performing dashboard.
7 Click Save and Close.
Redefining the dashboard title and reconfiguring all the selectors in the
body of the dashboard
8 In MicroStrategy Web, return to the Workshop folder.
9 Right-click the High Performing dashboard and select Edit.
10 In Design view, double-click the Pre-HP Analysis title.
11 Rename it to High Performing, as shown below:
12 In the dashboard Design View, on the Tools menu, select Document
Structure if it is not already selected.
13 In the Document Structure pane, expand Body.
14 Right-click Customer State Selector and select Properties and
Formatting.
15 In the Properties and Formatting window, under Properties, select
Selector.
16 In the Source drop-down list, select Customer State.
17 Select the Apply selections as a filter check box.
18 Click Apply.
Your Properties and Formatting window should look like the image below:
19 Click OK.
20 In the Document Structure pane, right-click Income Selector and select
Properties and Formatting.
21 In the Properties and Formatting window, under Properties, select
Selector.
22 In the Source drop-down list, select Income Bracket.
23 Select the Apply selections as a filter check box.
24 Make sure the Show option for All check box is selected.
25 Click Apply.
Your Properties and Formatting window should look like the image below:
26 Click OK.
27 In the Document Structure pane, right-click Payment Selector and select
Properties and Formatting.
28 In the Properties and Formatting window, under Properties, select
Selector.
29 In the Source drop-down list, select Payment Method.
30 Make sure that Grids 1, 119, and 127 display in the Selected box.
31 Select the Apply selections as a filter check box.
32 Make sure the Show option for All check box is selected.
33 Click Apply.
Your Properties and Formatting window should look like the image below:
34 Click OK.
35 In the Document Structure pane, right-click Brand Selector, and select
Properties and Formatting.
36 In the Properties and Formatting window, under Properties, select
Selector.
37 In the Source drop-down list, select Brand.
38 Make sure that PanelStack1 displays in the Selected box.
39 Select the Apply selections as a filter check box.
40 Click Apply.
41 Click OK.
42 In the dashboard Toolbar, click Save As.
43 In the Save As window, click OK.
44 In the Confirm Overwrite window, click Yes to overwrite the previous
version of the dashboard.
45 In the Document Saved window, click Return to Design Mode.
Because this workshop contains several steps, it is recommended to save
the dashboard from time to time to avoid losing the changes you have
made.
Phase 3: Redesign the Dashboard, Redefining the
Components of PanelStack1
Redefining the components within PanelStack1
1 In the Document Structure pane, expand PanelStack1.
2 Expand Month.
3 Right-click Month Selector and select Properties and Formatting.
4 In the Properties and Formatting window, under Properties, select
Selector.
5 In the Source drop-down list, select Month.
6 Make sure that Graph3, Graph111, and Grid119 display under the Selected
box.
7 Select the Apply selections as a filter check box.
8 Click Apply.
Your Properties and Formatting window should look like the image below:
9 Click OK.
10 In the Document Structure pane, under Month, select Graph3.
The Graph is highlighted in the dashboard layout.
11 In the toolbar that displays by Graph3, click the Graph Zones icon.
A window with the graph zones displays.
12 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
13 In the Dataset Objects pane, drag the Revenue metric and drop it in the
Graph Zone window, under METRICS.
When you drag an object to the Graph Zone, a yellow bar displays
when you can drop it in the appropriate zone.
14 In the Dataset Objects pane, drag the Revenue Forecast metric and drop it
in the Graph Zone window, under METRICS, under the Revenue metric.
15 In the Graph Zone window, under SERIES, drag Metrics and drop it under
CATEGORIES.
Your Graph Zones window should look like the image below:
16 Close the Graph Zones window by clicking x.
17 On the Tools menu, select Document Structure.
18 In the Document Structure pane, under Month, select Graph111.
Graph111 is highlighted in the dashboard layout.
19 In the toolbar that displays by Graph111, click the Graph Zones icon.
A window with the graph zones displays.
20 On the Tools menu, select Dataset Objects.
21 In the Dataset Objects pane, drag the following metrics and drop them in
the Graph Zone window, under METRICS: Cost, Freight, Profit, and Max
Revenue per Customer.
Your Graph Zones window should look like the image below:
22 Close the Graph Zones window by clicking x.
23 On the Tools menu, select Document Structure.
24 In the Document Structure pane, under Month, select Grid119.
Grid119 is highlighted in the dashboard layout.
25 On the Tools menu, select Dataset Objects.
26 In the Dataset Objects pane, drag the Month attribute to the row of the grid
(vertical line) and the Transactions Per Customer metric to the column
(horizontal line).
Your Grid should look like the image below:
27 In the dashboard Toolbar, click Save As.
28 In the Save As window, click OK.
29 In the Confirm Overwrite window, click Yes.
30 In the Document Saved window, click Return to Design Mode.
31 On the Tools menu, select Document Structure.
32 In the Document Structure pane, under PanelStack1, expand Quarter.
33 Right-click Quarter Selector and select Properties and Formatting.
34 In the Properties and Formatting window, under Properties, select
Selector.
35 In the Source drop-down list, select Quarter.
36 Make sure that Graph95, Graph106, and Grid1 display under the Selected
box.
37 Select the Apply selections as a filter check box.
38 Click Apply.
39 Click OK.
40 In the Document Structure pane, under Quarter, select Graph95.
Graph95 is highlighted in the dashboard layout.
41 In the toolbar that displays by Graph95, click the Graph Zones icon.
A window with the graph zones displays.
42 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
43 In the Dataset Objects pane, drag the Revenue metric and drop it in the
Graph Zone window, under METRICS.
44 In the Dataset Objects pane, drag the Revenue Forecast metric and drop it
in the Graph Zone window, under METRICS, under the Revenue metric.
45 In the Graph Zone window, under SERIES, drag Metrics and drop it under
CATEGORIES.
Your Graph Zones window should look like the image below:
46 Close the Graph Zones window by clicking x.
47 On the Tools menu, select Document Structure.
48 In the Document Structure pane, under Quarter, select Graph106.
Graph106 is highlighted in the dashboard layout.
49 In the toolbar that displays by Graph106, click the Graph Zones icon.
A window with the graph zones displays.
50 On the Tools menu, select Dataset Objects.
51 In the Dataset Objects pane, drag the following metrics and drop them in
the Graph Zone window, under METRICS: Cost, Freight, Profit, and Max
Revenue per Customer.
Your Graph Zones window should look like the image below:
52 Close the Graph Zones window by clicking x.
53 On the Tools menu, select Document Structure.
54 In the Document Structure pane, under Quarter, select Grid1.
Grid1 is highlighted in the dashboard layout.
55 On the Tools menu, select Dataset Objects.
56 In the Dataset Objects pane, drag the Month attribute to the row of the grid
(vertical line) and the Transactions Per Customer metric to the column
(horizontal line).
Your Grid should look like the image below:
57 In the dashboard Toolbar, click Save As.
58 In the Save As window, click OK.
59 In the Confirm Overwrite window, click Yes.
60 In the Document Saved window, click Return to Design Mode.
61 On the Tools menu, select Document Structure.
62 In the Document Structure pane, under PanelStack1, expand Year.
63 Right-click Year Selector and select Properties and Formatting.
64 In the Properties and Formatting window, under Properties, select
Selector.
65 In the Source drop-down list, select Year.
66 Make sure that Graph97, Graph2, and Grid127 display under the Selected
box.
67 Select the Apply selections as a filter check box.
68 Click Apply.
69 Click OK.
70 In the Document Structure pane, under Year, select Graph97.
Graph97 is highlighted in the dashboard layout.
71 In the toolbar that displays by Graph97, click the Graph Zones icon.
A window with the graph zones displays.
72 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
73 In the Dataset Objects pane, drag the Revenue metric and drop it in the
Graph Zone window, under METRICS.
74 In the Dataset Objects pane, drag the Revenue Forecast metric and drop it
in the Graph Zone window, under METRICS, under the Revenue metric.
75 In the Graph Zone window, under SERIES, drag Metrics and drop it under
CATEGORIES.
Your Graph Zones window should look like the image below:
76 Close the Graph Zones window by clicking x.
77 On the Tools menu, select Document Structure.
78 In the Document Structure pane, under Year, select Graph2.
Graph2 is highlighted in the dashboard layout.
79 In the toolbar that displays by Graph2, click the Graph Zones icon.
A window with the graph zones displays.
80 On the Tools menu, select Dataset Objects.
81 In the Dataset Objects pane, drag the following metrics and drop them in
the Graph Zone window, under METRICS: Cost, Freight, Profit, and Max
Revenue per Customer.
Your Graph Zones window should look like the image below:
82 Close the Graph Zones window by clicking x.
83 On the Tools menu, select Document Structure.
84 In the Document Structure pane, under Year, select Grid127.
Grid127 is highlighted in the dashboard layout.
85 On the Tools menu, select Dataset Objects.
86 In the Dataset Objects pane, drag the Month attribute to the row of the grid
(vertical line) and the Transactions Per Customer metric to the column
(horizontal line).
Your Grid should look like the image below:
87 In the dashboard Toolbar, click Save As.
88 In the Save As window, click OK.
89 In the Confirm Overwrite window, click Yes.
90 In the Document Saved window, click Return to Design Mode.
Phase 4: Redesign the Dashboard, Redefining the
Components of PanelStack3
Redefining the components within PanelStack3
1 On the Tools menu, select Document Structure.
2 In the Document Structure pane, expand PanelStack3.
3 Expand Month.
4 Right-click CS Selector and select Properties and Formatting.
5 In the Properties and Formatting window, under Properties, select
Selector.
6 In the Source drop-down list, select Customer State.
7 Make sure that Graph20 displays under the Selected box.
8 Select the Apply selections as a filter check box.
9 Make sure the Show option for All check box is selected.
10 Click OK.
11 In the Document Structure pane, under Month, select Graph20.
12 In the toolbar that displays by Graph20, click the Graph Zones icon.
A window with the graph zones displays. You may need to resize the
Document Structure pane or use the scroll bars to allow the Graph Zone
window to fully display.
13 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
14 In the Dataset Objects pane, drag the Customer State attribute and drop it
in the Graph Zone window, under SERIES.
15 In the Dataset Objects pane, drag the Order Count metric and drop it in the
Graph Zone window, under METRICS.
Your Graph Zones window should look like the image below:
16 Close the Graph Zones window by clicking x.
17 On the Tools menu, select Document Structure.
18 In the Document Structure pane, under Month, right-click Month
Selector, and select Properties and Formatting.
19 In the Properties and Formatting window, under Properties, select
Selector.
20 In the Source drop-down list, select Month.
21 Make sure that Graph20 displays under the Selected box.
22 Select the Apply selections as a filter check box.
23 Click Apply and click OK.
24 In the dashboard Toolbar, click Save As.
25 In the Save As window, click OK.
26 In the Confirm Overwrite window, click Yes.
27 In the Document Saved window, click Return to Design Mode.
28 In the Document Structure pane, under PanelStack3, expand Quarter.
29 Select Quarter.
30 Under Quarter, right-click CS Selector, and select Properties and
Formatting.
31 In the Properties and Formatting window, under Properties, select
Selector.
32 In the Source drop-down list, select Customer State.
33 Make sure that Graph151 displays under the Selected box.
34 Select the Apply selections as a filter check box.
35 Make sure the Show option for All check box is selected.
36 Click Apply and click OK.
37 In the Document Structure pane, under Quarter, select Graph151.
38 In the toolbar that displays by Graph151, click the Graph Zones icon.
A window with the graph zones displays. You may need to resize the
Document Structure pane or use the scroll bars to allow the Graph Zone
window to fully display.
39 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
40 In the Dataset Objects pane, drag the Customer State attribute and drop it
in the Graph Zone window, under SERIES.
41 In the Dataset Objects pane, drag the Order Count metric and drop it in the
Graph Zone window, under METRICS.
42 In the Dataset Objects pane, drag the Month attribute and drop it in the
Graph Zone window, under CATEGORIES.
Your Graph Zones window should look like the image below:
43 Close the Graph Zones window by clicking x.
44 On the Tools menu, select Document Structure.
45 In the Document Structure pane, under Quarter, right-click Quarter
Selector, and select Properties and Formatting.
46 In the Properties and Formatting window, under Properties, select
Selector.
47 In the Source drop-down list, select Quarter.
48 Make sure that Graph151 displays under the Selected box.
49 Select the Apply selections as a filter check box.
50 Click Apply and click OK.
51 In the Document Structure pane, under Quarter, right-click Bar Selector,
and select Properties and Formatting.
52 In the Properties and Formatting window, under Properties, select
Selector.
53 In the Source drop-down list, select Month.
54 Make sure that Graph151 displays under the Selected box.
55 Select the Apply selections as a filter check box.
56 Make sure the Show option for All check box is selected.
57 Click Apply and click OK.
58 In the dashboard Toolbar, click Save As.
59 In the Save As window, click OK.
60 In the Confirm Overwrite window, click Yes.
61 In the Document Saved window, click Return to Design Mode.
62 In the Document Structure pane, under PanelStack3, expand Year.
63 Right-click Year Selector and select Properties and Formatting.
64 In the Properties and Formatting window, under Properties, select
Selector.
65 In the Source drop-down list, select Year.
66 Make sure that Graph163 displays under the Selected box.
67 Select the Apply selections as a filter check box.
68 Click Apply and click OK.
69 In the Document Structure pane, under Year, right-click CS Selector and
select Properties and Formatting.
70 In the Properties and Formatting window, under Properties, select
Selector.
71 In the Source drop-down list, select Customer State.
72 Make sure that Graph163 displays under the Selected box.
73 Select the Apply selections as a filter check box.
74 Make sure the Show option for All check box is selected.
75 Click Apply and click OK.
76 In the Document Structure pane, under Year, select Graph163.
77 In the toolbar that displays by Graph163, click the Graph Zones icon.
A window with the graph zones displays. You may need to resize the
Document Structure pane or use the scroll bars to allow the Graph Zone
window to fully display.
78 On the Tools menu, select Dataset Objects to display the attributes and
metrics to drag and drop in the appropriate graph zones.
79 In the Dataset Objects pane, drag the Customer State attribute and drop it
in the Graph Zone window, under SERIES.
80 In the Dataset Objects pane, drag the Order Count metric and drop it in the
Graph Zone window, under METRICS.
81 In the Dataset Objects pane, drag the Quarter attribute and drop it in the
Graph Zone window, under CATEGORIES.
Your Graph Zones window should look like the image below:
82 Close the Graph Zones window by clicking x.
83 On the Tools menu, select Document Structure.
84 In the Document Structure pane, under Year, right-click Bar Selector, and
select Properties and Formatting.
85 In the Properties and Formatting window, under Properties, select
Selector.
86 In the Source drop-down list, select Quarter.
87 Make sure that Graph163 displays under the Selected box.
88 Select the Apply selections as a filter check box.
89 Make sure the Show option for All check box is selected.
90 Click Apply and click OK.
91 In the dashboard Toolbar, click Save As.
92 In the Save As window, click OK.
93 In the Confirm Overwrite window, click Yes.
94 In the Document Saved window, click Return to Design Mode.
Phase 5: Redesign the Dashboard, Reconfiguring View Filters
Reconfiguring view filters in Grid/Graph objects
1 In the Document Structure pane, under PanelStack1, expand Month.
2 Right-click Graph3 and select Edit View Filter.
3 In the View Filter window, click Add Condition.
4 In the Filter On drop-down list, select Category.
5 Click Select.
6 In the Available box, select Books and Electronics.
7 Click Add to selections.
Books and Electronics have been added to the Selected box.
8 Click Apply.
9 In the View Filter window, click Add Condition.
10 In the Filter On drop-down list, select Year.
11 Click Select.
12 In the Select drop-down list, select Not In List.
13 In the Available box, select 2004.
14 Click Add to selections.
2004 has been added to the Selected box.
15 Click Apply.
16 In the View Filter window, click Add Condition.
17 In the Filter On drop-down list, select Customer Region.
18 Click Select.
19 In the Available box, select Northeast, Mid-Atlantic, and Southeast.
20 Click Add to selections.
Northeast, Mid-Atlantic, and Southeast have been added to the Selected box.
21 Click Apply.
Your View Filter window should look like the image below:
22 Click OK to close the View Filter window.
23 In the Document Structure pane, right-click Graph111, Graph95, Graph106, Graph97, and Graph2, and define their View Filters with the same conditions you used for Graph3.
You can find Graph111 under Month, Graph95 and Graph106 under Quarter, and Graph97 and Graph2 under Year.
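A view filter restricts the rows of an already-executed dataset in memory, without re-querying the warehouse, which is why it is cheaper than a report filter. The three conditions defined above can be pictured with a short Python sketch; this is illustrative only (not MicroStrategy code), and the sample rows are made up:

```python
# Illustrative sketch of view-filter logic: restrict in-memory rows
# without issuing a new query. The sample rows are hypothetical.
rows = [
    {"Category": "Books", "Year": 2004, "Customer Region": "Northeast"},
    {"Category": "Electronics", "Year": 2005, "Customer Region": "Southeast"},
    {"Category": "Music", "Year": 2005, "Customer Region": "Northeast"},
]

# The three conditions: Category In List, Year Not In List, Region In List.
filtered = [
    r for r in rows
    if r["Category"] in {"Books", "Electronics"}
    and r["Year"] not in {2004}
    and r["Customer Region"] in {"Northeast", "Mid-Atlantic", "Southeast"}
]
# Only the Electronics / 2005 / Southeast row satisfies all three conditions.
```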
24 In the dashboard Toolbar, click Save As.
25 In the Save As window, click OK.
26 In the Confirm Overwrite window, click Yes.
27 In the Document Saved window, click Return to Design Mode.
28 In the Document Structure pane, under PanelStack1, expand Month.
29 Right-click Grid119 and select Edit View Filter.
30 In the View Filter window, click Add Condition.
31 In the Filter On drop-down list, select Category.
32 Click Select.
33 In the Available box, select Books and Electronics.
34 Click Add to selections.
Books and Electronics have been added to the Selected box.
35 Click Apply.
36 In the View Filter window, click Add Condition.
37 In the Filter On drop-down list, select Year.
38 Click Select.
39 In the Select drop-down list, select Not In List.
40 In the Available box, select 2004.
41 Click Add to selections.
2004 has been added to the Selected box.
42 Click Apply.
43 In the View Filter window, click Add Condition.
44 In the Filter On drop-down list, select Customer Region.
45 Click Select.
46 In the Available box, select Northeast, Mid-Atlantic, and Southeast.
47 Click Add to selections.
Northeast, Mid-Atlantic, and Southeast have been added to the Selected box.
48 Click Apply.
49 In the View Filter window, click Add Condition.
50 In the Filter On drop-down list, select Payment Method.
51 Click Select.
52 In the Available box, select Visa and Amex.
53 Click Add to selections.
Visa and Amex have been added to the Selected box.
54 Click Apply.
Your View Filter window should look like the image below:
55 Click OK to close the View Filter window.
56 In the Document Structure pane, right-click Grid1 (under Quarter) and
Grid127 (under Year), and define their View Filters with the same
conditions you used for Grid119.
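Each Add Condition step above appends one condition to the view filter, and a row survives only if it satisfies all of them. That model can be sketched as a data-driven evaluator in Python; this is purely illustrative (the tuple representation and `matches` function are hypothetical, not a MicroStrategy API):

```python
# Sketch of the Add Condition model: each view filter condition is an
# (attribute, operator, values) triple; a row must satisfy all of them.
conditions = [
    ("Category", "In List", {"Books", "Electronics"}),
    ("Year", "Not In List", {2004}),
    ("Customer Region", "In List", {"Northeast", "Mid-Atlantic", "Southeast"}),
    ("Payment Method", "In List", {"Visa", "Amex"}),
]

def matches(row):
    for attr, op, values in conditions:
        in_list = row[attr] in values
        # "In List" requires membership; "Not In List" forbids it.
        if (op == "In List" and not in_list) or (op == "Not In List" and in_list):
            return False
    return True
```

The same condition list serves Grid119, Grid1, and Grid127, mirroring how you replicate the filter across those grids in the steps above.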
57 In the dashboard Toolbar, click Save As.
58 In the Save As window, click OK.
59 In the Confirm Overwrite window, click Yes.
60 In the Document Saved window, click Return to Design Mode.
61 In the Document Structure pane, expand PanelStack3, followed by Month.
62 Right-click Graph20 and select Edit View Filter.
63 In the View Filter window, click Add Condition.
64 In the Filter On drop-down list, select Customer Region.
65 Click Select.
66 In the Available box, select Northeast, Mid-Atlantic, and Southeast.
67 Click Add to selections.
Northeast, Mid-Atlantic, and Southeast have been added to the Selected box.
68 Click Apply.
69 In the View Filter window, click Add Condition.
70 In the Filter On drop-down list, select Income Bracket.
71 Click Select.
72 In the Available box, select 21-30K, 31-40K, 41-50K, 51-60K, 61-70K, and 71-80K.
73 Click Add to selections.
74 Click Apply.
75 In the View Filter window, click Add Condition.
76 In the Filter On drop-down list, select Customer State.
77 Click Select.
78 In the Select drop-down list, select Not In List.
79 In the Available box, select Delaware, Florida, Georgia, Idaho, and
Illinois.
80 Click Add to selections.
81 Click Apply.
Your View Filter window should look like the image below:
82 Click OK to close the View Filter window.
83 In the Document Structure pane, right-click Graph151 (under Quarter)
and Graph163 (under Year), and define their View Filters with the same
conditions you used for Graph20.
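Because Graph20, Graph151, and Graph163 share identical conditions, you are effectively applying one predicate to several datasets. A hedged Python sketch of that reuse (attribute names come from the steps above; the sample rows are made up):

```python
# A single predicate reused for several graphs, mirroring how the same
# view filter conditions are defined on Graph20, Graph151, and Graph163.
INCOME_BRACKETS = {"21-30K", "31-40K", "41-50K", "51-60K", "61-70K", "71-80K"}
EXCLUDED_STATES = {"Delaware", "Florida", "Georgia", "Idaho", "Illinois"}
REGIONS = {"Northeast", "Mid-Atlantic", "Southeast"}

def keep(row):
    return (row["Customer Region"] in REGIONS
            and row["Income Bracket"] in INCOME_BRACKETS
            and row["Customer State"] not in EXCLUDED_STATES)  # Not In List

datasets = {
    "Graph20":  [{"Customer Region": "Northeast", "Income Bracket": "21-30K",
                  "Customer State": "New York"}],
    "Graph151": [{"Customer Region": "Southeast", "Income Bracket": "31-40K",
                  "Customer State": "Florida"}],
}
filtered = {name: [r for r in rows if keep(r)] for name, rows in datasets.items()}
```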
84 In the dashboard Toolbar, click Save As.
85 In the Save As window, click OK.
86 In the Confirm Overwrite window, click Yes.
Final Phase: Verify the Performance Gain After Redesigning
the Dashboard
Prepare to time the execution of your newly saved document.
1 In the Document Saved window, click Run newly saved document.
How long does it take to execute now, after you redesigned it?
_______________________________________
After comparing the Flash execution time of the High Performing dashboard with the execution time of the Pre-HP Analysis dashboard, you should notice a significant performance gain.
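If you want a repeatable way to record the two execution times you are comparing, a simple stopwatch pattern is enough. A sketch in Python; the `run_dashboard` function is a hypothetical stand-in for executing the dashboard in Flash mode:

```python
import time

def run_dashboard():
    # Hypothetical stand-in for executing the dashboard in Flash mode.
    time.sleep(0.1)

start = time.perf_counter()   # monotonic clock, suitable for timing
run_dashboard()
elapsed = time.perf_counter() - start
print(f"Dashboard executed in {elapsed:.2f} s")
```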
INDEX
Numerics
64-bit 40, 120
A
Advanced Visualization 178
Aggregate Tables 226
aggregation 38, 216
Alerting 109
Analyzing Test Results
Best Practices 274
Analytical Engine 267
application lifecycle
overview 93
Automatic Resize 180
B
Backup Frequency 137
Base Metrics 222
benchmark 25
Benchmark Testing 256
BI
Layout 93
Performance 24
Bulk Data Insertion 228
C
Cache 40, 41
disabling 45
element caches 54
Load 48
Maintenance 49
object caches 53
overview 41
per-user 46
report caches 42
Sharing 137
Sizing Recommendations 57
types 44
XML 46
Caches
Obsolete 49
capacity planning 25
Client Rendering 151
Client Time 267
Clustering 109, 137
Command Manager 277
committing 123
computational distance 38
Concurrency 108, 127, 255, 262
Concurrency Performance Profiling 264, 269
Conditional Metrics 77
Configuration 113
Configuration Recovery 139
Consolidations 221
CPU
Utilization 118
CPU Usage 257
Custom Groups 157, 221
D
Dashboard
Consolidating Datasets 164
Data Preparation Steps 162
Dataset Techniques 162
Design 281
Design Strategy 282
Design Techniques 172
DHTML 159
Execution Flow 159, 266
Flash
Components 187
High Performance 159
Leveraging View Filters 164
Removing Unused Datasets 164
DashboardViewer.swf 187
Data Architecture
Optimization 242
Data Binary 188
Data conversion 61
data fetch 61
Data fetching 94
Data Movement 243
Data Personalization Method 109
Data Preparation 267
Data Size 258
Data Source 106
Data Transfer 91
Data Types 76
Data Warehouse Access 209
Database
Resource Utilization 244
Database Connections 132, 244
Database Instances
minimizing the number of 244
Database Optimization Layer 227
Database Optimizer 231
Dataset Execution 162
Definition Binary 188
Size Recommendation 188
Degradation Curve 116
Delivery Format 105
Delivery Method 104
De-normalizing 213
DHTML
Performance Topics 184
Direct loading 65
Disk
Utilization 123
Disk Space 49
Disk Storage 121
Disk Swapping 119
Disk Fragmentation 122
Distribution Services
Performance 103
Document Caching 50
Best Practices 52
Drill Paths 158
Dynamic Sourcing 85
Execution 85
Reports 85
Troubleshooting 88
dynamic sourcing
optimization 27
E
Element Caching 54
Assigning Memory 56
Best Practices 55
Enterprise Manager 277
execution time 38
Express Mode 172
Extended Fetch 247
F
Failover Support 137
Flash 159
Performance Topics 187
vs. DHTML 172
Widgets 191
Flash Caching 189
Flash properties file 188
formatting 38
Formatting Density 189
G
Garbage Collection 143
Grids 177
consolidating 178
Group By 187
H
Health Center 28, 277
Heap Size 144
High Concurrency 246
High Performance Initiative 24
History List
Storage 138
History List Usage
optimizing 130
HTTP Compression 99
I
Idle User Sessions 126
Incremental Cube Refresh 78
Incremental Fetch 157
grids 185
Grouping 186
Indexes 214
In-memory Cube 40, 59
as Dataset 170
Data Normalization Techniques 63
Design 86
Example 60
Loading 73
Peak Memory Usage 62
Publication 94
Publication Process 61
Size 74
Recommendations 75
Tuning Techniques 77
Size Constraints 73
Sizing 73
When to Use 68
Integrity Manager 277
Intelligence Server
Configuration 126
Intermediate Table 67
Types 235
Internationalization 76
iPad Application 271
J
JavaScript 147, 267
JVM Settings 143
K
KPIs 263
L
Layouts 180
Links 168
Linux 123
Load Balancing 139
Load Testing 256
LoadRunner 271
Logging 136
Logical Query Layer 224
Lookup Tables 217
duplicating 245
M
measurements 268
Memory 40, 119
Usage 73, 257
Utilization 121
metadata
performance 27
methodology 259
MicroStrategy Benchmark Tests 273
MicroStrategy Platform 258
Multi Source 243
Multi Source Option 243
Multi-Process 248
Multi-Threaded 248
N
Network
Key Concepts 96
Performance 93
Case Example 94
Recommendations 97
Speed 66
Terminology 96
Time 267
Traffic 137
traffic 259
Normalization 63, 237
O
Object Caching 53
Best Practices 53
ODBC 246
On-Demand Fetching of Panels 184
Operating Systems 123
optimizing 264
Oracle 249
ARRAYSIZE 249
Outer Joins 66
P
Panel 189
Panel Stacks 189
parameterized insertions 229
Parameterized Queries 228
Partitioning 215
PDF 159
Performance Counters 268
Performance Degradation Curve 265, 269
Real Examples 271
Performance Issues 272
Performance Results 95
Performance Stack Diagram 265, 267
Performance Testing 255
Methodology 253, 260
Private Bytes 121
Processing Speed 117
Processor 115
profiling 264
Project Failover 138
Q
query execution 61
Query Execution Engine 267
Query Optimization Layer 238
Query Performance 209
Optimize 210
Quick Switch 165
R
RAID
Configurations 122
RAM 40
Read-Only Sources 244
Regression Testing 256
rendering 38
Report
Ad-hoc 69
Configuration Techniques 154
Data Population Method 236
Design 86
Execution Flow 153
High Performance 153
optimization 219
Report Cache 42
Allocating Memory 47
Best Practices 44
Matching Criteria 44
Report Cache Flow 42
reports
overlapping 69
Re-profiling 275
Resource Management 129
resource usage 255
Response Time 41, 74, 256, 262, 269
Degradation 124
rendering 257
System 257
S
Scalability 74, 255
Scalability Lab 25
Schema Design
optimization 219
Selectors 282
filtering 174
Standard 175
serialization 61
Server Load 136
Server Specifications 115
Set Operators 234
SilkPerformer 271
Single User
Performance Profiling 265
Test Outcomes 264
Single User Performance 264
Single User Testing 264
Smart Metrics 184
SQL Generation 61, 267
Algorithm 210
Optimization 224
SQL Hints 231
SQL Passes 240
Merging 241
Static Content 146
Statistics 136
Stop Watch 268
stress testing 255
Submission Rate 269
Sub-Query
Types 232
Subtotals 182
Summary Metrics 222
Swapping 73
System
Architecture 113
T
Table Keys 219
Temporary Table 242
Testing Tools 271
Text Boxes 177, 186
Thresholds 182
Throughput 74, 257, 262, 269
Comparison 123
Transformation Formulas 238
Tuning 25, 246
U
Unload 48
User Load 262
User Management 126
User Privileges 127
Utilization 118, 262
V
VARCHAR 76
View Report
Execution 85
Views 216
Virtual Dataset Execution 163
Virtualization 124
Visualization 177
Optimizing 192
VLDB Properties 227
VLDB Setting 67
W
Warehouse Schema 213
Web Pool 144
Web Proxy Server 102
Web Server
Configuration 143
Web Server processing time 267
Windows 123
Working Set Memory
optimizing 129
Working Set Report 156
Workload Management 132
Workshop 281
X
XML Generation 267